source_id | question | response | metadata
---|---|---|---|
336,565 | The last time I encountered a SOAP-based service was during my internship at a financial firm in 2013. That was the time when I started my career in IT. I remember having some study material about SOAP in one of my engineering courses. Outside of that, I haven't used SOAP much during my career. I am asking this since the question of "Difference between SOAP and REST" came up in one of my recent interviews. From what I know (and what I found on Google), SOAP is a protocol with tight coupling between client and server for information interchange which is closely related to business logic, whereas REST is a more flexible, stateless architecture for data transfer. Can someone please correct me if I am wrong about this difference between SOAP and REST? Also, what is the present-day significance of SOAP? Are people still developing new SOAP-based APIs, or is it mostly legacy now? | REST is indeed an architectural style. SOAP is a data protocol. The distinction is important; you cannot compare them directly. The primary purpose of REST is to represent resources on the Internet, and to provide mechanisms for discovering them. In contrast, SOAP is used for communicating structured data between computers, and that's all it really does. Note that you don't actually need REST to create a client/server relationship between two computers on the Internet. All you need is a mechanism that transfers JSON or XML, and you don't even need that if you're willing to be incompatible with everyone else. Nevertheless, SOAP has fallen out of favor for new, public-facing APIs, though it is still commonly used for B2B applications because you can define a "data contract" with it. JSON web services have the virtue of being rather lightweight and flexible, and since JavaScript recognizes JSON natively, it's a natural choice for browsers. But none of that has much to do with REST, really. Further Reading Is REST better than SOAP? (good article, even though it incorrectly calls REST a protocol). The Richardson Maturity Model | {
"source": [
"https://softwareengineering.stackexchange.com/questions/336565",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/254474/"
]
} |
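
To make the contrast in the answer above concrete, here is a minimal Python sketch; the endpoint URLs and the message shape are hypothetical illustrations, not taken from the question. SOAP posts an XML envelope describing an operation to a single service endpoint, while a REST client addresses a resource and transfers its representation, typically as JSON.

```python
# Hypothetical endpoints; the requests are only prepared, not sent, so the sketch
# runs without a network connection.
import requests

soap_envelope = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Body>
    <GetAccountBalance><AccountId>42</AccountId></GetAccountBalance>
  </soap:Body>
</soap:Envelope>"""

# SOAP: one service endpoint, the operation is described inside the XML envelope.
soap_request = requests.Request(
    "POST",
    "https://example.com/soap/AccountService",
    data=soap_envelope,
    headers={"Content-Type": "application/soap+xml"},
).prepare()

# REST: the account is addressed as a resource; its representation travels as JSON.
rest_request = requests.Request(
    "GET",
    "https://example.com/api/accounts/42",
    headers={"Accept": "application/json"},
).prepare()

print(soap_request.method, soap_request.url)  # POST https://example.com/soap/AccountService
print(rest_request.method, rest_request.url)  # GET https://example.com/api/accounts/42
```
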
336,713 | I still consider myself as an apprentice programmer, so I'm always looking to learn a "better" way for typical programming. Today, my coworker has argued that my coding style does some unnecessary work, and I want to hear opinions from others.
Typically, when I design a class in an OOP language (usually C++ or Python), I separate the initialization into two different parts: class MyClass1 {
public:
MyClass1(type1 arg1, type2 arg2, type3 arg3);
void initMyClass1();
private:
type1 param1;
type2 param2;
type3 param3;
type4 anotherParam1;
};
// Only the direct assignments from the input arguments are done in the constructor
MyClass1::MyClass1(type1 arg1, type2 arg2, type3 arg3)
: param1(arg1)
, param2(arg2)
, param3(arg3)
{}
// Any other procedure is done in a separate initialization function
void MyClass1::initMyClass1() {
// Validate input arguments before calculations
if (checkInputs()) {
// Do some calculations here to figure out the value of anotherParam1
anotherParam1 = someCalculation();
} else {
printf("Something went wrong!\n");
ASSERT(FALSE);
}
} (or, the Python equivalent) class MyClass1:
    def __init__(self, arg1, arg2, arg3):
        self.arg1 = arg1
        self.arg2 = arg2
        self.arg3 = arg3
        # optional
        self.anotherParam1 = None
    def initMyClass1(self):
        if self.checkInputs():
            self.anotherParam1 = self.someCalculation()
        else:
raise "Something went wrong!" What is your opinion about this approach? Should I refrain from splitting the initialization process? The question is not only limited to C++ and Python, and answers for other languages are also appreciated. | Though sometimes it is problematical, there are many advantages to initializing everything in the constructor: If there is going yo be an error, it happens as quickly as possible and is easiest to diagnose. For example, if null is an invalid argument value, test and fail in the constructor. The object is always in a valid state. A coworker can't make a mistake and forget to call initMyClass1() because it isn't there . "The cheapest, fastest, and most reliable components are those that aren't there." If it makes sense, the object can be made immutable which has lots of advantages. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/336713",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/254678/"
]
} |
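
A minimal Python sketch of what the answer above recommends; the names mirror the question's example, but the validation rule and the calculation are placeholders, not taken from the original code.

```python
# Everything happens in the constructor: validate early, compute derived state,
# and never expose a half-initialized object.
class MyClass1:
    def __init__(self, arg1, arg2, arg3):
        # Fail as early as possible: reject bad input before the object exists.
        if arg1 is None or arg2 < 0:
            raise ValueError("invalid arguments: arg1 must be set, arg2 >= 0")
        self.param1 = arg1
        self.param2 = arg2
        self.param3 = arg3
        # Derived state is computed here too, so there is no separate init phase.
        self.another_param1 = self._some_calculation()

    def _some_calculation(self):
        return self.param2 * 2  # placeholder for the real calculation

# Callers can never obtain an object that still needs initMyClass1():
obj = MyClass1("a", 3, "c")
print(obj.another_param1)  # 6
```
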
337,004 | I'm not sure what to do with the following: We take data from an external tool within our own tool. This data is written in Dutch. We are writing our Java code in English. Should we then translate this Dutch to English or keep it Dutch? For example, we have 2 departments: Bouw (Construction in English) & Onderhoud (Maintenance in English). Would it then be logical to create: public enum Department { BOUW, ONDERHOUD } or: public enum Department { CONSTRUCTION, MAINTENANCE } or even: public enum Afdeling { BOUW, ONDERHOUD } (afdeling is Department in Dutch) | English is a lingua franca/lowest common denominator for a reason. Even if the reason is conceptually as weak as "Everybody does it", that's still a rather important reason. Going against common practice means that you have to understand Dutch to make sense of the data structures in your software. There's nothing wrong with Dutch, but the probability that any given engineer who'll have to interact with the code base speaks it is still lower than that for English. Therefore, unless you're a Dutch-only shop, and don't plan to expand internationally ever, it's almost always a good idea to keep your codebase monolingual, and use the most popular coding language. Note: This advice applies to program code only. User data should definitely not be translated, but processed "as is". Even if you have a customer "Goldstein", clearly you should not store their name as "golden stone". The trouble is that there is a continuum of terms between "user-supplied, don't touch" and "code fragment, use English at all times". Customer names are very near the former end of the spectrum, Java variables near the latter end. Constants for enum values are slightly farther away, particularly if they denote well-known, unique external entities (like your departments). If everyone in your organisation uses the Dutch terms for the departments, you don't plan on confronting anyone with the code base who doesn't, and the set of existing departments changes rarely, then using the accepted names of the department may make more sense for enum constants than for local variables. I still wouldn't do it, though. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/337004",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/255074/"
]
} |
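
One way to act on this answer, sketched in Python rather than the question's Java: identifiers stay in English while the Dutch strings delivered by the external tool are kept untouched as values. The department names come from the question; the mapping itself is illustrative.

```python
# Code in English, external Dutch data "as is", mapped once at the boundary.
from enum import Enum

class Department(Enum):
    CONSTRUCTION = "Bouw"
    MAINTENANCE = "Onderhoud"

incoming = "Bouw"            # value as it arrives from the external tool
dept = Department(incoming)  # translate at the boundary, nowhere else
print(dept)                  # Department.CONSTRUCTION
print(dept.value)            # "Bouw" -- the original external value is never altered
```
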
337,081 | When I review database models for RDBMS, I'm usually surprised to find little to no constraints (aside PK/FK). For instance, percentage is often stored in a column of type int (while tinyint would be more appropriate) and there is no CHECK constraint to restrict the value to 0..100 range. Similarly on SE.SE, answers suggesting check constraints often receive comments suggesting that the database is the wrong place for constraints. When I ask about the decision not to implement constraints, team members respond: Either that they don't even know that such features exist in their favorite database. It is understandable from programmers using ORMs only, but much less from DBAs who claim to have 5+ years experience with a given RDBMS. Or that they enforce such constraints at application level, and duplicating those rules in the database is not a good idea, violating SSOT. More recently, I see more and more projects where even foreign keys aren't used. Similarly, I've seen a few comments here on SE.SE which show that the users don't care much about referential integrity, letting the application handle it. When asking teams about the choice not to use FKs, they tell that: It's PITA, for instance when one has to remove an element which is referenced in other tables. NoSQL rocks, and there are no foreign keys there. Therefore, we don't need them in RDBMS. It's not a big deal in terms of performance (the context is usually small intranet web applications working on small data sets, so indeed, even indexes wouldn't matter too much; nobody would care if a performance of a given query passes from 1.5 s. to 20 ms.) When I look at the application itself, I systematically notice two patterns: The application properly sanitizes data and checks it before sending it to the database. For instance, there is no way to store a value 102 as a percentage through the application. The application assumes that all the data which comes from the database is perfectly valid. That is, if 102 comes as a percentage, either something, somewhere will crash, or it will simply be displayed as is to the user, leading to weird situations. While more than 99% of the queries are done by a single application, over time, scripts start to appear—either scripts ran by hand when needed, or cron jobs. Some data operations are also performed by hand on the database itself. Both scripts and manual SQL queries have a high risk of introducing invalid values. And here comes my question: What are the reasons to model relational databases without check constraints and eventually even without foreign keys? For what it's worth, this question and the answers I received (especially the interesting discussion with Thomas Kilian) led me to write an article with my conclusions on the subject of database constraints . | It is important to distinguish between different use cases for databases. The traditional business database is accessed by multiple independent applications and services and perhaps directly by authorized users. It is critical to have a well-thought out schema and constraints at the database level, so a bug or oversight in a single application does not corrupt the database. The database is business-critical which means inconsistent or corrupt data may have disastrous results for the business. The data will live forever while applications come and go. These are the places which may have a dedicated DBA to ensure the consistency and health of the database. 
But there are also systems where the database is tightly integrated with a single application. Stand-alone applications or web application with a single embedded database. As long as the database is exclusively accessed by a single application, you could consider constraints redundant - as long as the application works correctly. These systems are often developed by programmers with a focus on application code and perhaps not a deep understanding of the relational model. If the application uses an ORM the constraints might be declared at the ORM level in a form more familiar to application programmers. In the low end we have PHP applications using MySQL, and for a long time MySQL did not support basic constraints at all, so you had to rely on the application layer to ensure consistency. When developers from these different backgrounds meet you get a culture clash. Into this mix we get the new wave of distributed "cloud storage" databases. It is very hard to keep a distributed database consistent without losing the performance benefit, so these databases often eschew consistency checks at the database level and basically lets the programmers handle it at the application level. Different application have different consistency requirements, and while Googles search engine prioritize availability over consistency across their servers, I'm willing to bet their payroll system runs on a relational database with lots of constraints. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/337081",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6605/"
]
} |
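
A self-contained sketch of the two kinds of database-level constraints discussed above, using SQLite only because it ships with Python (table names and the 0..100 rule mirror the question's percentage example). Ad-hoc scripts and manual queries that bypass the application are stopped by the same checks.

```python
# CHECK and FOREIGN KEY constraints enforced by the database itself.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs this to enforce FKs

conn.execute("""
    CREATE TABLE project (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL
    )""")
conn.execute("""
    CREATE TABLE task (
        id         INTEGER PRIMARY KEY,
        project_id INTEGER NOT NULL REFERENCES project(id),
        progress   INTEGER NOT NULL CHECK (progress BETWEEN 0 AND 100)
    )""")

conn.execute("INSERT INTO project (id, name) VALUES (1, 'demo')")
conn.execute("INSERT INTO task (project_id, progress) VALUES (1, 50)")   # accepted

try:
    conn.execute("INSERT INTO task (project_id, progress) VALUES (1, 102)")
except sqlite3.IntegrityError as e:
    print("rejected by CHECK constraint:", e)

try:
    conn.execute("INSERT INTO task (project_id, progress) VALUES (999, 10)")
except sqlite3.IntegrityError as e:
    print("rejected by FOREIGN KEY constraint:", e)
```
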
337,153 | For example, the following function loops through an array which contains the name and errors of an input field. It does this by checking the name of the validating field and then pushing the error info to the invalid fields array. Is it better to be brief and write this: addInvalidField (field, message) {
const foundField = this.invalidFields.find(value => {
return value.name === field.name
})
const errors = foundField.errors
if (!errors.some(error => error.name === message)) {
errors.push({ name: message, message })
}
}, Or be more specific like this? addInvalidField (validatingField, message) {
const foundField = this.invalidFields.find(invalidField => {
return validatingField.name === invalidField.name
})
if (!foundField.errors.some(foundFieldError => foundFieldError.name === message)) {
foundField.errors.push({ name: message, message })
}
}, | If brevity can be sacrificed for clarity, it should. But if verbosity can be sacrificed for clarity, even better. addInvalidField (field, message) {
const foundInvalidField = this.invalidFields.find(x => x.name === field.name)
if (!foundInvalidField.errors.some(x => x.name === message)) {
foundInvalidField.errors.push({ name: message, message })
}
}, When a variable only lives as long as one line it can be very short indeed. FoundInvalidField is used in three lines and is the focus of this work. It deserves an explanatory name. As always, context is king. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/337153",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/227145/"
]
} |
337,186 | I'm developing a library intended for public release. It contains various methods for operating on sets of objects - generating, inspecting, partitioning and projecting the sets into new forms. In case it's relevant, it's a C# class library containing LINQ-style extensions on IEnumerable , to be released as a NuGet package. Some of the methods in this library can be given unsatisfiable input parameters. For example, in the combinatoric methods, there is a method to generate all sets of n items that can be constructed from a source set of m items. For example, given the set: 1, 2, 3, 4, 5 and asking for combinations of 2 would produce: 1, 2 1, 3 1, 4 etc... 5, 3 5, 4 Now, it's obviously possible to ask for something that can't be done, like giving it a set of 3 items and then asking for combinations of 4 items while setting the option that says it can only use each item once. In this scenario, each parameter is individually valid: The source collection is not null, and does contain items The requested size of combinations is a positive nonzero integer The requested mode (use each item only once) is a valid choice However, the state of the parameters when taken together causes problems. In this scenario, would you expect the method to throw an exception (eg. InvalidOperationException ), or return an empty collection? Either seems valid to me: You can't produce combinations of n items from a set of m items where n > m if you're only allowed to use each item once, so this operation can be deemed impossible, hence InvalidOperationException . The set of combinations of size n that can be produced from m items when n > m is an empty set; no combinations can be produced. The argument for an empty set My first concern is that an exception prevents idiomatic LINQ-style chaining of methods when you're dealing with datasets that may have unknown size. In other words, you might want to do something like this: var result = someInputSet
.CombinationsOf(4, CombinationsGenerationMode.Distinct)
.Select(combo => /* do some operation to a combination */)
.ToList(); If your input set is of variable size, this code's behaviour is unpredictable. If .CombinationsOf() throws an exception when someInputSet has fewer than 4 elements, then this code will sometimes fail at runtime without some pre-checking. In the above example this checking is trivial, but if you're calling it halfway down a longer chain of LINQ then this might get tedious. If it returns an empty set, then result will be empty, which you may be perfectly happy with. The argument for an exception My second concern is that returning an empty set might hide problems - if you're calling this method halfway down a chain of LINQ and it quietly returns an empty set, then you may run into issues some steps later, or find yourself with an empty result set, and it may not be obvious how that happened given that you definitely had something in the input set. What would you expect, and what's your argument for it? | Return an Empty Set I would expect an empty set because: There are 0 combinations of 4 numbers from the set of 3 when i can only use each number once | {
"source": [
"https://softwareengineering.stackexchange.com/questions/337186",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/81568/"
]
} |
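
For what it's worth, Python's standard library takes the same position as this answer: asking for more items than the source set can supply yields an empty result rather than an exception.

```python
# Unsatisfiable combination sizes simply produce nothing.
from itertools import combinations

print(list(combinations([1, 2, 3], 2)))  # [(1, 2), (1, 3), (2, 3)]
print(list(combinations([1, 2, 3], 4)))  # [] -- no error, just no combinations
```
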
337,295 | Some languages claim to have "no runtime errors" as a clear advantage over other languages that has them. I am confused on the matter. Runtime error is just a tool, as far as I know, and when used well: you can communicate "dirty" states (throwing at unexpected data) by adding stack you can point to the chain of error you can distinguish between clutter (e.g. returning an empty value on invalid input) and unsafe usage that needs attention of a developer (e.g. throwing exception on invalid input) you can add detail to your error with the exception message providing further helpful details helping debugging efforts (theoretically) On the other hand I find really hard to debug a software that "swallows" error. E.g. try {
myFailingCode();
} catch {
// no logs, no crashes, just a dirty state
} So the question is: what is the strong, theoretical advantage of having "no runtime errors"? Example https://guide.elm-lang.org/ No runtime errors in practice. No null. No undefined is not a function. | Exceptions have extremely limiting semantics. They must be handled exactly where they are thrown, or in the direct call stack upwards, and there is no indication to the programmer at compile time if you forget to do so. Contrast this with Elm where errors are encoded as Results or Maybes , which are both values . That means you get a compiler error if you don't handle the error. You can store them in a variable or even a collection to defer their handling to a convenient time. You can create a function to handle the errors in an application-specific manner instead of repeating very similar try-catch blocks all over the place. You can chain them into a computation that succeeds only if all its parts succeeds, and they don't have to be crammed into one try block. You are not limited by the built-in syntax. This is nothing like "swallowing exceptions." It's making error conditions explicit in the type system and providing much more flexible alternative semantics to handle them. Consider the following example. You can paste this into http://elm-lang.org/try if you would like to see it in action. import Html exposing (Html, Attribute, beginnerProgram, text, div, input)
import Html.Attributes exposing (..)
import Html.Events exposing (onInput)
import String
main =
beginnerProgram { model = "", view = view, update = update }
-- UPDATE
type Msg = NewContent String
update (NewContent content) oldContent =
content
getDefault = Result.withDefault "Please enter an integer"
double = Result.map (\x -> x*2)
calculate = String.toInt >> double >> Result.map toString >> getDefault
-- VIEW
view content =
div []
[ input [ placeholder "Number to double", onInput NewContent, myStyle ] []
, div [ myStyle ] [ text (calculate content) ]
]
myStyle =
style
[ ("width", "100%")
, ("height", "40px")
, ("padding", "10px 0")
, ("font-size", "2em")
, ("text-align", "center")
] Note the String.toInt in the calculate function has the possibility of failing. In Java, this has the potential to throw a runtime exception. As it reads user input, it has a fairly good chance of it. Elm instead forces me to deal with it by returning a Result , but notice I don't have to deal with it right away. I can double the input and convert it to a string, then check for bad input in the getDefault function. This place is much better suited for the check than either the point where the error occurred or upwards in the call stack. The way the compiler forces our hand is also much finer-grained than Java's checked exceptions. You need to use a very specific function like Result.withDefault to extract the value you want. While technically you could abuse that sort of mechanism, there isn't much point. Since you can defer the decision until you know a good default/error message to put, there's no reason not to use it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/337295",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/102537/"
]
} |
337,345 | I became the scrum master of a newly established team, that is responsible for creating a software AND maintaining other deployed application. So basically each team member has development & operations tasks. I have been observing how they work the past couple of weeks and I noticed that team is having troubles in coordinating these tasks. when a developer is concentrating on coding he get interrupted to fix an issue raised in production, and it's hard for him to focus again on previous task. I have tried allocating % of developer time for operations work but apparently this isn't resolving the issue. I'm interested in hearing from scrum masters who came across this situation before, how did you manage it and what are your recommendations? | This problem is as old as scrum. There is a solution, but you won't like it. Put new tasks on the backlog. Don't interrupt developers. Wait for the next sprint. Putting your devs in more than one scrum, having two separate backlogs or assigning only a percentage of their time to the sprint all work against what scrum is trying to achieve, i.e. a consistent flow of completed tasks. If you try those type of things you basically go back to 'chaos' or 'JFDI' development methodologies with all its attendant problems e.g. Developer has ten tasks on the go at any one time. No-one knows what they are working on or when it will be finished. Unknown time to finish project, because it depends on what other projects are happening at the same time. No consistent view of project priority. Other managers divert developers to their pet projects. Of course the usual response to this advice is "But I can't do that! The business needs those production bugs to be fixed ASAP!" But that is not really true. If you really have that many actual bugs that are affecting the business to this extent then you need to get professional and improve your testing. Just work on bugs and automated tests until you have fixed them all. Hire a QA team and test the hell out of all new releases. What is more likely though is one of the following: The bugs are operational problems, running out of disk space, no DR, no Backups, no failover etc. Get an OPS team. The bugs are users not understanding how the system should work "This happened! is it a bug?". Get a helpdesk and train your users, write documentation. Fear of rollback. You launched a new feature and it broke something, don't try to rush out a fix. Roll back to the previous version and put the bugs on the backlog. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/337345",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/254638/"
]
} |
I have been using SonarLint for Eclipse recently, and it has helped me a lot. However, it raised a question for me about cyclomatic complexity. SonarLint considers a C.C. of 10 acceptable, and there are some cases where I am beyond it, by about 5 or 6 units. Those parts are related to mappers where the values rely on different variables, for example: Field A relies on String sA; Field B relies on String sB; Field C relies on String sC; etc ... I have no other choice than putting an if for each field. This is not my choice (fortunately) but an already existing and complex system that I cannot change by myself. The core of my question is: why is it so important not to have too high a C.C. in a single method? If you move some of your conditions into one or more sub-methods to reduce the complexity, it does not reduce the cost of your overall function; it just moves the problem elsewhere, I guess? (Sorry for small mistakes, if any.) EDIT My question does not refer to global cyclomatic complexity, but only to single-method complexity and method splitting (I have a rough time explaining what exactly I mean, sorry). I am asking why it is acceptable to split your conditions into smaller methods if they still belong to a 'super method', which will just execute every sub-method, thus adding complexity to the algorithm. The second link, however (about the anti-pattern), is of great help. | The core thing here: "brain capacity". You see, one of the main functions of code is ... to be read.
And code can be easy to read and understand; or hard. And having a high CC simply implies a lot of "levels" within one method. And that implies: you, as a human reader will have a hard time understanding that method. When you read source code, your brain automatically tries to put things into perspective: in other words - it tries to create some form of "context". And when you have a small method (with a good name) that only consists of a few lines, and very low CC; then your brain can easily accept this "block". You read it, you understand it; DONE. On the other hand, if your code has high CC, your brain will spend many many "cycles" more to deduct what is going on. Another way of saying that: you should always lean towards preferring a complex network of simple things over a simple network of complex things. Because your brain is better at understanding small things. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/337362",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/224662/"
]
} |
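
To make the "small blocks are easier to absorb" point concrete, here is a small invented example (not taken from the question's mapper code): the total number of branches is unchanged, but each extracted function is a block the reader can understand on its own.

```python
# One method carrying several unrelated concerns and all of their branches:
def handle_order(order):
    if order["total"] < 0:
        raise ValueError("negative total")
    if order["country"] == "US":
        tax = order["total"] * 0.07
    else:
        tax = order["total"] * 0.20
    if order["express"]:
        shipping = 20
    else:
        shipping = 5
    return order["total"] + tax + shipping

# Same branches overall, but split into named pieces the brain can take in one at a time:
def validate(order):
    if order["total"] < 0:
        raise ValueError("negative total")

def tax_for(order):
    return order["total"] * (0.07 if order["country"] == "US" else 0.20)

def shipping_for(order):
    return 20 if order["express"] else 5

def handle_order_refactored(order):
    validate(order)
    return order["total"] + tax_for(order) + shipping_for(order)
```
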
337,371 | I'm helping to manage an external team who are starting to develop new versions of some existing products. Historically, this team has always used a model of a single project in a single solution for about 30 modules in Visual Studio which go together to produce a deployable build. This is having a detrimental impact on build reliability and quality, because they don't always send us the most up-to-date source code. We're trying to press them to unify all referenced code in to a single solution, but we're getting some resistance - specifically they keep talking about interdependence between modules (read "projects" in Visual Studio) being increased if everything is placed in a single solution file. None of the code in the separate solutions is used elsewhere. I insist this is nonsense and good development patterns will avoid any such problem. The team in question also perform bugfix and new feature development on an existing product, the experience of which has been fraught to say the least and suffers from exactly the same problem of splitting over multiple solutions. We've been refused access to their source control ( TFS ), and the approach we're taking to unify the codebase is to try and at least reduce the number of missing updates and more than occasional regressions (yes, fixed bugs are getting re-introduced in to the product) by saying "send us a ZIP of the entire solution folder so we can unzip, open it in Visual Studio, and press F5 for testing". In terms of general structure and quality, the code is pretty poor and hard to support. This experience is why I'm intent on getting the working processes right as early in the development cycle as possible. Is there something I'm missing? Is there ever a good reason to keep all that code separated out? For my money it would have to be so compelling a reason that it would be common knowledge, but I'm more than willing to concede that I don't know everything. | You don't need to tell them how to structure their projects. Instead, make it a hard requirement that you can build the system from source by running one single script, without getting any errors. If that scripts runs Visual Studio or Msbuild or some other tools, and if those are called once, 50 or 100 times should not matter. That way, you get the same "test for completeness" of the code as you would get by putting everything into a single solution. Of course, that script does not tell you if the team has really checked out the latest version from their source control, but having the whole code in one solution would not check that, either. As a reply to "interdependence between modules being increased if everything is placed in a single solution file" - this is proveable nonsense, since adding projects to a solution does not change any of the dependencies between projects, the dependencies are a result from one project file referencing another, which is completely independent from which solution references which project file. No one stops that team to have both - a single solution which references all projects, and also individual solutions each one referencing only one project. Nevertheless I would suggest to add a build script. This has benefits even when there is only one solution file. For example, it allows one to run a VS build with a preferred configuration, let you copy the final files for deployment (and nothing more) to a "deploy" folder, and may run some other tools and steps to make the build complete. See also F5 is not a build process! , and The Joel Test . 
| {
"source": [
"https://softwareengineering.stackexchange.com/questions/337371",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/255567/"
]
} |
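
A sketch of what the "one single script builds everything from source" requirement could look like; the solution paths and the msbuild arguments are placeholders to adapt to the team's layout, not a prescription.

```python
# Build every solution in one go; any failure aborts the script with a clear error.
import subprocess
import sys

SOLUTIONS = [
    r"src\ModuleA\ModuleA.sln",
    r"src\ModuleB\ModuleB.sln",
    # ... one entry per solution the deliverable is built from
]

def build(solution: str) -> None:
    # check=True turns any compile error into a script failure.
    subprocess.run(
        ["msbuild", solution, "/t:Rebuild", "/p:Configuration=Release"],
        check=True,
    )

if __name__ == "__main__":
    for sln in SOLUTIONS:
        print(f"building {sln} ...")
        build(sln)
    print("all solutions built from source")
    sys.exit(0)
```
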
337,413 | One of the OOP principles I came across is: -Encapsulate what varies. I understand what the literal meaning of the phrase is i.e. hide what varies. However, I don't know how exactly would it contribute to a better design. Can someone explain it using a good example? | You can write code that looks like this: if (pet.type() == dog) {
pet.bark();
} else if (pet.type() == cat) {
pet.meow();
} else if (pet.type() == duck) {
pet.quack()
} or you can write code that looks like this: pet.speak(); If what varies is encapsulated then you don't have to worry about it. You just worry about what you need and whatever you're using figures out how to do what you really need based on what varied. Encapsulate what varies and you don't have to spread code around that cares about what varied. You just set pet to be a certain type that knows how to speak as that type and after that you can forget which type and just treat it like a pet. You don't have to ask which type. You might think type is encapsulated because a getter is required to access it. I don't. Getter's don't really encapsulate. They just tattle when someone breaks your encapsulation. They're a nice decorator like aspect oriented hook that is most often used as debugging code. No matter how you slice it, you're still exposing type. You might look at this example and think I'm conflating polymorphism and encapsulation. I'm not. I'm conflating "what varies" and "details". The fact that your pet is a dog is a detail. One that might vary for you. One that might not. But certainly one that might vary from person to person. Unless we believe this software will only ever be used by dog lovers it's smart to treat dog as a detail and encapsulate it. That way some parts of the system are blissfully unaware of dog and wont be impacted when we merge with "parrots are us". Decouple, separate, and hide details from the rest of the code. Don't let knowledge of details spread through your system and you'll be following "encapsulate what varies" just fine. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/337413",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/251737/"
]
} |
337,472 | I was introduced to genetic algorithms recently by this MSDN article , in which he calls them combinatorial evolution, but it seems to be the same thing, and am struggling to understand how combining two potential solutions will always produce a new solution that is at least as good as its parents. Why is this so? Surely combining might produce something worse. As far as I understand it, the algorithm is based on the concept that when a male and female of a species produce offspring, those offspring will have characteristics of both parents. Some combinations will be better, some worse and some just as good. The ones that are better (for whatever defintion of "better" is appropriate) stand more chance of surviving and producing offpsring that have the improved characteristics. However, there will be combinations that are weaker. Why isn't this an issue with GA? | A genetic algorithm tries to improve at each generation by culling the population. Every member is evaluated according to a fitness function, and only a high-scoring portion of them is allowed to reproduce. You're right, though: there is no guarantee that the next generation will improve on its predecessor's score. Consider Dawkins' weasel program : "evolving" the string "Methinks it is like a weasel" . Starting from a population of random strings, the fitness function evaluates the closest textual match, which is perturbed to produce the next generation. With a simple crossover reproduction, two high-scoring strings that are combined could very easily produce lower-scoring offspring. Even "asexual" random mutation of a single high-fitness string could lower the child's fitness. It's worth noting, I think, that this is not necessarily a flaw. With this kind of search, there is the idea of local maxima . A member of the population might represent a solution that's not the optimal result, but is the best that can be achieved without getting worse on the way. Imagine that the fitness function for the weasel program doesn't only find the edit distance, but has some notion of "word", and tests whether the last word of the string is the name of an animal. Any animal name scores well, but "weasel" gets a big bonus. Now what happens if "Methinks it is like a walrus" is evolved? It scores well. Not as well as the ultimate target string, but better than "Methinks it is like a walrut" or other close variations that could be reached by a single step of mutation. The walrus string is a local maximum, and the search can get stuck there unless the program allows the next generation's score to be worse. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/337472",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/123358/"
]
} |
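
A compact Python sketch of the weasel program the answer refers to; the population size and mutation rate are arbitrary choices for illustration.

```python
# Mutate the current best string, score the children, keep the fittest.
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate):
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(parent, rate=0.05):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

random.seed(0)
best = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while fitness(best) < len(TARGET):
    children = [mutate(best) for _ in range(100)]
    # Keeping the parent in the pool ("elitism") stops the score from ever dropping;
    # remove it from the max() below and a generation can genuinely get worse,
    # which is exactly the answer's point about offspring not always improving.
    best = max(children + [best], key=fitness)
    generation += 1

print(f"reached the target in {generation} generations")
```
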
337,639 | Which caveats should I be aware of while localizing numbers in my front-end application? Example: In Brazilian Portuguese (pt-BR) we split thousands with dots and decimals with commas. In US English (en-US) it's the other way around. In pt-BR we group the digits by thousands, the same as en-US. But reading about Indian English (en-IN) today I came across this gem: The Indian numbering system is preferred for digit grouping. When written in words, or when spoken, numbers less than 100,000/100 000 are expressed just as they are in Standard English. Numbers including and beyond 100,000 / 100 000 are expressed in a subset of the Indian numbering system. https://en.wikipedia.org/wiki/Indian_English#Numbering_system Which means: 1000000 units in pt-BR are formatted 1.000.000
1000000 units in en-US are formatted 1,000,000
1000000 units in en-IN are formatted 10,00,000 Besides commas and dots and other specific separators, it seems that masking is also a valid concern. Which other caveats should I be aware of while localizing numbers in my front-end application? Especially if I'm showing numbers to non-Latin character sets? | Most programming languages and frameworks already have a sensible, working mechanism that you can use for this. For example, the C# ecosystem has the System.Globalization namespace, which allows you to specify the Culture you want: Console.WriteLine(myMoneyValue.ToString("C", CultureInfo.GetCultureInfo("en-US"))); This is not something that you want to re-invent. Use the internationalization features provided by your favorite language or framework. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/337639",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9506/"
]
} |
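
As a concrete illustration of "use the framework's facilities" outside the C# world, here is a sketch using Python's Babel library; the exact rendered strings depend on the CLDR data bundled with the installed Babel version.

```python
# Locale-aware digit grouping via Babel (pip install Babel).
from babel.numbers import format_decimal

print(format_decimal(1000000, locale="pt_BR"))  # expected: 1.000.000
print(format_decimal(1000000, locale="en_US"))  # expected: 1,000,000
print(format_decimal(1000000, locale="en_IN"))  # expected: 10,00,000 (Indian grouping)
```
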
337,705 | Studying beginners course on hardware/software interface and operating systems, often come up the topic of if it would be better to replace some hardware parts with software and vice-versa. I can't make the connection. | I think the fundamental connection that other answers are missing is this: Given a general-purpose computer (e.g. a CPU), one can program it to perform pretty much any computation that we have defined. However, specialized hardware may perform better, or may not provide any value. (this answer is focused on desktop processing and uses examples from that domain) Replacing software with hardware If you are old enough to remember PC gaming in the mid-to-late 1990s, you probably remember FPS games like Quake . It started out being "software rendered," meaning the CPU performed the calculations necessary to render the graphics. Meanwhile, the CPU also had to perform input processing, audio processing, AI processing, etc. It was very taxing on the CPU resources. In addition, graphics processing is not well-suited to a mainstream CPU (then or now). It tends to be a very highly parallel task, requiring many more cores than even a modern high-end CPU (8). We moved graphics processing from software to hardware: enter the 3dfx Voodoo and Nvidia TNT (now GeForce ). These were specialized graphics cards that offloaded processing from the CPU to the GPU. Not only did this spread the workload, providing more computing resources to do the same amount of work, the graphics cards were specialized hardware that could render 3D graphics much faster and with more features than the CPU could. Fast forward to the modern era, and non-CPU graphics are required on the desktop. Even the operating system cannot function without a GPU. It is so important that CPUs actually integrate GPUs now. 1 Replacing hardware with software Back when DVD was brand-new, you could install a DVD drive in your desktop computer. However, the CPUs of the day were not powerful enough to decode the DVD video and audio streams without stuttering. At first, a specialized PCI board was required to perform the decoding. This was specialized hardware that was build specifically to decode the DVD format and nothing else. Much like with 3D graphics, it not only provided more computing resources but was custom-built for the task, making DVD playback smooth. As CPUs grew much more powerful, it became feasible to decode DVDs "in software," meaning "on a general-purpose computer." Even with a less-efficient processor, it had enough raw speed and pipeline optimizations to make DVD playback work to users' expectations. We now have CPUs hundreds or even thousands of times as powerful 2 as we had when DVDs were introduced. When Blu-ray came along, we never needed specialized hardware, because general-purpose hardware was more than powerful enough to handle the task. Doing both Modern Intel CPUs have specialized instructions for H.264 encoding and decoding. This is part of a trend where general-purpose CPUs are gaining specialized functions, all in the same chip. We do not need a separate PCI Express board to decode H.264 efficiently as with DVDs early on, because CPUs contain similar circuitry. 1 GPU refers to a processor specifically designed to perform graphical computations. Older 2D graphics cards were not GPUs: they were simply framebuffers with DACs to talk to the monitor. 
The difference is GPUs contain specialized processors that excel at certain types of calculations, and as time went on, are now actually programmable themselves (shaders). Graphics hardware has always contained the specialized circuitry necessary to convert the data in a framebuffer into a format that can be output across a cable (VGA, DVI, HDMI, DisplayPort) and understood by a monitor. That is irrelevant to the discussion of offloading the computations to specialized hardware. 2 DVD-Video was released in 1997, at a time when the Pentium 2 was also newly-released. This was a time when CPUs were rapidly increasing in power: one could consider a new P2 computer with a DVD decoder, or installing one in a slightly older P1. Compare that to a modern generation 6 Core i7 using Wikipedia's list of MIPS , and a modern CPU is anywhere between 590 and 1,690 times faster. This is due in part to clock speed, but also the move to multiple cores as being standard as well as modern CPUs doing a lot more work per core per clock tick. Also relevant is that as technology advances, Intel (who dominates the desktop and x86 server market) adds specialized instructions to help speed up operations that desktop users want to do (e.g. video decoding). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/337705",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/256097/"
]
} |
337,749 | The first 90 percent of the code accounts for the first 90 percent of
the development time. The remaining 10 percent of the code accounts
for the other 90 percent of the development time. — Tom Cargill, Bell Labs What does that mean exactly in practice? That programmers do a substantial amount of work and that they are giving 180% of themselves, or? | Imagine it like this: When you start working on software, you can write huge amounts of code in a relatively short time. This new code can add a huge amount of new functionality. The problem is that, often, that functionality is far from "done": there might be bugs, small changes (small in business terms) and so on. So the software might feel like it is almost done (90% done), because it supports the majority of the use cases. But the software still needs work. The point of this rule is that despite the software feeling like it is almost done, the amount of work needed to bring that software into a properly working state is as big as getting to that "almost done" state. That is because bug fixing is often time-consuming but doesn't produce lots of code. The problem is that most developers estimate getting the software into the "almost done" state, because that is relatively simple compared to actually estimating the total effort the software will take. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/337749",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/198652/"
]
} |
337,912 | I sometimes stumble upon code similar to the following example (what this function does exactly is out of the scope of this question): function doSomething(value) {
if (check1(value)) {
return -1;
}
else if (check2(value)) {
return value;
}
else {
return false;
}
} As you can see, if , else if and else statements are used in conjunction with the return statement. This seems fairly intuitive to a casual observer, but I think it would be more elegant (from the perspective of a software developer) to drop the else -s and simplify the code like this: function doSomething(value) {
if (check1(value)) {
return -1;
}
if (check2(value)) {
return value;
}
return false;
} This makes sense, as everything that follows a return statement (in the same scope) will never be executed, making the code above semantically equal to the first example. Which of the above fits the good coding practices more? Are there any drawbacks to either method with regard to code readability? Edit: A duplicate suggestion has been made with this question provided as reference. I believe my question touches on a different topic, as I am not asking about avoiding duplicate statements as presented in the other question. Both questions seek to reduce repetitions, albeit in slightly different ways. | I like the one without else and here's why: function doSomething(value) {
//if (check1(value)) {
// return -1;
//}
if (check2(value)) {
return value;
}
return false;
} Because doing that didn't break anything it wasn't supposed to. Hate interdependence in all its forms (including naming a function check2()). Isolate all that can be isolated. Sometimes you need else, but I don't see that here. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/337912",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/156274/"
]
} |
337,962 | If you imagine a company like Amazon (or any other large e-commerce web application), that is operating an online store at massive scale and only has limited quantity of physical items in its warehouses, how can they optimize this such that there is no single bottleneck? Of course, they must have a number of databases with replication, and many servers that are handling the load independently. However, if multiple users are being served by separate servers and both try to add the same item to their cart, for which there is only one remaining, there must be some "source of truth" for the quantity left for that item. Wouldn't this mean that at the very least, all users accessing product info for a single item must be querying the same database in serial? I would like to understand how you can operate a store that large using distributed computing and not create a huge bottleneck on a single DB containing inventory information. | However, if multiple users are being served by separate servers and both try to add the same item to their cart, for which there is only one remaining, there must be some "source of truth" for the quantity left for that item. Not really. This is not a problem that requires a 100% perfect technical solution, because both error cases have a business solution that is not very expensive: If you incorrectly tell a user an item is sold out, you lose a sale. If you sell millions of items every day and this happens maybe once or twice a day, it gets lost in the noise. If you accept an order and while processing it find that you've run out of the item, you just tell the customer so and give them the choice of waiting until you can restock, or cancelling the order. You have one slightly annoyed customer. Again not a huge problem when 99.99% of orders work fine. In fact, I recently experienced the second case myself, so it's not hypothetical: that is what happens and how Amazon handles it. It's a concept that applies often when you have problem that is theoretically very hard to solve (be it in terms of performance, optimization, or whatever): you can often live with a solution that works really well for most cases and accept that it sometimes fails, as long as you can detect and handle the failures when they occur. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/337962",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/85579/"
]
} |
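
A sketch of the "detect and handle the rare failure" approach described above; SQLite is used only so the snippet is self-contained, whereas a real deployment would sit on a distributed store.

```python
# Accept orders optimistically; an atomic conditional decrement detects the oversell.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stock (item_id INTEGER PRIMARY KEY, quantity INTEGER NOT NULL)")
conn.execute("INSERT INTO stock VALUES (1, 1)")  # one unit left

def try_reserve(item_id: int) -> bool:
    # The WHERE clause makes the decrement conditional: it only succeeds if a unit remains.
    cur = conn.execute(
        "UPDATE stock SET quantity = quantity - 1 WHERE item_id = ? AND quantity > 0",
        (item_id,),
    )
    return cur.rowcount == 1

# Two customers race for the last unit; one order goes through, the other becomes a
# business case (apologize, offer to wait or cancel), not a crash.
print(try_reserve(1))  # True  -- order confirmed
print(try_reserve(1))  # False -- "sorry, this item just sold out"
```
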
337,985 | my question would be can age be considered a composite attribute? Because name is a composite attribute and it can be divided into first name, middle name and last name. And therefore can age be a composite attribute since you can divide it into years, months and then days? | Can age be a composite attribute? No. age is a function of birthdate and now. age = now - birthdate So, what about birthdate? Can it be a composite attribute? Yes, it can, but it only makes sense to store dates as a composite in data warehousing situations. Often, when warehousing data, you would store year, month, and day as separate things to make it easier to write queries such as How many people were born in March? Or Of all the people born in 1982, how many have blue eyes.
How does that compare to April 1992? You'd also likely have a table that maps dates to quarters, so you could ask things like: How do birth rates compare between Q1 and Q2 over the last decade? These are contrived examples, but hopefully illustrate the point. I'd recommend doing some research on "star schema" databases and "slowly changing metrics". | {
"source": [
"https://softwareengineering.stackexchange.com/questions/337985",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/256574/"
]
} |
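
A small sketch of the "age = now - birthdate" point: age is derived on demand from the stored birthdate, never stored itself (the birthday-not-yet-passed adjustment is the usual calendar convention).

```python
# Derive age from birthdate and "now".
from datetime import date
from typing import Optional

def age(birthdate: date, today: Optional[date] = None) -> int:
    today = today or date.today()
    # Subtract one year if this year's birthday has not happened yet.
    before_birthday = (today.month, today.day) < (birthdate.month, birthdate.day)
    return today.year - birthdate.year - before_birthday

print(age(date(1982, 3, 14), today=date(2017, 1, 1)))  # 34
```
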
338,028 | I've been working at my job for about a year. I primarily do work in our GUI interface which uses methods from a C backend, but I generally don't have to deal with them except for return values. Our GUI is structured pretty reasonably, given our limitations. I've been tasked with adding a function to the command line portion of the program. Most of these functions are like 300 lines long and difficult to use. I'm trying to gather pieces of them to get at specific alarm information, and I'm having trouble keeping organized. I know that I'm making my testing more complicated by doing it in a single long function. Should I just keep everything in a huge function as per the style of the existing functions, or should I encapsulate the alarms in their own functions? I'm not sure if its appropriate to go against the current coding conventions or whether I should just bite the bullet and make the code a little more confusing for myself to write. In summary, I'm comparing showAlarms(){
// tons of code
} against showAlarms(){
alarm1();
alarm2();
return;
}
alarm1(){
...
printf(...);
return;
} EDIT: Thanks for the advice everyone, I decided that I'm going to design my code factored, and then ask what they want, and if they want it all in one I can just cut from my factored code and turn it back into 1 big function. This should allow me to write it and test it more easily even if they want it all the code in a single definition. UPDATE: They ended up being happy with the factored code and more than one person has thanked me for setting this precedent. | This is really between you and your team mates. Nobody else can tell you the right answer. However, if I may dare read between the lines, the fact that you call this style "bad" gives some information that suggests it's better to take it slow. Very few coding styles are actually "bad." There are ones I would not use, myself, but they always have a rhyme or reason to them. This suggests, to me, that there's more to the story than you have seen so far. Asking around would be a very wise call. Someone may know something you don't. I ran into this, personally, on my first foray into real-time mission-critical coding. I saw code like this: lockMutex(&mutex);
int rval;
if (...)
{
...
rval = foo();
}
else
{
...
rval = bar();
}
unlockMutex(&mutex);
return rval; Being the bright and shiny OO C++ developer I was, I immediately called them out on the bug risks they had by manually locking and unlocking mutexes, rather than using RAII . I insisted that this was better: MutexLocker lock(mutex);
if (...)
{
...
return foo();
}
else
{
...
return bar();
} Much simpler and it's safer, right?! Why require developers to remember to unlock their mutexes on all control flow path when the compiler can do it for you! Well, what I found out later was that there was a procedural reason for this. We had to confirm that, yes indeed, the software worked correctly, and there was a finite list of tools we were permitted to use. My approach may have been better in a different environment, but in the environment I was working in, my approach would easily multiply the amount of work involved in verifying the algorithm ten fold because I just brought a C++ concept of RAII into a section of code that was being held to standards that were really more amenable to C-style thinking. So what looked like bad, downright dangerous, coding style to me was actually well thought out and my "good" solution was actually the dangerous one that was going to cause problems down the road. So ask around. There's surely a senior developer who can work with you to understand why they do it this way. Or, there's a senior developer who can help you understand the costs and benefits of a refactor in this part of the code. Either way, ask around! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/338028",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/206588/"
]
} |
338,143 | I have read for three days about the Model-View-Controller (MVC) and Model-View-Presenter (MVP) patterns. And there is one question that bothers me very much. Why did software designers invent MVP, when there already was an MVC? What problems did they face, that MVC did not solve (or solved badly), but MVP can solve? Which problems is MVP intended to solve? I have read a lot of articles about the history and explanation of MVP, or about differences between MVC and MVP, but none had a clear answer to my questions. In one of the articles that I read, it was said: Now onto Model View Presenter, which was a response to the inadequacies of the MVC pattern when applied to modern component based graphical user interfaces. In modern GUI systems, GUI components themselves handle user input such as mouse movements and clicks, rather than some central controller. So, I can't understand, but can it actually be in another way, such that GUI components do not handle user input by themselves? And what exactly does "handle by themselves" mean? | MVC is conceptually elegant: user input is handled by the controller the controller updates the model the model updates the view/user interface +---+
+----| V |<----+
user | +---+ | updates
input | |
v |
+---+ +---+
| C |--------->| M |
+---+ updates +---+ However: The data- and event-flow in MVC is circular. And the view will often contain significant logic (like event handlers for user actions). Together, these properties makes the system difficult to test and hard to maintain. The MVP architecture replaces the controller with a presenter, which mediates between the view and the model. This linearizes the system: user input updates
+---+ -----------> +---+ --------> +---+
| V | | P | | M |
+---+ <----------- +---+ <-------- +---+
updates updates This has the following advantages: Logic (like event handlers and user interface state) can be moved from the view to the presenter. The user interface can be unit tested in terms of the presenter, since it describes the user interface state. Inside the unit test, we replace the view with a test driver that makes calls to the presenter. Since the user interface is isolated from the application logic, both can be developed independently. But there are also some drawbacks to this approach: It requires more effort. The presenter can easily mutate into an unmaintainable “god class”. The application doesn't have a single MVP axis, but multiple axes: one for each screen/window/panel in the user interface. This may either simplify your architecture or horribly overcomplicate it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/338143",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/256828/"
]
} |
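
A minimal Python sketch of the linear View → Presenter → Model flow described above, including the testability benefit; all names are illustrative.

```python
# The presenter mediates: user input comes in from the view, the presenter updates
# the model, then pushes the new state back to the view.
class CounterModel:
    def __init__(self):
        self.count = 0

class CounterPresenter:
    def __init__(self, model, view):
        self.model = model
        self.view = view

    def on_increment_clicked(self):              # user input, forwarded by the view
        self.model.count += 1                    # presenter updates the model
        self.view.show_count(self.model.count)   # presenter updates the view

# Because the logic lives in the presenter, a unit test only needs a fake view:
class FakeView:
    def __init__(self):
        self.shown = []
    def show_count(self, value):
        self.shown.append(value)

view = FakeView()
presenter = CounterPresenter(CounterModel(), view)
presenter.on_increment_clicked()
presenter.on_increment_clicked()
assert view.shown == [1, 2]   # user-interface state tested without any GUI framework
```
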
338,195 | This article claims that a data class is a "code smell". The reason: It's a normal thing when a newly created class contains only a few
public fields (and maybe even a handful of getters/setters). But the
true power of objects is that they can contain behavior types or
operations on their data. Why is it wrong for an object to contain only data? If the core responsibility of the class is to represent data, wouldn't adding methods that operate on the data break the Single Responsibility Principle? | There is absolutely nothing wrong with having pure data objects. The author of the piece quite frankly doesn't know what he's talking about. Such thinking stems from an old, failed idea that "true OO" is the best way to program and that "true OO" is all about "rich data models" where one mixes data and functionality. Reality has shown us that actually the opposite is true, especially in this world of multi-threaded solutions. Pure functions, combined with immutable data objects, are a demonstrably better way to code. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/338195",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/200887/"
]
} |
338,219 | The code is difficult to follow but it appears to be (mostly) working well, at least with superficial testing. There might be small bugs here and there but it's very hard to tell by reading the code if they are symptomatic of deeper issues or simple fixes. Verifying overall correctness manually via code review however is very difficult and time-consuming, if it is even possible at all. What is the best course of action in this situation? Insist on a do-over? Partial do-over? Re-factoring first? Fix the bugs only and accept the technical debt ? Do a risk assessment on those options and then decide? Something else? | If it cannot be reviewed, it cannot pass review. You have to understand that code review isn't for finding bugs. That's what QA is for. Code review is to ensure that future maintenance of the code is possible. If you can't even follow the code now, how can you in six months when you're assigned to do feature enhancements and/or bug fixes? Finding bugs right now is just a side benefit. If it's too complex, it's violating a ton of SOLID principles . Refactor, refactor, refactor. Break it up into properly named functions which do a lot less, simpler. You can clean it up and your test cases will make sure that it continues to work right. You do have test cases, right? If not, you should start adding them. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/338219",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/191975/"
]
} |
338,337 | I'm building a RESTful API that uses JWT tokens for user authentication (issued by a login endpoint and sent in all headers afterwards), and the tokens need to be refreshed after a fixed amount of time (invoking a renew endpoint, which returns a renewed token). It's possible that an user's API session becomes invalid before the token expires, hence all of my endpoints start by checking that: 1) the token is still valid and 2) the user's session is still valid. There is no way to directly invalidate the token, because the clients store it locally. Therefore all my endpoints have to signal my clients of two possible conditions: 1) that it's time to renew the token or 2) that the session has become invalid, and they are no longer allowed to access the system. I can think of two alternatives for my endpoints to signal their clients when one of the two conditions occurs (assume that the clients can be adapted to either option): Return an http 401 code (unauthorized) if the session has become invalid or return a 412 code (precondition failed) when the token has expired and it's time to call the renew endpoint, which will return a 200 (ok) code. Return 401 for signaling that either the session is invalid or the token has expired. In this case the client will immediately call the renew endpoint, if it returns 200 then the token is refreshed, but if renew also returns 401 then it means that the client is out of the system. Which of the two above alternatives would you recommend? Which one would be more standard, simpler to understand, and/or more RESTful? Or would you recommend a different approach altogether? Do you see any obvious problems or security risks with either option? Extra points if your answer includes external references that support your opinion. UPDATE Guys, please focus on the real question - which of the two http code alternatives for signaling a renewal/session invalidation is the best? Don't mind the fact that my system uses JWT and server-side sessions, that's a peculiarity of my API for very specific business rules, and not the part I'm seeking help for ;) | This sounds like a case of authentication versus authorization . JWTs are cryptographically signed claims about the originator of a request. A JWT might contain claims like "This request is for user X" and "User X has an administrator roles". Obtaining and providing this proof through passwords, signatures, and TLS is the domain of authentication - proving you are who you say you are. What those claims mean to your server - what specific users and roles are allowed to do - is the problem of authorization . The difference between the two can be described with two scenarios. Suppose Bob wants to enter the restricted storage section of his company's warehouse, but first he must deal with a guard named Jim. Scenario A - Authentication Bob: "Hello Jim, I'd like to enter restricted storage." Jim: "Have you got your badge?" Bob: "Nope, forgot it." Jim: "Sorry pal, no entry without a badge." Scenario B - Authorization Bob: "Hello Jim, I'd like to enter restricted storage. Here's my badge." Jim: "Hey Bob, you need level 2 clearance to enter here. Sorry." JWT expiration times are an authentication device used to prevent others from stealing them. If all your JWTs have five minute expiration times, it's not nearly as big a deal if they're stolen because they'll quickly become useless. However, the "session expiration" rule you discuss sounds like an authorization problem. 
Some change in state means that user X is no longer allowed to do something they used to be able to do. For instance, user Bob might have been fired - it doesn't matter that his badge says he's Bob anymore, because simply being Bob no longer gives him any authority with the company. These two cases have distinct HTTP response codes: 401 Unauthorized and 403 Forbidden . The unfortunately named 401 code is for authentication issues such as missing, expired, or revoked credentials. 403 is for authorization, where the server knows exactly who you are but you're just not allowed to do the thing you're attempting to do. In the case of a user's account being deleted, attempting to do something with a JWT at an endpoint would result in a 403 Forbidden response. However, if the JWT is expired, the correct result would be 401 Unauthorized. A common JWT pattern is to have "long lived" and "short lived" tokens. Long lived tokens are stored on the client like short lived tokens, but they're limited in scope and only used with your authorization system to obtain short lived tokens. Long lived tokens, as the name implies, have very long expiration periods - you can use them to request new tokens for days or weeks on end. Short lived tokens are the tokens you're describing, used with very short expiration times to interact with your system. Long lived tokens are useful to implement Remember Me functionality, so you don't need to supply your password every five minutes to get a new short lived token. The "session invalidation" problem you're describing sounds similar to attempting to invalidate a long-lived JWT, as short lived ones are rarely stored server-side while long-lived ones are tracked in case they need to be revoked. In such a system, attempting to acquire credentials with a revoked long-lived token would result in 401 Unauthorized, because the user might technically be able to acquire credentials but the token they're using isn't suitable for the task. Then when the user attempts to acquire a new long lived token using their username and password, the system could respond with 403 Forbidden if they're kicked out of the system. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/338337",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/37922/"
]
} |
338,436 | My office is trying to figure out how we handle branch splits and merges, and we've run into a big problem. Our issue is with long-term sidebranches -- the kind where you've got a few people working a sidebranch that splits from master, we develop for a few months, and when we reach a milestone we sync the two up. Now, IMHO, the natural way to handle this is, squash the sidebranch into a single commit. master keeps progressing forward; as it should - we're not retroactively dumping months of parallel development into master 's history. And if anybody needs better resolution for the sidebranch's history, well, of course it's all still there -- it's just not in master , it's in the sidebranch. Here's the problem: I work exclusively with the command line, but the rest of my team uses GUIS. And I've discovered the GUIS don't have a reasonable option to display history from other branches. So if you reach a squash commit, saying "this development squashed from branch XYZ ", it's a huge pain to go see what's in XYZ . On SourceTree, as far as I'm able to find, it's a huge headache: If you're on master , and you want to see the history from master+devFeature , you either need to check master+devFeature out (touching every single file that's different), or else scroll through a log displaying ALL your repository's branches in parallel until you find the right place. And good luck figuring out where you are there. My teammates, quite rightly, do not want to have development history so inaccessible. So they want these big, long development-sidebranches merged in, always with a merge commit. They don't want any history that isn't immediately accessible from the master branch. I hate that idea; it means an endless, unnavigable tangle of parallel development history. But I'm not seeing what alternative we have. And I'm pretty baffled; this seems to block off most everything I know about good branch management, and it's going to be a constant frustration to me if I can't find a solution. Do we have any option here besides constantly merging sidebranches into master with merge-commits? Or, is there a reason that constantly using merge-commits is not as bad as I fear? | Even though I use Git on the command line – I have to agree with your colleagues. It is not sensible to squash large changes into a single commit. You are losing history that way, not just making it less visible. The point of source control is to track the history of all changes. When did what change why? To that end, every commit contains pointers to parent commits, a diff, and metadata like a commit message. Each commit describes the state of the source code and the complete history of all changes that led up to that state. The garbage collector may delete commits that are not reachable. Actions like rebasing, cherry-picking, or squashing delete or rewrite history. In particular, the resulting commits no longer reference the original commits. Consider this: You squash some commits and note in the commit message that the squashed history is available in original commit abcd123. You delete [1] all branches or tags that include abcd123 since they are merged. You let the garbage collector run. [1]: Some Git servers allow branches to be protected against accidental deletion, but I doubt you want to keep all your feature branches for eternity. Now you can no longer look up that commit – it just doesn't exist. Referencing a branch name in a commit message is even worse, since branch names are local to a repo. 
What is master+devFeature in your local checkout might be doodlediduh in mine. Branches are just moving labels that point to some commit object. Of all history rewriting techniques, rebasing is the most benign because it duplicates the complete commits with all their history, and just replaces a parent commit. That the master history includes the complete history of all branches that were merged into it is a good thing, because that represents reality. [2] If there was parallel development, that should be visible in the log. [2]: For this reason, I also prefer explicit merge commits over the linearized but ultimately fake history resulting from rebasing. On the command line, git log tries hard to simplify the displayed history and keep all displayed commits relevant. You can tweak history simplification to suit your needs. You might be tempted to write your own git log tool that walks the commit graph, but it is generally impossible to answer “was this commit originally committed on this or that branch?”. The first parent of a merge commit is the previous HEAD , i.e. the commit in the branch that you are merging into. But that assumes that you didn't do a reverse merge from master into the feature branch, then fast-forwarded master to the merge. The best solution to long-term branches I've encountered is to prevent branches that are only merged after a couple of months. Merging is easiest when the changes are recent and small. Ideally, you'll merge at least once per week. Continuous integration (as in Extreme Programming, not as in “let's set up a Jenkins server”), even suggest multiple merges per day, i.e. not to maintain separate feature branches but share a development branch as a team. Merging before a feature is QA'd requires that the feature is hidden behind a feature flag. In return, frequent integration makes it possible to spot potential problems much earlier, and helps to keep a consistent architecture: far reaching changes are possible because these changes are quickly included in all branches. If a change breaks some code, it will only break a couple of days work, not a couple of months. History rewriting can make sense for truly huge projects when there are multiple millions lines of code and hundreds or thousands of active developers. It is questionable why such a large project would have to be a single git repo instead of being divided into separate libraries, but at that scale it is more convenient if the central repo only contains “releases“ of the individual components. E.g. the Linux kernel employs squashing to keep the main history manageable. Some open source projects require patches to be sent via email, instead of a git-level merge. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/338436",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/30198/"
]
} |
338,597 | I make use of an AngularJS style guide. Within this guide there is a style called folder-by-feature , instead of folder-by-type , and I'm actually curious what's the best approach (in this example for Java) Let's say I have an application where I can retrieve Users & Pets, using services, controllers, repositories and ofcourse domain objects. Taking the folder-by-..... styles, we have two options for our packaging structure: 1. Folder-by-type com.example
├── domain
│ ├── User.java
│ └── Pet.java
├── controllers
│ ├── UserController.java
│ └── PetController.java
├── repositories
│ ├── UserRepository.java
│ └── PetRepository.java
├── services
│ ├── UserService.java
│ └── PetService.java
│ // and everything else in the project
└── MyApplication.java 2. Folder-by-feature com.example
├── pet
│ ├── Pet.java
│ ├── PetController.java
│ ├── PetRepository.java
│ └── PetService.java
├── user
│ ├── User.java
│ ├── UserController.java
│ ├── UserRepository.java
│ └── UserService.java
│ // and everything else in the project
└── MyApplication.java What would be a good approach, and what are the arguments to do so? | Folder-by-type only works on small-scale projects. Folder-by-feature is superior in the majority of cases. Folder-by-type is ok when you only have a small number of files (under 10 per type, let's say). As soon as you get multiple components in your project, all with multiple files of the same type, it gets very hard to find the actual file you are looking for. Therefore, folder-by-feature is better due to its scalability. However, if you use folder-by-feature, you end up losing information about the type of component a file represents (because it's no longer in a controller folder, let's say), so this too becomes confusing. There are 2 simple solutions for this. First, you can abide by common naming conventions that imply its type in the file name. For example, John Papa's popular AngularJS style guide has the following: Naming Guidelines Use consistent names for all components following a pattern that describes the component's feature then (optionally) its type. My recommended pattern is feature.type.js . There are 2 names for most assets: the file name ( avengers.controller.js ) the registered component name with Angular ( AvengersController ) Second, you can combine folder-by-type and folder-by-feature styles into folder-by-feature-by-type: com.example
├── pet
| ├── Controllers
│ | ├── PetController1.java
| | └── PetController2.java
| └── Services
│ ├── PetService1.java
│ └── PetService2.java
├── user
| ├── Controllers
│ | ├── UserController1.java
│ | └── UserController2.java
| └── Services
│ ├── UserService1.java
│ └── UserService2.java | {
"source": [
"https://softwareengineering.stackexchange.com/questions/338597",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/255074/"
]
} |
338,665 | I've taken a deep dive into the world of parsers recently, wanting to create my own programming language. However, I found out that there exist two somewhat different approaches of writing parsers: Parser Generators and Parser Combinators. Interestingly, I have been unable to find any resource that explained in what cases which approach is better; Rather, many resources (and persons) I queried about the subject did not know of the other approach, only explaining their approach as the approach and not mentioning the other at all: The famous Dragon book goes into lexing/scanning and mentions (f)lex, but does not mention Parser Combinators at all. Language Implementation Patterns heavily relies on the ANTLR Parser Generator built in Java, and does not mention Parser Combinators at all. The Introduction to Parsec tutorial on Parsec, which is a Parser Combinator in Haskell, does not mention Parser Generators at all. Boost::spirit , the best-known C++ Parser Combinator, does not mention Parser Generators at all. The great explanatory blog post You Could Have Invented Parser Combinators does not mention Parser Generators at all. Simple Overview: Parser Generator A Parser Generator takes a file written in a DSL that is some dialect of Extended Backus-Naur form , and turns it into source code that can then (when compiled) become a parser for the input language that was described in this DSL. This means that the compilation process is done in two separate steps.
Interestingly, Parser Generators themselves are also compilers (and many of them are indeed self-hosting ). Parser Combinator A Parser Combinator describes simple functions called parsers that all take an input as parameter, and try to pluck off the first character(s) of this input if they match. They return a tuple (result, rest_of_input) , where result might be empty (e.g. nil or Nothing ) if the parser was unable to parse anything from this input. An example would be a digit parser.
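For illustration, a minimal sketch of such a digit parser in Python (purely illustrative; not tied to any particular combinator library, using the tuple convention just described):
def digit(inp):
    # Try to pluck a single digit off the front of inp.
    # Returns (result, rest_of_input); result is None when nothing matched.
    if inp and inp[0].isdigit():
        return inp[0], inp[1:]
    return None, inp

digit("42abc")   # ('4', '2abc')
digit("abc")     # (None, 'abc')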
Other parsers can of course take parsers as first arguments (the final argument still remaining the input string) to combine them: e.g. many1 attempts to match another parser as many times as possible (but at least once, or it itself fails). You can now of course combine (compose) digit and many1 , to create a new parser, say integer . Also, a higher-level choice parser can be written that takes a list of parsers, trying each of them in turn. In this way, very complex lexers/parsers can be built. In languages supporting operator overloading, this also looks very much like EBNF, even though it is still written directly in the target language (and you can use all features of the target language you desire). Simple Differences Language: Parser Generators are written in a combination of the EBNF-ish DSL and the code that these statements should generate to when they match. Parser Combinators are written in the target language directly. Lexing/Parsing: Parser Generators have a very distinct difference between the 'lexer' (which splits a string into tokens that might be tagged to show what kind of value we are dealing with) and the 'parser' (which takes the output list of tokens from the lexer and attempts to combine them, forming an Abstract Syntax Tree). Parser Combinators do not have/need this distinction; usually, simple parsers perform the work of the 'lexer' and the more high-level parsers call these simpler ones to decide which kind of AST-node to create. Question However, even given these differences (and this list of differences is probably far from complete!), I cannot make an educated choice on when to use which one. I fail to see what the implications/consequences are of these differences. What problem properties would indicate that a problem would better be solved using a Parser Generator?
What problem properties would indicate that a problem would better be solved using a Parser Combinator? | I have done a lot of research these past few days, to understand better why these
separate technologies exist, and what their strengths and weaknesses are. Some of the already-existing answers hinted at some of their differences,
but they did not give the complete picture, and seemed to be somewhat opinionated, which is why this answer was written. This exposition is long, but important. bear with me (Or if you're impatient, scroll to the end to see a flowchart). To understand the differences between Parser Combinators and Parser Generators,
one first needs to understand the difference between the various kinds of parsing
that exist. Parsing Parsing is the process of analyzing of a string of symbols according to a formal grammar.
(In Computing Science,) parsing is used to be able to let a computer understand text written in a language,
usually creating a parse tree that represents the written text, storing the meaning of the different written parts
in each node of the tree. This parse tree can then be used for a variety of different purposes, such as
translating it to another language (used in many compilers),
interpreting the written instructions directly in some way (SQL, HTML), allowing tools like Linters to do their work, etc. Sometimes, a parse tree is not explicitly generated, but rather the action
that should be performed at each type of node in the tree is executed directly. This increases efficiency,
but underwater still an implicit parse tree exists. Parsing is a problem that is computationally difficult. There has been over fifty years of research on this subject,
but there is still much to learn. Roughly speaking, there are four general algorithms to let a computer parse input: LL parsing. (Context-free, top-down parsing.) LR parsing. (Context-free, bottom-up parsing.) PEG + Packrat parsing. Earley Parsing. Note that these types of parsing are very general, theoretical descriptions. There are multiple ways
to implement each of these algorithms on physical machines, with different tradeoffs. LL and LR can only look at Context-Free grammars (that is; the context around the tokens that are written is not
important to understand how they are used). PEG/Packrat parsing and Earley parsing are used a lot less: Earley-parsing is nice in that it can handle a whole lot
more grammars (including those that are not necessarily Context-Free) but it is less efficient (as claimed by the dragon book (section 4.1.1); I am not sure if these claims are still accurate). Parsing Expression Grammar + Packrat-parsing is a method that is relatively efficient and can also handle more grammars than both LL and LR, but hides ambiguities, as will quickly be touched on below. LL (Left-to-right, Leftmost derivation) This is possibly the most natural way to think about parsing.
The idea is to look at the next token in the input string and then decide which one of maybe multiple possible recursive calls should be taken
to generate a tree structure. This tree is built 'top-down', meaning that we start at the root of the tree, and travel the grammar rules in the same way as
we travel through the input string. It can also be seen as constructing a 'postfix' equivalent for the 'infix' token stream that is being read. Parsers performing LL-style parsing can be written to look very much like the original grammar that was specified.
This makes it relatively easy to understand, debug and enhance them. Classical Parser Combinators are nothing more
than 'lego pieces' that can be put together to build an LL-style parser. LR (Left-to-right, Rightmost derivation) LR parsing travels the other way, bottom-up:
At each step, the top element(s) on the stack are compared to the list of grammar, to see if they could be reduced to a higher-level rule in the grammar. If not, the next token from the input stream is shift ed and placed on top
of the stack. A program is correct if at the end we end up with a single node on the stack which represents the starting rule from
our grammar. Lookahead In either of these two systems, it sometimes is necessary to peek at more tokens from the input
before being able to decide which choice to make. This is the (0) , (1) , (k) or (*) -syntax you see after
the names of these two general algorithms, such as LR(1) or LL(k) . k usually stands for 'as much as your grammar needs',
while * usually stands for 'this parser performs backtracking', which is more powerful/easy to implement, but has
a much higher memory and time usage than a parser that can just keep on parsing linearly. Note that LR-style parsers already have many tokens on the stack when they might decide to 'look ahead', so they already have more information
to dispatch on. This means that they often need less 'lookahead' than an LL-style parser for the same grammar. LL vs. LR: Ambiguity When reading the two descriptions above, one might wonder why LR-style parsing exists,
as LL-style parsing seems a lot more natural. However, LL-style parsing has a problem: Left Recursion . It is very natural to write a grammar like: expr ::= expr '+' expr | term
term ::= integer | float But, a LL-style parser will get stuck in an infinite recursive loop
when parsing this grammar: When trying out the left-most possibility of the expr rule, it
recurses to this rule again without consuming any input. There are ways to resolve this problem. The simplest is to rewrite your grammar so that this
kind of recursion does not happen any more: expr ::= term expr_rest
expr_rest ::= '+' expr | ϵ
term ::= integer | float (Here, ϵ stands for the 'empty string') This grammar now is right recursive. Note that it immediately is a lot more difficult to read. In practice, left-recursion might happen indirectly with many other steps in-between.
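To see concretely why this matters for an LL-style (recursive-descent) parser, here is an illustrative Python sketch; the names and details are invented for the example:
def parse_term(inp):
    # Parse a single integer literal from the front of inp.
    i = 0
    while i < len(inp) and inp[i].isdigit():
        i += 1
    if i == 0:
        raise SyntaxError("expected a number at: " + inp)
    return int(inp[:i]), inp[i:]

# Naive transcription of the left-recursive rule  expr ::= expr '+' expr | term :
def parse_expr_left_recursive(inp):
    left, rest = parse_expr_left_recursive(inp)  # recurses before consuming any input,
    ...                                          # so it never returns (stack overflow)

# The right-recursive rewrite consumes a term before recursing, so it terminates:
def parse_expr(inp):
    left, rest = parse_term(inp)
    if rest[:1] == "+":
        right, rest = parse_expr(rest[1:])
        return ("+", left, right), rest
    return left, rest

parse_expr("1+2+3")   # (('+', 1, ('+', 2, 3)), '')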
This makes it a hard problem to look out for.
But trying to solve it makes your grammar harder to read. As Section 2.5 of the Dragon Book states: We appear to have a conflict: on the one hand we need a grammar that
facilitates translation, on the other hand we need a significantly different grammar that facilitates parsing.
The solution is to begin with the grammar for easy translation and carefully transform it to facilitate parsing. By eliminating the left recursion
we can obtain a grammar suitable for use in a predictive recursive-descent translator. LR-style parsers do not have the problem of this left-recursion, as they build the tree from the bottom-up. However , the mental translation of a grammar like above to an LR-style parser (which is often implemented as a Finite-State Automaton ) is very hard (and error-prone) to do, as often there are hundreds or thousands of states + state transitions to consider.
This is why LR-style parsers are usually generated by a Parser Generator, which is also known as a 'compiler compiler'. How to resolve Ambiguities We saw two methods to resolve Left-recursion ambiguities above:
1) rewrite the syntax
2) use an LR-parser. But there are other kinds of ambiguities which are harder to solve:
What if two different rules are equally applicable at the same time? Some common examples are: arithmetic expressions. the Dangling Else Both LL-style and LR-style parsers have problems with these. Problems with parsing
arithmetic expressions can be solved by introducing operator precedence.
In a similar way, other problems like the Dangling Else can be solved, by picking one precedence behaviour
and sticking with it. (In C/C++, for instance, the dangling else always belongs to the closest 'if'). Another 'solution' to this is to use Parser Expression Grammar (PEG): This is similar to the
BNF-grammar used above, but in the case of an ambiguity,
always 'pick the first'. Of course, this does not really 'solve' the problem,
but rather hide that an ambiguity actually exists: The end users might not know
which choice the parser makes, and this might lead to unexpected results. More information that is a whole lot more in-depth than this post, including why it is impossible
in general to know if your grammar does not have any ambiguities and the implications of this is
the wonderful blog article LL and LR in context: Why parsing tools are hard .
I can highly recommend it; it helped me a lot to understand all the things I am talking about right now. 50 years of research But life goes on. It turned out that 'normal' LR-style parsers implemented as finite state automatons often needed thousands of
states + transitions, which was a problem in program size. So, variants such as Simple LR (SLR) and LALR (Look-ahead LR) were written
that combine other techniques to make the automaton smaller, reducing the disk and memory footprint of the parser programs. Also, another way to resolve the ambiguities listed above is to use generalized techniques in which, in the case of an ambiguity, both
possibilities are kept and parsed: Either one might fail to parse down the line (in which case the other possibility is the 'correct' one),
as well as returning both (and in this way showing that an ambiguity exists) in the case they both are correct. Interestingly, after the Generalized LR algorithm was described,
it turned out that a similar approach could be used to implement Generalized LL parsers ,
which is similarly fast ( $O(n^3)$ time complexity for ambiguous grammars, $ O(n) $ for completely unambiguous grammars, albeit with more
bookkeeping than a simple (LA)LR parser, which means a higher constant-factor)
but again allow a parser to be written in recursive descent (top-down) style that is a lot more natural to write and debug. Parser Combinators, Parser Generators So, with this long exposition, we are now arriving at the core of the question: What is the difference of Parser Combinators and Parser Generators, and when should one be used over the other? They are really different kinds of beasts: Parser Combinators were created because people were writing top-down parsers and realized
that many of these had a lot in common . Parser Generators were created because people were looking to build parsers that did not have the problems that
LL-style parsers had (i.e. LR-style parsers), which proved very hard to do by hand. Common ones include Yacc/Bison, that implement (LA)LR). Interestingly, nowadays the landscape is muddled somewhat: It is possible to write Parser Combinators that work with the GLL algorithm , resolving the ambiguity-issues that classical LL-style parsers had, while being just as readable/understandable as all kinds of top-down parsing. Parser Generators can also be written for LL-style parsers. ANTLR does exactly that, and uses other heuristics (Adaptive LL(*)) to resolve the ambiguities
that classical LL-style parsers had. In general, creating an LR parser generator and debugging the output of an (LA)LR-style parser generator running on your grammar
is difficult, because of the translation of your original grammar to the 'inside-out' LR form.
On the other hand, tools like Yacc/Bison have had many years of optimisations, and seen a lot of use in the wild, which means
that many people now consider it as the way to do parsing and are sceptical towards new approaches. Which one you should use, depends on how hard your grammar is, and how fast the parser needs to be.
Depending on the grammar, one of these techniques (/implementations of the different techniques) might be faster, have a smaller memory footprint, have a smaller disk footprint,
or be more extensible or easier to debug than the others. Your Mileage May Vary . Side note: On the subject of Lexical Analysis. Lexical Analysis can be used both for Parser Combinators and Parser Generators.
The idea is to have a 'dumb' parser that is very easy to implement (and therefore fast) that performs a first pass over your source code,
removing for instance repeating white spaces, comments, etc, and possibly 'tokenizing' in a very coarse way the different elements that make
up your language. The main advantage is that this first step makes the real parser a lot simpler (and because of that possibly faster).
The main disadvantage is that you have a separate translation step, and e.g. error reporting with line- and column numbers becomes harder because of
the removal of white-space. A lexer in the end is 'just' another parser and can be implemented using any of the techniques above. Because of its simplicity, often other techniques are used
than for the main parser, and for instance extra 'lexer generators' exist. Tl;Dr: Here is a flowchart that is applicable to most cases: | {
"source": [
"https://softwareengineering.stackexchange.com/questions/338665",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/41643/"
]
} |
338,666 | In this article the author claims that Sometimes, it is required to expose an operation in the API that inherently is non RESTful. and that If an API has too many actions, then that’s an indication that either it was designed with an RPC viewpoint rather than using RESTful principles, or that the API in question is naturally a better fit for an RPC type model. This reflects what I have read and heard elsewhere as well. However I find this quite confusing and I would like to get a better understanding of the matter. Example I: Shutting down a VM through a REST interface There are, I think, two fundamentally different ways to model a shutdown of a VM. Each way might have a few variations, but let's concentrate on the most fundamental differences for now. 1. Patch the resource's state property PATCH /api/virtualmachines/42
Content-Type:application/json
{ "state": "shutting down" } (Alternatively, PUT on the sub-resource /api/virtualmachines/42/state .) The VM will be shutting down in the background and at some later point in time depending of wether shutting down will succeed or not the state might be internally updated with "power off". 2. PUT or POST on the resource's actions property PUT /api/virtualmachines/42/actions
Content-Type:application/json
{ "type": "shutdown" } The result is exactly the same as in the first example. The state will be updated to "shutting down" immediately and maybe eventually to "power off". Are both designs RESTful? Which design is better? Example II: CQRS What if we have a CQRS domain with many such "actions" (aka commands) that might potentially lead to updates of multiple aggregates or cannot be mapped to CRUD operations on concrete resources and sub-resources? Should we try to model as many commands as concrete creates or updates on concrete resources, where ever possible (following the first approach from example I) and use "action endpoints" for the rest? Or should we map all the commands to action endpoints (as in the second approach of example I)? Where should we draw the line? When does the design become less RESTful? Is a CQRS model a better fit for an RPC like API? According to the quoted text above it is, as I understand it. As you can see from my many questions, I am a little bit confused about this topic. Can you help me to get a better understanding of it? | In the first case (shut-down of VMs), I'd consider none of the OP alternatives RESTful. Granted, if you use the Richardson maturity model as a yardstick, they are both leve 2 APIs because they use resources and verbs. Neither of them, though, use hypermedia controls, and in my opinion, that's the only type of REST that differentiates RESTful API design from RPC. In other words, stick with level 1 and 2, and you're going to have an RPC-style API in most cases. In order to model two different ways of shutting down a VM, I'd expose the VM itself as a resource that (among other things) contains links: {
"links": [{
"rel": "shut-down",
"href": "/vms/1234/fdaIX"
}, {
"rel": "power-off",
"href": "/vms/1234/CHTY91"
}],
"name": "Ploeh",
"started": "2016-08-21T12:34:23Z"
} If a client wishes to shut down the Ploeh VM, it ought to follow the link with the shut-down relationship type. (Normally, as outlined in the RESTful Web Services Cookbook , you'd use an IRI or more elaborate identification scheme for relationship types, but I chose to keep the example as simple as possible.) In this case, there's little other information to provide with the action, so the client should simple make an empty POST against the URL in the href : POST /vms/1234/fdaIX HTTP/1.1 (Since this request has no body, it'd be tempting to model this as a GET request, but GET requests should have no observable side-effects, so POST is more correct.) Likewise, if a client wants to power off the VM, it'll follow the power-off link instead. In other words, the relationship types of the links provide affordances that indicate intent. Each relationship type has a specific semantic significance. This is the reason we sometimes talk about the semantic web . In order to keep the example as clear as possible, I intentionally obscured the URLs in each link. When the hosting server receives the incoming request, it'd know that fdaIX means shut down , and CHTY91 means power off . Normally, I'd just encode the action in the URL itself, so that the URLs would be /vms/1234/shut-down and /vms/1234/power-off , but when teaching, that blurs the distinction between relationship types (semantics) and URLs (implementation details). Depending on which clients you have, you may consider making RESTful URLs non-hackable . CQRS When it comes to CQRS, one of the few things that Greg Young and Udi Dahan agrees about is that CQRS isn't a top-level architecture . Thus, I'd be cautious about making a RESTful API too CQRS-like, because that'd mean that clients become part of your architecture. Often, the driving force behind a real (level 3) RESTful API is that you want to be able to evolve your API without breaking clients, and without having control of clients. If that's your motivation, then CQRS wouldn't be my first choice. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/338666",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/120304/"
]
} |
339,104 | I got this idea from this question on stackoverflow.com The following pattern is common: final x = 10;//whatever constant value
for(int i = 0; i < Math.floor(Math.sqrt(x)) + 1; i++) {
//...do something
} The point I'm trying to make is the conditional statement is something complicated and doesn't change. Is it better to declare it in the initialization section of the loop, as such? final x = 10;//whatever constant value
for(int i = 0, j = Math.floor(Math.sqrt(x)) + 1; i < j; i++) {
//...do something
} Is this more clear? What if the conditional expression is simple such as final x = 10;//whatever constant value
for(int i = 0, j = n*n; i < j; i++) {
//...do something
} | What I'd do is something like this: void doSomeThings() {
final int x = 10; //whatever constant value
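// compute the (unchanging) bound once, before the loop, and give it a descriptive name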
final double limit = Math.floor(Math.sqrt(x)) + 1;
for(int i = 0; i < limit; i++) {
//...do something
}
} Honestly the only good reason to cram initializing j (now limit ) into the loop header is to keep it correctly scoped. All it takes to make that a non issue is a nice tight enclosing scope. I can appreciate the desire to be fast but don't sacrifice readability without a real good reason. Sure, the compiler may optimize, initializing multiple vars may be legal, but loops are hard enough to debug as it is. Please be kind to the humans. If this really does turn out to be slowing us down it's nice to understand it enough to fix it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/339104",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/221185/"
]
} |
339,125 | I've been implementing a network protocol, and I require packets to have unique identifiers. So far, I've just been generating random 32-bit integers, and assuming that it is astronomically unlikely that there will be a collision during the lifespan of a program/connection. Is this generally considered an acceptable practice in production code, or should one devise a more complex system to prevent collisions? | Beware the birthday paradox . Suppose you are generating a sequence of random values (uniformly, independently) from a set of size N (N = 2^32 in your case). Then, the rule of thumb for the birthday paradox states that once you have generated about sqrt(N) values, there is at least a 50% chance that a collision has occurred, that is, that there are at least two identical values in the generated sequence. For N = 2^32, sqrt(N) = 2^16 = 65536. So after you have generated about 65k identifiers, it is more likely that two of them collide than not! If you generate an identifier per second, this would happen in less than a day; needless to say, many network protocols operate way faster than that. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/339125",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/183293/"
]
} |
339,230 | The following commentator writes : Microservices shift your organizational dysfunction from a compile time problem to a run time problem. This commentator expands on the issue saying: Feature not bug. Run time problem => prod issues => stronger, faster feedback about dysfunction to those responsible Now I get that with microservices you: potentially increase latency of your through-put – which is a production and run-time concern. increase the number of “network interfaces” in your code where there could be potential run-time errors of parsing. can potentially do blue-green deployments. Those could be held-up by interface mismatches (see network interfaces). But if blue-green deployments work then it is more of a run-time concern. My question is: What does it mean that shifting to microservices creates a run-time problem? | I have a problem. Let's use Microservices! Now I have 13 distributed problems. Dividing your system into encapsulated, cohesive, and decoupled components is a good idea. It allows you to tackle different problems separately. But you can do that perfectly well in a monolithic deployment (see Fowler: Microservice Premium ). After all, this is what OOP has been teaching for many decades! If you decide to turn your components into microservices, you do not gain any architectural advantage. You gain some flexibility regarding technology choice and possibly (but not necessarily!) some scalability. But you are guaranteed some headache stemming from (a) the distributed nature of the system, and (b) the communication between components. Choosing microservices means that you have other problems that are so pressing that you are willing to use microservices despite these problems. If you are unable to design a monolith that is cleanly divided into components, you will also be unable to design a microservice system. In a monolithic code base, the pain will be fairly obvious. Ideally, the code will simply not compile if it is horribly broken. But with microservices, each service may be developed separately, possibly even in different languages. Any problems in the interaction of components will not become apparent until you integrate your components, and at that point it's already too late to fix the overall architecture. The No 1 source of bugs is interface mismatch. There may be glaring mistakes like a missing parameter, or more subtle examples like forgetting to check an error code, or forgetting to check a precondition before calling a method. Static typing detects such problems as early as possible: in your IDE and in the compiler, before the code ever runs. Dynamic systems don't have this luxury. It won't blow up until that faulty code is executed. The implications for microservices are terrifying. Microservices are inherently dynamic. Unless you move to a formal service description language, you can't verify any kind of correctness of your interface usage. you have to test, test, test! But tests are expensive and usually not exhaustive, which leaves the possibility that problems might still exist in production. When will that problem become apparent? Only when that faulty path is taken, at run time, in production. The notion that prod issues would lead to faster feedback is hilariously dangerously wrong, unless you are amused by the possibility of data loss. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/339230",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/13382/"
]
} |
339,285 | Occasionally, the most logical name for something (e.g. a variable) is a reserved keyword in the language or environment of choice. When there is no equally appropriate synonym, how does one name it? I imagine there are best practice heuristics for this problem. These could be provided by the creators or governors of programming languages and environments. For example, if python.org (or Guido van Rossum) says how to deal with it in Python, that would be a good guideline in my book. An MSDN link on how to deal with it in C# would be good too. Alternatively, guidelines provided by major influencers in software engineering should also be valuable. Perhaps Google/Alphabet has a nice style guide that teaches us how to deal with it? Here's just an example: in the C# language, "default" is a reserved keyword. When I use an enum, I might like to name the default value "default" (analogous to "switch" statements), but can't. (C# is case-sensitive, and enum constants should be capitalized, so "Default" is the obvious choice here, but let's assume our current style guide dictates all enum constants are to be lower-case.) We could consider the word "defaultus", but this does not adhere to the Principle of Least Astonishment . We should also consider "standard" and "initial", but unfortunately "default" is the word that exactly conveys its purpose in this situation. | For an enum option you should use title case like Default . Since C# is case-sensitive it will not collide with the reserved keyword. See .net Naming Guidelines . Since all public members should be title case in .net, and all reserved names are lower case, you shouldn't really encounter this except with local variables (including parameters). And locals would typically have nouns or phrases as names, so it is pretty rare the most natural name would collide with a keyword. Eg. defaultValue would typically be a more natural name than default . So in practice this is not a big issue. In C# you can use the " @ " prefix to escape reserved keywords so they can be used as identifers (like @default ). But this should only be used if you really have no other option, i.e. if you are interfacing with a third-party library which uses a reserved keywords as an identifier. Of course other languages have different syntax and keywords and therefore different solutions this problem. SQL have quite a lot of keywords, but it is very common to simply escape identifiers, like [Table] . Some even do so for all identifiers, regardless of whether they clash with a keyword or not. (After all, a clashing keyword could be introduced in the future!) Powershell (and a bunch of other scripting languages) prefixes all variables with a sigil like $ which means they will never collide with keywords. Lisp does not have keywords at all, at least not in the conventional sense. Python have an officially recognized convention in PEP-8 : Always use cls for the first argument to class methods. If a function argument's name clashes with a reserved keyword, it is
generally better to append a single trailing underscore rather than
use an abbreviation or spelling corruption. Thus class_ is better than
clss. (Perhaps better is to avoid such clashes by using a synonym.) Some languages like Brainfuck or Whitespace avoid defining words at all, elegantly sidestepping the problem. In short, there is no language-independent answer to your question, since it depends highly on the syntax and conventions of the specific language. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/339285",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/39958/"
]
} |
339,321 | I'm looking at the upcoming Visual Studio 2017 . Under the section titled Boosted Productivity there is an image of Visual Studio being used to replace all occurrences of var with the explicit type. The code apparently has several problems that Visual Studio has identified as 'needs fixing'. I wanted to double-check my understanding of the use of var in C# so I read an article from 2011 by Eric Lippert called Uses and misuses of implicit typing . Eric says: Use var when you have to; when you are using anonymous types. Use var when the type of the declaration is obvious from the initializer, especially if it is an object creation. This eliminates redundancy. Consider using var if the code emphasizes the semantic “business purpose” of the variable and downplays the “mechanical” details of its storage. Use explicit types if doing so is necessary for the code to be correctly understood and maintained. Use descriptive variable names regardless of whether you use “var”. Variable names should represent the semantics of the variable, not details of its storage; “decimalRate” is bad; “interestRate” is good. I think most of the var usage in the code is probably ok. I think it would be ok to not use var for the bit that reads ... var tweetReady = workouts [ ... ] ... because maybe it's not 100% immediate what type it is but even then I know pretty quickly that it's a boolean . The var usage for this part ... var listOfTweets = new List<string>(); ... looks to me exactly like good usage of var because I think it's redundant to do the following: List<string> listOfTweets = new List<string>(); Although based on what Eric says the variable should probably be tweets rather than listOfTweets . What would be the reason for changing the all of the var use here? Is there something wrong with this code that I'm missing? | TL;DR: no, Microsoft are not discouraging the use of 'var' in C#. The image is simply lacking context to explain why it's complaining. If you install VS2017 RC and open up the Options panel and go to Text Editor -> C# , you'll see a new section: Code Style . This is similar to what ReSharper has offered for a while: a set of configurable rules for coding styles. It includes three options around the use of var : for build-in types, when the variable type is apparent and "Elsewhere". In each case, you can specify "prefer explicit type" or "prefer var" and set the notification level to "none", "suggestion", "warning" or "error": | {
"source": [
"https://softwareengineering.stackexchange.com/questions/339321",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/81480/"
]
} |
339,358 | I have a client who insisted that we keep our new development separate from the main branches for the entirety of 2016. They had 3-4 other teams working on the application in various capacities. Numerous large changes have been made (switching how dependency injection is done, cleaning up code with ReSharper, etc). It has now fallen on me to merge main into our new dev branch to prepare to push our changes up the chain. On my initial merge pull, TFS reported ~6500 files with conflict resolution. Some of these will be easy, but some of them will be much more difficult (specifically some of the javascript, api controllers, and services supporting these controllers). Is there an approach I can take that will make this easier for me? To clarify, I expressed much concern with this approach multiple times along the way. The client was and is aware of the difficulties with this. Because they chose to short on QA staff (1 tester for 4 devs, no automated testing, little regression testing), they insisted that we keep our branch isolated from the changes in the main branch under the pretense that this would reduce the need for our tester to know about changes being made elsewhere. One of the bigger issues here is an upgrade to the angular version and some of the other third party softwares --unfortunately we have no come up with a good way to build this solution until all the pieces are put back into place. | There would have been a simple way which had kept your new development separate from the main branch without bringing you into this unfortunate situation: any change from the trunk should have been merged into your dev branch on a daily basis . (Was your client really so shortsighted that he could not anticipate that your branch needs to be remerged back into the main line some day?) Anyway, best approach is IMHO trying to redo what should have happened on first hand: identify the semantics of the changes on the main line for day 1 after the branch was created. Apply them to your current code base as well as you can. If it was a "local change", it should be simple, if it was a "cross cutting refactoring" like renaming a widely used class, apply it in a semantically equivalent manner to your current code base. Hopefully during that year no contradictory cross-cutting changes in the code base were made on "your" branch, otherwise this can become a real brain-teaser test the result (did I mention you need a good test suite for this task)? Fix all bugs revealed by the test now repeat this process for the changes on the main line for day 2, then day 3, and so on. This might work when the teams strictly obeyed to the classic rules of version control ("only commit compilable, tested states" and "check in early and often"). After 365 repetitions (or 250, if you are lucky and you can bundle the work for weekend changes), you will be almost finished (almost, because you need to add the number of changes which will happen to the main line during the integration period). The final step will be to merge the updated dev branch into the trunk again (so you don't loose the trunk's history). This should be easy, because technically it should be only a replacement of the affected files. And yes, I am serious, there is probably no shortcut to this. It might turn out that "daily portions" might be sometimes too small, but I would not expect this, I guess it is more likely daily portions can turn out beeing too big. 
I hope your client pays you really well for this, and that this is so expensive for him that he will learn from his failure. I should add that you can try this also with switched sides - reintegrating the changes from your branch in small portions to the main line. This might be simpler when on your dev branch there were much fewer changes than on the trunk, or most of the changes happened in new source files which are currently not part of the trunk. One can see this as "porting" a feature from a product A (the dev branch) to a somewhat different product B (current state of the trunk). But if the majority of cross-cutting refactorings were done on the main line, and they affect your new code (the 6500 merge collisions seem to be some evidence for this), it might be easier the way I described it first. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/339358",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/258451/"
]
} |
339,359 | I still remember good old days of repositories. But repositories used to grow ugly with time. Then CQRS got mainstream. They were nice, they were a breath of fresh air. But recently I've been asking myself again and again why don't I keep the logic right in a Controller's Action method (especially in Web Api where action is some kind of command/query handler in itself). Previously I had a clear answer for that: I do it for testing as it's hard to test Controller with all those unmockable singletons and overall ugly ASP.NET infrastructure. But times have changed and ASP.NET infrastructure classes are much more unit tests friendly nowadays (especially in ASP.NET Core). Here's a typical WebApi call: command is added and SignalR clients are notified about it: public void AddClient(string clientName)
{
using (var dataContext = new DataContext())
{
var client = new Client() { Name = clientName };
dataContext.Clients.Add(client);
dataContext.SaveChanges();
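// persist the new client first, then broadcast it to the connected SignalR clients via the hub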
GlobalHost.ConnectionManager.GetHubContext<ClientsHub>().ClientWasAdded(client);
}
} I can easily unit test/mock it. More over, thanks to OWIN I can setup local WebApi and SignalR servers and make an integration test (and pretty fast by the way). Recently I felt less and less motivation to create cumbersome Commands/Queries handlers and I tend to keep code in Web Api actions. I make an exception only if logic is repeated or it's really complicated and I want to isolate it. But I'm not sure if I'm doing the right thing here. What is the most reasonable approach for managing logic in a typical modern ASP.NET application? When is it reasonable to move your code to Commands and Queries handlers? Are there any better patterns? Update. I found this article about DDD-lite approach. So it seems like my approach of moving complicated parts of code to commands/queries handlers could be called CQRS-lite. | Is CQRS a relatively complicated and costly pattern ? Yes. Is it over-engineering ? Absolutely not. In the original article where Martin Fowler talks about CQRS you can see a lot of warnings about not using CQRS where it's not applicable: Like any pattern, CQRS is useful in some places, but not in others. CQRS is a significant mental leap for all concerned, so shouldn't be tackled unless the benefit is worth the jump . While I have come across successful uses of CQRS, so far the majority of cases I've run into have not been so good ... Despite these benefits, you should be very cautious about using CQRS . ...adding CQRS to such a system can add significant complexity . My emphasis above. If your application is using CQRS for everything, than it's not CQRS that's over-engineered, it's your application. This is an excellent pattern to solve some specific problems, especially high performance/high volume applications where write concurrency may be a major concern. And it may not be the entire application, but only a small part of it. A live example from my work, we employ CQRS in the Order Entry system, where we cannot lose any orders, and we do have spikes where thousands of orders come at the same time from different sources, in some specific hours. CQRS helped us to keep the system alive and responsive, while allowing us to scale well the backend systems to process these orders faster when needed (more backend servers) and slower/cheaper when not needed. CQRS is perfect for the problem where you have several actors collaborating in the same set of data or writing to the same resource. Other than that, spreading a pattern made to solve a single problem to all your problems will just create more problems to you. Some other useful links: http://udidahan.com/2011/04/22/when-to-avoid-cqrs/ http://codebetter.com/gregyoung/2012/09/09/cqrs-is-not-an-architecture-2/ | {
"source": [
"https://softwareengineering.stackexchange.com/questions/339359",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7369/"
]
} |
339,384 | For example, to keep a CPU on in Android, I can use code like this: PowerManager powerManager = (PowerManager)getSystemService(POWER_SERVICE);
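// PARTIAL_WAKE_LOCK keeps the CPU running even while the screen is allowed to turn off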
WakeLock wakeLock = powerManager.newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, "abc");
wakeLock.acquire(); but I think the local variables powerManager and wakeLock can be eliminated: ((PowerManager)getSystemService(POWER_SERVICE))
.newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, "MyWakelockTag")
.acquire(); A similar situation appears with an iOS alert view, e.g. going from UIAlertView *alert = [[UIAlertView alloc]
initWithTitle:@"my title"
message:@"my message"
delegate:nil
cancelButtonTitle:@"ok"
otherButtonTitles:nil];
[alert show];
-(void)alertView:(UIAlertView *)alertView clickedButtonAtIndex:(NSInteger)buttonIndex{
[alertView release];
} to: [[[UIAlertView alloc]
initWithTitle:@"my title"
message:@"my message"
delegate:nil
cancelButtonTitle:@"ok"
otherButtonTitles:nil] show];
-(void)alertView:(UIAlertView *)alertView clickedButtonAtIndex:(NSInteger)buttonIndex{
[alertView release];
} Is it a good practice to eliminate a local variable if it is just used once in the scope? | Code is read much more often than it is written, so you should take pity on the poor soul who will have to read the code six months from now (it may be you) and strive for the clearest, easiest to understand code. In my opinion, the first form, with local variables, is much more understandable. I see three actions on three lines, rather than three actions on one line. And if you think you are optimizing anything by getting rid of local variables, you are not. A modern compiler will put powerManager in a register 1 , whether a local variable is used or not, to call the newWakeLock method. The same is true for wakeLock . So you end up with the same compiled code in either case. 1 If you had a lot of intervening code between the declaration and the use of the local variable, it might go on the stack, but it is a minor detail. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/339384",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/196142/"
]
} |
339,495 | A recent bug fix required me to go over code written by other team members, where I found this (it's C#): return (decimal)CostIn > 0 && CostOut > 0 ? (((decimal)CostOut - (decimal)CostIn) / (decimal)CostOut) * 100 : 0; Now, allowing there's a good reason for all those casts, this still seems very difficult to follow. There was a minor bug in the calculation and I had to untangle it to fix the issue. I know this person's coding style from code review, and his approach is that shorter is almost always better. And of course there's value there: we've all seen unnecessarily complex chains of conditional logic that could be tidied with a few well-placed operators. But he's clearly more adept than me at following chains of operators crammed into a single statement. This is, of course, ultimately a matter of style. But has anything been written or researched on recognizing the point where striving for code brevity stops being useful and becomes a barrier to comprehension? The reason for the casts is Entity Framework. The db needs to store these as nullable types. Decimal? is not equivalent to Decimal in C# and needs to be cast. | To answer your question about extant research But has anything been written or researched on recognizing the point where striving for code brevity stops being useful and becomes a barrier to comprehension? Yes, there has been work in this area. To get an understanding of this stuff, you have to find a way to compute a metric so that comparisons can be made on a quantitative basis (rather than just performing the comparison based on wit and intuition, as the other answers do). One potential metric that has been looked at is Cyclomatic Complexity ÷ Source Lines of Code ( SLOC ) In your code example, this ratio is very high, because everything has been compressed onto one line. The SATC has found the most effective evaluation is a combination of size and [Cyclomatic] complexity. The modules with both a high complexity and a large size tend to have the lowest reliability. Modules with low size and high complexity are also a reliability risk because they tend to be very terse code, which is difficult to change or modify. Link Here are a few references if you are interested: McCabe, T. and A. Watson (1994), Software Complexity (CrossTalk: The Journal of Defense Software Engineering). Watson, A. H., & McCabe, T. J. (1996). Structured Testing: A Testing Methodology Using the Cyclomatic Complexity Metric (NIST Special Publication 500-235). Retrieved May 14, 2011, from McCabe Software web site: http://www.mccabe.com/pdf/mccabe-nist235r.pdf Rosenberg, L., Hammer, T., Shaw, J. (1998). Software Metrics and Reliability (Proceedings of IEEE International Symposium on Software Reliability Engineering). Retrieved May 14, 2011, from Penn State University web site: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.104.4041&rep=rep1&type=pdf My opinion and solution Personally, I have never valued brevity, only readability. Sometimes brevity helps readibility, sometimes it does not. What is more important is that you are writing Really Obvious Code(ROC) instead of Write-Only Code (WOC). Just for fun, here's how I would write it, and ask members of my team to write it: if ((costIn <= 0) || (costOut <= 0)) return 0;
decimal changeAmount = costOut - costIn;
decimal changePercent = changeAmount / costOut * 100;
return changePercent; Note also the introduction of the working variables has the happy side effect of triggering fixed-point arithmetic instead of integer arithmetic, so the need for all those casts to decimal is eliminated. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/339495",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/22742/"
]
} |
339,734 | In JS you can return a Boolean having custom properties. Eg. when Modernizr tests for video support it returns true or false but the returned Boolean (Bool is first class object in JS) has properties specifying what formats are supported. At first it surprised me a bit but then I began to like the idea and started to wonder why it seems to be used rather sparingly? It looks like an elegant way of dealing with all those scenarios where you basically want to know if something is true or false but you may be interested in some additional info that you can define without defining a custom return object or using a callback function prepared to accept more parameters. This way you retain a very universal function signature without compromising capacity for returning more complex data. There are 3 arguments against it that I can imagine: It's a bit uncommon/unexpected when it's probably better for any interface to be clear and not tricky. This may be a straw man argument, but with it being a bit of an edge case I can imagine it quietly backfires in some JS optimizer, uglifier, VM or after a minor clean up language specification change etc. There is better - concise, clear and common - way of doing exactly the same. So my question is are there any strong reasons to avoid using Booleans with additional properties? Are they a trick or a treat? Plot twists warning. Above is the original question in full glory. As Matthew Crumley and senevoldsen both pointed it is based on a false (falsy?) premise. In fine JS tradition what Modernizr does is a language trick and a dirty one. It boils down to JS having a primitive bool which if set to false will remain false even after TRYING to add props (which fails silently) and a Boolean object which can have custom props but being an object is always truthy. Modernizr returns either boolean false or a truthy Boolean object. My original question assumed the trick works differently and so most popular answers deal with (perfectly valid) coding standards aspect. However I find the answers debunking the whole trick most helpful (and also the ultimate arguments against using the method) so I'm accepting one of them. Thanks to all the participants! | Congratulations, you've discovered objects. The reason not to do this is called the principle of least astonishment . Being surprised by a design is not a good thing. There is nothing wrong with bundling together this information but why would you want to hide it in a Bool? Put it in something you'd expect to have all this info. Bool included. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/339734",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/196173/"
]
} |
339,794 | I am a budding software engineer (now a sophomore, major in CS) and I really struggle to understand other people's programs. I want to know if this skill (or lack of it) can be a handicap for me, and if yes, then how can I develop it? | It is essential. The way you develop it is by writing your own code (lots of it), and yes, struggling through reading other people's code. The problem, of course, is that not everyone thinks the way you do. I was in a first-year Java class a long time ago, and we were given an assignment. Contrary to what I believed (which was that the answers would converge on three or four common solutions), everyone in the class had a unique solution to the assignment. It follows that you should be reading good code. This is one of the reasons that Design Patterns have become so popular, and why you should study them. Design Patterns provide a common vocabulary for programmers to communicate with, and tune your mind for "better" ways to solve computing problems. You should also study algorithms and data structures. Corollary: You should always be striving to write code that other developers can readily understand. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/339794",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/258188/"
]
} |
339,807 | How do you collaboratively develop software in a team of 4-5 developers without acceptance criteria, without knowing what the testers will be testing for and with multiple(2-3) people acting as product owner. All we have is a sketchy 'spec' with some screen shots and a few bullet points. We've been told that it will be easy so these things are not required. I'm at a loss on how to proceed. Additional Info We have been given a hard deadline. The customer is internal, we have a product owner in theory but at least 3 people testing the software could fail a work item simply because it doesn't work how they think it should work and there is little to no transparency of what they expect or what they are actually testing for until it has failed. product owner(s) are not readily available to answer questions or give feedback. There are no regular scheduled meetings or calls with them and feedback can take days. I can understand that we cannot have a perfect spec but i thought it would be 'normal' to have acceptance criteria for the things we are actually working on in each sprint. | An iterative process will achieve this nicely, without detailed specifications. Simply create a sketchy prototype, ask for feedback from the customer, make changes based on the feedback, and repeat this process until the application is completed. Whether the customer is patient enough to do it this way is a different question. Some clients and developers actually prefer this process; since the customer is always hands-on, he'll (eventually) get exactly what he wants. It should go without saying that you're not going to work a fixed-cost or fixed time contract this way. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/339807",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/94888/"
]
} |
339,866 | I'm comparing two technologies in order to reach a recommendation for which one should be used by a company. Technology A's code is interpreted while technology B's code is compiled to machine code. In my comparison I state that tech B in general would have better performance since it doesn't have the additional overhead of the interpretation process. I also state that since a program could be written in many ways it is still possible a program written in tech A could outperform one written in tech B. When I submitted this report for review, the reviewer stated that I offered no clear reason why in general the overhead of the interpretation process would be large enough that we could conclude that tech B's performance would be better. So my question is can we ever say anything about the performance of compiled/interpreted technologies? If we can say compiled is generally faster then interpreted, how could I convince the reviewer of my point? | No. In general, the performance of a language implementation is primarily dependent on the amount of money, resources, manpower, research, engineering, and development spent on it. And specifically, the performance of a particular program is primarily dependent on the amount of thought put into its algorithms. There are some very fast interpreters out there, and some compilers that generate very slow code. For example, one of the reasons Forth is still popular, is because in a lot of cases, an interpreted Forth program is faster than the equivalent compiled C program, while at the same time, the user program written in Forth plus the Forth interpreter written in C is smaller than the user program written in C. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/339866",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/248193/"
]
} |
339,884 | When creating time estimates for tickets should the time taken for testers (QAs) be included in a tickets estimate? We have previously always estimated without the testers time but we are talking about always including it. It makes sense for our current sprint, the last before a release, as we need to know the total time tickets will take with one week to go. I always understood estimation was just for developer time as that tends to be the limiting resource in teams. A colleague is saying that wherever they have worked before tester time has also been included. To be clear, this is for a process where developers are writing unit, integration and UI tests with good coverage. | My recommendation: You either include testing time in the ticket, or add a ticket to represent the testing task itself. Any other approach causes you to underestimate the real work needed. While developer time is often a bottleneck, in my experience, there are many teams constrained on test. Assuming the limiting resource is one or the other without evidence, can bite you. As your colleague, I haven't seen a successful organization that doesn't take testing time into account. Addendum per your clarification: Even if devs write automated tests, particularly unit tests (integration tests do better), they are insufficient to properly test. If there is QA people involved, their time need to be estimated, one way or another. Only if you are deciding to remove QA people from payroll, then their work time has effectively vanished and you can remove it from the estimation. But this would have side-effects that are easy to ignore. And you may still be missing performance, stress, security and acceptance testing. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/339884",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/259112/"
]
} |
339,966 | I have a question similar to this other question Why aren't design patterns added to the languages constructs? Why isn't there java.util.Singleton and then we inherit it? The boilerplate code seems to be always the same. class Singleton {
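// eager initialization: the single shared instance is created when the class is loaded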
private static final Singleton s = new Singleton();
public static Singleton getInstance() {
return s;
}
protected Singleton() {
}
}
class XSingleton extends Singleton {
} Now if there was a Singleton built-in to Java then we wouldn't have to include the same boiler-plate over and over in projects. We could just inherit the code that makes the Singleton and just code our specific in our XSingleton that extends Singleton . I suppose the same goes for other design patterns e.g. MVC and similar. Why aren't more design pattern built into the standard libraries? | I want to challenge your basic premise, namely that Design Patterns aren't added to the standard library. For example, java.util.Iterator<E> is in the standard library and is an implementation of the Iterator Design Pattern . java.util.Observable / java.util.Observer is an implementation of the Publish/Subscribe Design Pattern . java.lang.reflect.Proxy is an implementation of the Proxy Design Pattern . Looking at other languages, e.g. Ruby has the delegate and forwardable libraries, both implementations of the Proxy Design Pattern, the observer library, an implementation of the Publish/Subscribe Pattern, and the singleton library, an implementation of the Singleton Design Pattern . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/339966",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12893/"
]
} |
340,047 | I recently started working with GitFlow model as implemented by bitbucket. And there is one thing that is not completely clear to me. We try to regularly address our technical debt by backlogging, planning, and implementing the refactoring tasks. Such refactoring branches end with pull-requests that are merged in develop . My question is where do the refactoring branches belong in GitFlow ? Using feature prefix seems the most logical, however it does not feel entirely right, because refactoring does not add any new functionality. However using bugfix prefix seems not right as well as there is no actual bug refactoring fixes. Creating a custom prefix on the other hand seems like complicating if not over-engineering the things. Did you have such situation? Which practice do you use to address this? Please explain why. | Refactoring work should go in a feature branch. The prefix "feature" is just a word to describe a discrete programming task, you could choose any word you like, any branch from development is either a "feature" branch or a "release" branch Adding a new prefix such as "refactoring" is problematic. As you will often do some refactoring when adding a feature, you are simply giving yourself a naming problem and adding confusion. ie. "some of our feature branches are called 'refactoring', no they don't contain all the refactoring work and sometimes they have bug fixes or features in them' similarly "hotfix" branches are not called hotfix because they contain hotfixes, but because they branch from master rather than develop | {
"source": [
"https://softwareengineering.stackexchange.com/questions/340047",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/259309/"
]
} |
340,494 | We have here a large legacy code base with bad code you can't imagine. We defined now some quality standards and want to get those fulfilled in either completely new codebase, but also if you touch the legacy code. And we enforce those with Sonar (code analysing tool), which has some thousands violations already. Now the discussion came up to lower those violations for the legacy. Because it's legacy. The discussion is about rules of readability. Like how many nested ifs/for.. it can have. Now how can I argue against lowering our coding quality on legacy code? | Code is just a means to an end. What that end might be varies, but typically it's profit. If it is profit; then to successfully argue anything you want to show that it will improve profit - e.g. by increasing sales, reducing maintenance costs, reducing development costs, reducing wages, reducing retraining costs, etc. If you can't/don't show that it will improve profit; then you're mostly just saying it'd be a fun way to flush money down the toilet. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/340494",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/19772/"
]
} |
340,531 | Is a multi-tenant database: A DB server that has a different (identical) database/schema for each customer/tenant?; or A DB server that has a database/schema where customers/tenants share records inside of the same tables? For instance, under Option #1 above, I might have a MySQL server at, say, mydb01.example.com , and it might have a customer1 database inside of it. This customer1 database might have, say, 10 tables that power my application for that particular customer (Customer #1). It might also have a customer2 database with the exact same 10 tables in it, but only containing data for Customer #2. It might have a customer3 database, a customer4 database, and so on. In Option #2 above, there would only be a single database/schema, say, myapp_db , again with 10 tables in it (same ones as above). But here, the data for all the customers exists inside those 10 tables, and they therefore "share" the tables. And at the application layer, logic and security control which customers have access to which records in those 10 tables, and great care is taken to ensure that Customer #1 never logs into the app and sees Customer #3's data, etc. Which of these paradigms constitutes a traditional "multi-tenant" DB? And if neither, then can someone provide me an example (using the scenarios described above) of what a multi-tenant DB is? | Which of these paradigms constitutes a traditional "multi-tenant" DB Both concepts are called multi-tenancy, since it is just a logical concept "in which a single instance of software runs on a server and serves multiple tenants" (from Wikipedia ). But how you implement this concept "physically" is up to you. Of course, the application needs a database concept which allows to separate the data of different tenants, and the idea of multi-tenancy is to have some server resources shared (at least the hardware) for better utilization of the resources, and easier administration. So a "multi tenant DB" is one which supports this directly , where parts of the db model or tables are shared. To be precise, it is possible to build a multi-tenant application with a non-multi tenant DB, providing an individual DB instance per client. However, this precludes to share any DB resources directly between tenants, and the application layer has to make sure to connect the right tenant to the right database. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/340531",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/154753/"
]
} |
340,550 | I am doing database programming using Java with SQLite. I have found that only one connection at a time to the database has write capabilities, while many connections at once have read capability. Why was the architecture of SQLite designed like this? As long as the two things that are being written are not being written to the same place in the database, why can't two writes occur at once? | Because "multiple concurrent writes" is much, much harder to accomplish in the core database engine than single-writer, multiple-reader. It's beyond SQLite's design parameters, and including it would likely subvert SQLite's delightfully small size and simplicity. Supporting high degrees of write concurrency is a hallmark of large database engines such as DB2, Oracle, SQL Server, MySQL, PostgreSQL, NonStop SQL, and Sybase. But it's technically hard to accomplish, requiring extensive concurrency control and optimization strategies such as database, table, and row locking or, in more modern implementations, multi-version concurrency control . The research on this problem/requirement is voluminous and goes back decades . SQLite has a very different design philosophy from most of those server-centric DBMSs that support multiple writers. It's designed to bring the power of SQL and the relational model to individual applications, and indeed to be embeddable within each application. That goal requires significant tradeoffs. Not adding the significant infrastructure and overhead needed to handle multiple concurrent writers is one of those. The philosophy can be summed up by a statement on SQLite's appropriate uses page: SQLite does not compete with client/server databases. SQLite competes with fopen(). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/340550",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/258776/"
]
} |
340,705 | We had two major dependency-related crises with two different code bases (Android, and a Node.js web app). The Android repo needed to migrate from Flurry to Firebase, which required updating the Google Play Services library four major versions. A similar thing happened with our Heroku-hosted Node app where our production stack (cedar) was deprecated and needed to be upgraded to cedar-14. Our PostgreSQL database also needed to update from 9.2 to 9.6. Each of these apps' dependencies sat stale for almost two years, and when some were deprecated and we reached the 'sunset' period, it has been a major headache to update them, or replace them. I've spent over 30 hours over the past month or two slowly resolving all of the conflicts and broken code. Obviously letting things sit for two years is far too long. Technology moves quickly, especially when you're using a platform provider like Heroku. Let's assume that we have a full-fledged test suite, and a CI process like Travis CI, which takes a lot of the guesswork out of updating. E.g. if a function was removed after an upgrade, and you were using it, your tests would fail. How often should dependencies be updated, or when should dependencies be updated? We updated because we were forced to, but it seems that some kind of pre-emptive approach would be better. Should we update when minor versions are released? Major versions? Every month if updates are available? I want to avoid a situation like what I just experienced at all costs. PS - for one of my personal Rails projects, I use a service called Gemnasium which tracks your dependencies so that you can be notified of e.g. security vulnerabilities. It's a great service, but we would have to manually check dependencies for the projects I mentioned. | You should generally upgrade dependencies when: It's required There's an advantage to do so Not doing so is disadvantageous (These are not mutually exclusive.) Motivation 1 ("when you have to") is the most urgent driver. Some component or platform on which you depend (e.g. Heroku) demands it, and you have to fall in line. Required upgrades often cascade out of other choices; you decide to upgrade to PostgreSQL version such-and-so. Now you have to update your drivers, your ORM version, etc. Upgrading because you or your team perceives an advantage in doing so is softer and more optional. More of a judgment call: "Is the new feature, ability, performance, ... worth the effort and dislocation bringing it in will cause?" In Olden Times, there was a strong bias against optional upgrades. They were manual and hard, there weren't good ways to try them out in a sandbox or virtual environment, or to roll the update back if it didn't work out, and there weren't fast automated tests to confirm that updates hadn't "upset the apple cart." Nowadays the bias is toward much faster, more aggressive update cycles. Agile methods love trying things; automated installers, dependency managers, and repos make the install process fast and often almost invisible; virtual environments and ubiquitous version control make branches, forks, and rollbacks easy; and automated testing let us try an update then easily and substantial evaluate "Did it work? Did it screw anything up?" The bias has shifted wholesale, from "if it ain't broke, don't fix it" to the "update early, update often" mode of continuous integration and even continuous delivery . Motivation 3 is the softest. 
User stories don't concern themselves with "the plumbing" and never mention "and keep the infrastructure no more than N releases behind the current one." The disadvantages of version drift (roughly, the technical debt associated with falling behind the curve) encroach silently, then often announce themselves via breakage. "Sorry, that API is no longer supported!" Even within Agile teams it can be hard to motivate incrementalism and "staying on top of" the freshness of components when it's not seen as pivotal to completing a given sprint or release. If no one advocates for updates, they can go untended. That wheel may not squeak until it's ready to break, or even until it has broken. From a practical perspective, your team needs to pay more attention to the version drift problem. 2 years is too long. There is no magic. It's just a matter of "pay me now or pay me later." Either address the version drift problem incrementally, or suffer and then get over bigger jolts every few years. I prefer incrementalism, because some of the platform jolts are enormous. A key API or platform you depend on no longer working can really ruin your day, week, or month. I like to evaluate component freshness at least 1-2 times per year. You can schedule reviews explicitly, or let them be organically triggered by the relatively metronomic, usually annual update cycles of major components like Python, PostgreSQL, and node.js. If component updates don't trigger your team very strongly, freshness checks on major releases, at natural project plateaus, or every k releases can also work. Whatever puts attention to correcting version drift on a more regular cadence. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/340705",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/109112/"
]
} |
340,724 | Assuming an IReader interface, an implementation of the IReader interface ReaderImplementation, and a class ReaderConsumer that consumes and processes data from the reader. public interface IReader
{
object Read()
} Implementation public class ReaderImplementation
{
...
public object Read()
{
...
}
} Consumer: public class ReaderConsumer()
{
public string location
// constructor
public ReaderConsumer()
{
...
}
// read some data
public object ReadData()
{
IReader reader = new ReaderImplementation(this.location)
data = reader.Read()
...
return processedData
}
} For testing ReaderConsumer and the processing I use a mock of IReader. So ReaderConsumer becomes: public class ReaderConsumer()
{
private IReader reader = null
public string location
// constructor
public ReaderConsumer()
{
...
}
// mock constructor
public ReaderConsumer(IReader reader)
{
this.reader = reader
}
// read some data
public object ReadData()
{
try
{
if(this.reader == null)
{
this.reader = new ReaderImplementation(this.location)
}
data = reader.Read()
...
return processedData
}
finally
{
this.reader = null
}
}
} In this solution mocking introduces an if sentence for the production code since only the mocking constructor supplies an instances of the interface. During writing this I realise that the try-finally block is somewhat unrelated since it is there to handle the user changing the location during application run time. Overall it feels smelly, how might it be handled better? | Instead of initializing the reader from your method, move this line {
this.reader = new ReaderImplementation(this.location)
} Into the default parameterless constructor. public ReaderConsumer()
{
this.reader = new ReaderImplementation(this.location)
}
public ReaderConsumer(IReader reader)
{
this.reader = reader
} There is no such thing as a "mock constructor", if your class has a dependency that it requires in order to work, then the constructor should either be provided that thing, or create it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/340724",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/260284/"
]
} |
340,803 | I'm developing a physics simulation, and as I'm rather new to programming, I keep running into problems when producing large programs (memory issues mainly). I know about dynamic memory allocation and deletion (new / delete, etc), but I need a better approach to how I structure the program. Let's say I'm simulating an experiment which is running for a few days, with a very large sampling rate. I'd need to simulate a billion samples, and run over them. As a super-simplified version, we'll say a program takes voltages V[i], and sums them in fives: i.e. NewV[0] = V[0] + V[1] + V[2] + V[3] + V[4] then NewV[1] = V[1] + V[2] + V[3] + V[4] + V[5] then NewV[2] = V[2] + V[3] + V[4] + V[5] + V[6]
...and this goes on for a billion samples. In the end, I'd have V[0], V[1], ..., V[1000000000], when instead the only ones I'd need to store for the next step are the last 5 V[i]s. How would I delete / deallocate part of the array so that the memory is free to use again (say V[0] after the first part of the example where it is no longer needed)? Are there alternatives to how to structure such a program? I've heard about malloc / free, but heard that they should not be used in C++ and that there are better alternatives. Thanks very much! tldr; what to do with parts of arrays (individual elements) I don't need anymore that are taking up a huge amount of memory? | What you describe, "smoothing by fives", is a finite impulse response (FIR) digital filter. Such filters are implemented with circular buffers. You keep only the last N values, you keep an index into the buffer that tells you where the oldest value is, you overwrite the current oldest value with the newest one at each step, and you step the index, circularly, each time. You keep your collected data, that you are going to crunch down, on disk. Depending on your environment, this may be one of those places where you're better off getting experienced help. At a university, you put a note up on the bulletin board in the Computer Science Department, offering student wages (or even student consulting rates) for a few hours of work, to help you crunch your data. Or maybe you offer Undergraduate Research Opportunity points. Or something. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/340803",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/260412/"
]
} |
340,821 | I am designing a delivery app where a Requester requests a thing from his mobile and the request is sent to multiple Agents on the field. When one Agent accepts the request, he will be assigned to the Requester and is expected to make his delivery. I want a few Suggestions for Architecting this system. I am using the .Net Stack. Here are a few challenges: 1) At any point in time, only 5 agents will be notified. In case none of them accept the request, the request will be sent to 5 more. The agents who were notified for a particular request will be recorded in the database. In case an agent accepts a request, the database will be updated. In case after 10 seconds none of the agents accept the request, the windows service will send the request to next 5 agents. Question: Will it be right to keep scanning the database with queries every 5 seconds using a windows service to see if the request has been accepted? The System has to be designed for heavy load. 2) The Requester can schedule a request. Question: Will it be right to keep scanning the database with queries every 5 seconds to see if the scheduled time has approached? I am using ASP.Net WEB API to receive info from the mobile, FCM to send notifications to the mobile and windows sevice to acomplish the above 2 tasks. Please recommend an optimal approach, architecture, and technologies to be used to accomplish this. | What you describe, "smoothing by fives", is a finite impulse response (FIR) digital filter. Such filters are implemented with circular buffers. You keep only the last N values, you keep an index into the buffer that tells you where the oldest value is, you overwrite the current oldest value with the newest one at each step, and you step the index, circularly, each time. You keep your collected data, that you are going to crunch down, on disk. Depending on your environment, this may be one of those places where you're better off getting experienced help. At a university, you put a note up on the bulletin board in the Computer Science Department, offering student wages (or even student consulting rates) for a few hours of work, to help you crunch your data. Or maybe you offer Undergraduate Research Opportunity points. Or something. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/340821",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/260436/"
]
} |
340,913 | Since Git is licensed under GPLv2, and, to my understanding, GitHub interacts with Git, shouldn't the whole GitHub codebase be open-sourced in a GPL-compatible license? | 3 reasons why: According to the terms of the GPL, people accessing GitHub via the web is not considered releasing (or propagating in GPLv3 terms), and so GitHub is not required to share their source code. If GitHub was to sell a version of their service (which they might do, I haven't bothered to look) where they send you their software and you run an instance of GitHub internally on your own network, then they might be required to also ship the source code, unless: GitHub may very well be accessing the Git client through command-line invocations, in which case that is considered communicating "at arms-length" , and thus does not make GitHub a derivative work and therefore not subject to the requirements of the GPL. Additionally, GitHub may very well not even be using the Git software and has written their own core "git implementation" and has re-implemented its interfaces to maintain compatibility, in which case again the GPL's requirements would not come into play. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/340913",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/179071/"
]
} |
341,066 | I have recently mastered the Lambda expression that was introduced in java 8. I find that whenever I am using a functional interface, I tend to always use a Lambda expression instead of creating a class that implements the functional interface. Is this considered good practice? Or are their situations where using a Lambda for a functional interface is not appropriate? | There are a number of criteria that should make you consider not using a lambda: Size The larger a lambda gets, the more difficult it makes it to follow the logic surrounding it. Repetition It's better to create a named function for repeated logic, although it's okay to repeat very simple lambdas that are spread apart. Naming If you can think of a great semantic name, you should use that instead, as it adds a lot of clarity to your code. I'm not talking names like priceIsOver100 . x -> x.price > 100 is just as clear as that name. I mean names like isEligibleVoter that replace a long list of conditions. Nesting Nested lambdas are really, really hard to read. Don't go overboard. Remember, software is easily changed. When in doubt, write it both ways and see which is easier to read. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/341066",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/258776/"
]
} |
341,138 | What is the best course of action in TDD if, after implementing the logic correctly, the test still fails (because there is a mistake in the test)? For example, suppose you would like to develop the following function: int add(int a, int b) {
return a + b;
} Suppose we develop it in the following steps: Write test (no function yet): // test1
Assert.assertEquals(5, add(2, 3)); Results in compilation error. Write a dummy function implementation: int add(int a, int b) {
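// deliberately hard-coded: the simplest implementation that makes test1 pass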
return 5;
} Result: test1 passes. Add another test case: // test2 -- notice the wrong expected value (should be 11)!
Assert.assertEquals(12, add(5, 6)); Result: test2 fails, test1 still passes. Write real implementation: int add(int a, int b) {
return a + b;
} Result: test1 still passes, test2 still fails (since 11 != 12 ). In this particular case: would it be better to: correct test2 , and see that it now passes, or delete the new portion of implementation (i.e. go back to step #2 above), correct test2 and let it fail, and then reintroduce the correct implementation (step #4. above). Or is there some other, cleverer way? While I understand that the example problem is rather trivial, I'm interested in what to do in the generic case, which might be more complex than the addition of two numbers. EDIT (In response to the answer of @Thomas Junk): The focus of this question is what TDD suggests in such a case, not what is "the universal best practice" for achieving good code or tests (which might be different than the TDD-way). | The absolutely critical thing is that you see the test both pass and fail. Whether you delete the code to make the test fail then rewrite the code or sneak it off to the clipboard only to paste it back later doesn't matter. TDD never said you had to retype anything. It wants to know the test passes only when it should pass and fails only when it should fail. Seeing the test both pass and fail is how you test the test. Never trust a test you've never seen do both. Refactoring Against The Red Bar gives us formal steps for refactoring a working test: Run the test Note the green bar Break the code being tested Run the test Note the red bar Refactor the test Run the test Note the red bar Un-break the code being tested Run the test Note the green bar However, we aren't refactoring a working test. We have to transform a buggy test. One concern is code that was introduced while only this test covered it. Such code should be rolled back and reintroduced once the test is fixed. If that isn't the case, and code coverage isn't a concern due to other tests covering the code, you can transform the test and introduce it as a green test. Here, code is also being rolled back but just enough to cause the test to fail. If that's not enough to cover all the code introduced while only covered by the buggy test we need a bigger code roll back and more tests. Introduce a green test Run the test Note the green bar Break the code being tested Run the test Note the red bar Un-break the code being tested Run the test Note the green bar Breaking the code can be commenting out code or moving it elsewhere only to paste it back later. This shows us the scope of code the test covers. For these last two runs you're right back into the normal red green cycle. You're just pasting instead of typing to un-break the code and make the test pass. So be sure you're pasting only enough to make the test pass. The overall pattern here is to see the color of the test change the way we expect. Note that this creates a situation where you briefly have an un-trusted green test. Be careful about getting interrupted and forgetting where you are in these steps. My thanks to RubberDuck for the Embracing the Red Bar link. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/341138",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/219749/"
]
} |
341,144 | For simplicity's sake, let's say I'm maintaining a fork of Bootstrap. My changes are specific to my theme, so they'll never be merged back into the original Bootstrap project, but I still want to make sure that my project has the latest changes. Over the life of the project, I'll merge in Bootstrap's changes several times. What's the appropriate way to add in Bootstrap's changes to my fork knowing that I'll be doing so multiple times throughout the life of the project? Would constantly rebasing create issues later down the line, or would that always be the most appropriate option (assuming I want 100% of the changes in Bootstrap). How would I approach a situation where I only wanted some of the changes? Would this prevent me from rebasing in the future? | The absolutely critical thing is that you see the test both pass and fail. Whether you delete the code to make the test fail then rewrite the code or sneak it off to the clipboard only to paste it back later doesn't matter. TDD never said you had to retype anything. It wants to know the test passes only when it should pass and fails only when it should fail. Seeing the test both pass and fail is how you test the test. Never trust a test you've never seen do both. Refactoring Against The Red Bar gives us formal steps for refactoring a working test: Run the test Note the green bar Break the code being tested Run the test Note the red bar Refactor the test Run the test Note the red bar Un-break the code being tested Run the test Note the green bar However, we aren't refactoring a working test. We have to transform a buggy test. One concern is code that was introduced while only this test covered it. Such code should be rolled back and reintroduced once the test is fixed. If that isn't the case, and code coverage isn't a concern due to other tests covering the code, you can transform the test and introduce it as a green test. Here, code is also being rolled back but just enough to cause the test to fail. If that's not enough to cover all the code introduced while only covered by the buggy test we need a bigger code roll back and more tests. Introduce a green test Run the test Note the green bar Break the code being tested Run the test Note the red bar Un-break the code being tested Run the test Note the green bar Breaking the code can be commenting out code or moving it elsewhere only to paste it back later. This shows us the scope of code the test covers. For these last two runs you're right back into the normal red green cycle. You're just pasting instead of typing to un-break the code and make the test pass. So be sure you're pasting only enough to make the test pass. The overall pattern here is to see the color of the test change the way we expect. Note that this creates a situation where you briefly have an un-trusted green test. Be careful about getting interrupted and forgetting where you are in these steps. My thanks to RubberDuck for the Embracing the Red Bar link. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/341144",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/260876/"
]
} |
341,179 | I just realized that in Python, if one writes for i in a:
i += 1 The elements of the original list a will actually not be affected at all, since the variable i turns out to just be a copy of the original element in a . In order to modify the original element, for index, i in enumerate(a):
a[index] += 1 would be needed. I was really surprised by this behavior. This seems to be very counterintuitive, seemingly different from other languages and has resulted in errors in my code that I had to debug for a long while today. I've read Python Tutorial before. Just to be sure, I checked the book again just now, and it doesn't even mention this behavior at all. What is the reasoning behind this design? Is it expected to be a standard practice in a lot of languages so that the tutorial believes that the readers should get it naturally? In what other languages is the same behavior on iteration present, that I should pay attention to in the future? | I already answered a similar question lately and it's very important to realize that += can have different meanings: If the data type implements in-place addition (i.e. has a correctly working __iadd__ function) then the data that i refers to is updated (doesn't matter if it's in a list or somewhere else). If the data type doesn't implement an __iadd__ method the i += x statement is just syntactic sugar for i = i + x , so a new value is created and assigned to the variable name i . If the data type implements __iadd__ but it does something weird. It could be possible that it's updated ... or not - that depends on what is implemented there. Pythons integers, floats, strings don't implement __iadd__ so these will not be updated in-place. However other data types like numpy.array or list s implement it and will behave like you expected. So it's not a matter of copy or no-copy when iterating (normally it doesn't do copies for list s and tuple s - but that as well depends on the implementation of the containers __iter__ and __getitem__ method!) - it's more a matter of the data type you have stored in your a . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/341179",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/106675/"
]
} |
341,576 | I am working at a robotics startup on a path coverage team and after submitting a pull request, my code gets reviewed. My teammate, who has been on the team for more than a year, has made some comments to my code that suggest I do a lot more work than I believe to be necessary. No, I am not a lazy developer. I love elegant code that has good comments, variable names, indentation and handles cases properly. However, he has a different type of organization in mind that I don't agree with. I'll provide an example: I had spent a day writing test cases for a change to a transition finding algorithm that I made. He had suggested that I handle an obscure case that is extremely unlikely to happen--in fact I'm not sure it is even possible for it to occur. The code that I made already works in all of our original test cases and some new ones that I found. The code that I made already passes our 300+ simulations that are run nightly. However, to handle this obscure case would take me 13 hours that could better be spent trying to improve the performance of the robot. To be clear, the previous algorithm that we had been using up until now also did not handle this obscure case and not once, in the 40k reports that have been generated, has it ever occurred. We're a startup and need to develop the product. I have never had a code review before and I'm not sure if I'm being too argumentative; should I just be quiet and do what he says? I decided to keep my head down and just make the change even though I strongly disagree that it was a good use of time. I respect my co-worker and I acknowledge him as an intelligent programmer. I just disagree with him on a point and don't know how to handle disagreement in a code review. I feel that the answer I chose meets this criteria of explaining how a junior developer can handle disagreement in a code review. | an obscure case that is extremely unlikely to happen--in fact I'm not sure it is even possible to occur Not having untested behaviors in code can be very important. If a piece of code is run e.g. 50 times a second, a one in a million chance will happen approximately every 5.5 hours of runtime. (In your case, the odds seem lower.) You may talk about the priorities with your manager (or whoever is a more senior person in charge for the unit you work in). You will better understand whether e.g. working on code performance or code being bullet-proof is the top priority, and how improbable that corner case may be. Your reviewer may also have a skewed idea of priorities. Having talked with the person in charge, you'll have an easier time (dis)agreeing with your reviewer suggestions, and will have something to refer to. It is always a good idea to have more than one reviewer. If your code is only reviewed by one colleague, ask someone else who knows that code, or the codebase in general, to take a look. A second opinion, again, will help you to more easily (dis)agree with the reviewer's suggestions. Having a number of recurring comments during several code reviews usually points to a bigger thing not being clearly communicated, and the same issues crop up again and again. Try to find out that bigger thing, and discuss it directly with the reviewer. Ask enough why questions. It helped me a lot when I started the practice of code reviews. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/341576",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/197476/"
]
} |
341,732 | I'm at a bit of a crossroads with some API design for a client (JS in a browser) to talk to a server. We use HTTP 409 Conflict to represent the failing of an action because of a safety lock in effect. The satefy lock prevents devs from accidentally making changes in our customers' production systems. I've been tasked with handling 409s a bit more gracefully on the client to indicate why a particular API call failed. My solution was to wrap the failure handlers of any of our AJAX calls which will display a notification on the client when something fails due to 409 - this is all fine and works well alongside other 4XX and 5XX errors which use the same mechanism. A problem has arisen where one of our route handlers responds with 409s when encountering a business logic error - my AJAX wrapper reports that the safety lock is on, whilst the client's existing failure handler reports what (it thinks) the problem is based on the body of the response. A simple solution would be to change either the handler's response or the status code we use to represent the safety lock. Which brings me to my crossroad: should HTTP status codes even be used to represent business logic errors? This question addresses the same issue I am facing but it did not gain much traction. As suggested in the linked answer, I'm leaning towards using HTTP 200 OK with an appropriate body to represent failure within the business logic. Does anyone have any strong opinions here? Is anyone able to convince me this is the wrong way to represent failure? | Kasey covers the main point. The key idea in any web api: you are adapting your domain to look like a document store. GET/PUT/POST/DELETE and so on are all ways of interacting with the document store. So a way of thinking about what codes to use, is to understand what the analogous operation is in a document store, and what this failure would look like in that analog. 2xx is completely unsuitable The 2xx (Successful) class of status code indicates that the client's
request was successfully received, understood, and accepted. 5xx is also unsuitable The 5xx (Server Error) class of status code indicates that the server is aware that it has erred In this case, the server didn't make a mistake; it's aware that you aren't supposed to modify that resource that way at this time. Business logic errors (meaning that the business invariant doesn't allow the proposed edit at this time) are probably a 409 The 409 (Conflict) status code indicates that the request could not
be completed due to a conflict with the current state of the target
resource. This code is used in situations where the user might be
able to resolve the conflict and resubmit the request. The server
SHOULD generate a payload that includes enough information for a user
to recognize the source of the conflict. Note this last bit -- the payload of the 409 response should be communicating information to the consumer about what has gone wrong, and ideally includes hypermedia controls that lead the consumer to the resources that can help to resolve the conflict. My solution was to wrap the failure handlers of any of our AJAX calls which will display a notification on the client when something fails due to 409 - this is all fine and works well alongside other 4XX and 5XX errors which use the same mechanism. And I would point to this as the problem; your implementation at the client assumed that the status code was sufficient to define the problem. Instead, your client code should be reviewing the payload, and acting on the information available there. That is, after all, how a document store would do it 409 Conflict
your proposed change has been declined because ${REASON}.
The following resolution protocols are available: ${LINKS[@]}) The same approach with a 400 Bad Request would also be acceptable; which roughly translates to "There was a problem with your request. We can't be bothered to figure out which status code is the best fit, so here you go. See the payload for details." I would use 422. Input is valid so 400 is not the right error code to use The WebDAV specification includes this recommendation The 422 (Unprocessable Entity) status code means the server understands the content type of the request entity (hence a 415 (Unsupported Media Type) status code is inappropriate), and the syntax of the request entity is correct (thus a 400 (Bad Request) status code is inappropriate) but was unable to process the contained instructions. For example, this error condition may occur if an XML request body contains well-formed (i.e., syntactically correct), but semantically erroneous, XML instructions. (Update: this is now part of the HTTP Semantics specification .) I don't believe that's quite a match (although I agree that it sheds some doubt on 400 as an alternative). My interpretation is that 422 means "you've sent the wrong entity" where 409 is "you've sent the entity at the wrong time". Put another way, 422 indicates an issue with the request message considered in isolation, where 409 indicates that the request message conflicts with the current state of the resource. Ben Nadal's discussion of 422 may be useful to consider. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/341732",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/62557/"
]
} |
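To illustrate the point above that the 409 payload itself should describe the conflict, here is a minimal sketch using Flask; the route, the safety_lock_enabled() helper and the field names are invented for the example and are not part of the API under discussion:

from flask import Flask, jsonify

app = Flask(__name__)

def safety_lock_enabled():
    # Stand-in for however the real system exposes the lock state.
    return True

@app.route("/api/things/<int:thing_id>", methods=["PUT"])
def update_thing(thing_id):
    if safety_lock_enabled():
        # 409: the request conflicts with the current state of the resource.
        # The body says *which* conflict occurred, so a generic AJAX error
        # wrapper does not have to guess from the status code alone.
        return jsonify({
            "error": "safety_lock",
            "message": "Changes are blocked while the safety lock is on.",
            "resolution": "/api/safety-lock",  # a link the client can follow
        }), 409
    return jsonify({"id": thing_id, "status": "updated"}), 200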
341,841 | I work in a manufacturing plant that has tasked IT with creating a shop floor scheduling program (that is very badly needed). Based on others experience, would it be better to take less time and build out a basic framework that is usable and then build upon that by adding features or start out by creating a fully implemented solution right out of the gate. I've only been a developer for about a year and don't have very much experience with initial creation of apps of this size. I've been leaning towards the idea that a barebones app is the way to go first due to the extreme need for some type of digital schedule but am concerned that adding random features after the fact could get a little messy. If you were in the same situation what path would you lean towards? | Experience definitely leads toward building something small and simple, and getting it to the users as early as possible. Add features and capabilities as they're requested by the users. Chances are very good (bordering on certain) that what they want/ask for won't resemble what you would have built on your own very much (if at all). As far as things getting messy as you add to your original application: well, this is why Agile (and most similar methodologies) place a strong emphasis on testing and refactoring. Refactoring means cleaning up the code as you make changes, and a solid test suite (that you run every time you make changes) ensures that if/when you introduce bugs you know about them (almost) immediately, so that when you release something to your users you have a reasonable assurance that it actually works. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/341841",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/182466/"
]
} |
341,872 | We are developing a Rest API for eCommerce website which will be consumed by mobile apps. In the home page of an app we need to call multiple resources like Sliders, Top Brands, Best Selling Products, Trending Products etc. Two options to make API calls: Single Call: www.example.com/api/GetAllInHome Multiple Calls: www.example.com/api/GetSliders
www.example.com/api/GetTopBrands
www.example.com/api/GetBestSellingProducts
www.example.com/api/GetTrendingProducts Which is the best approach for rest api design - single or multiple call, explain the pros and cons? Which will take more time to respond to the request? | In Theory the multiple simultaneous calls are more flexible and just as fast. However, in practice if you load a page, and then load each part of that page, displaying loading spinners all over it until you get the results back the result is slow and disjointed. For this reason AJAX requests for data should be used sparingly and only when you have a section of the page which is slow to load or needs to be refreshed on a different cycle to the rest of the page. Say a master/detail display, where you want to select an option from the master and display the corresponding detail without reloading the master. A common design is to keep the separate APIs for coding flexibility and micro-service concerns, but combine the data server side in the website. so that the client needs to make only a single call to its own website. The API calls with appropriate caching should be fast within the data center. Also, consider having NO client API calls at all. simply generate the HTML server side. Although javascript single page app frameworks push you down the api route. It's usually not the optimal approach for high volume e-commerce sites. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/341872",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/164824/"
]
} |
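A minimal sketch of the server-side composition suggested in the answer above, so that the app's home screen needs a single call; the /api/home route and the per-section functions are invented placeholders:

from flask import Flask, jsonify

app = Flask(__name__)

# Each section keeps its own function (and can remain a separate internal
# API or service); only the composition happens here.
def get_sliders():       return [{"id": 1, "image": "banner1.png"}]
def get_top_brands():    return [{"id": 7, "name": "Acme"}]
def get_best_selling():  return [{"id": 42, "name": "Widget"}]
def get_trending():      return [{"id": 99, "name": "Gadget"}]

@app.route("/api/home")
def home():
    # One round trip for the home screen instead of four.
    return jsonify({
        "sliders": get_sliders(),
        "topBrands": get_top_brands(),
        "bestSellingProducts": get_best_selling(),
        "trendingProducts": get_trending(),
    })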
341,921 | Our colleague promotes writing unit tests as actually helping us to refine our design and refactor things, but I do not see how. If I load a CSV file and parse it, how is a unit test (validating the values in the fields) going to help me verify my design? He mentioned coupling and modularity, etc., but to me it does not make much sense - I do not have much theoretical background, though. This is not the same as the question you marked as duplicate; I would be interested in actual examples of how this helps, not just theory saying "it helps". I like the answer below and the comment, but I would like to learn more. | The great thing about unit tests is they allow you to use your code how other programmers will use your code. If your code is awkward to unit test, then it's probably going to be awkward to use. If you can't inject dependencies without jumping through hoops, then your code is probably going to be inflexible to use. And if you need to spend a lot of time setting up data or figuring out what order to do things in, your code under test probably has too much coupling and is going to be a pain to work with. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/341921",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/262087/"
]
} |
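To make the answer above concrete for the CSV case in the question, a small illustrative sketch: a parser that accepts any file-like object is trivial to unit test, and that testability is exactly the design feedback being described. The function and the test are examples, not the questioner's code:

import csv
import io

def parse_prices(source):
    """Parse rows from any file-like object; no hidden file or path coupling."""
    return [(row["product"], float(row["price"])) for row in csv.DictReader(source)]

def test_parse_prices():
    # Because the dependency is injected, the test needs no file on disk.
    fake_file = io.StringIO("product,price\nwidget,9.99\n")
    assert parse_prices(fake_file) == [("widget", 9.99)]

test_parse_prices()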
342,141 | Recently I started working on a project where a very old monolithic application is being migrated into microservice-based architecture. The legacy codebase is very messy ('spaghetti code') and often an apparently-simple function (e.g named as "multiplyValueByTen") later reveals itself as "thousands of lines of validation code involving 10 tables across 3 different schemas". Now my boss is (rightly) asking me to estimate how long would it take to write feature X in the new architecture. But I'm having difficulties coming up with a realistic estimation; often I hugely underestimate the task due to reasons I've stated above and embarrass myself because I can't finish in time. The sensible thing might seem to really get into the code, note every branch and calls to other functions and then estimate the time cost. But there is really a minuscule difference between documenting the old code and actually writing down the new version. How should I approach a scenario like this? While I perfectly understand how legacy code refactoring works, my question is not about "how to do refactor/rewrite?" but about giving a realistic answer to "how long would it take to refactor/rewrite part X?" | Read Bob Martin's "Clean Coder" (and "Clean Code" while you're at it). The following is from memory but I strongly suggest you buy your own copy. What you need to do is a three point weighted average. You do three estimates for each piece of work: a best case scenario - assuming everything goes right (a) a worst case scenario - assuming everything goes wrong (b) the actual guess - what you think it probably will take (c) Your estimate is then (a+b+2c)/4 No it won't be accurate. There are better ways of estimating but this method is quick, easy to understand and mitigates optimism by making you consider the worst case. Yes you will have to explain to your manager that you are unfamiliar with the code and that it is too unpredictable for you to make firm, accurate estimates without spending a long time investigating the code each time to improve the estimate (offer to do this but say you need n days just to give a firm estimate of how many more days it will take). If you are a "JuniorDev" this should be acceptable for a reasonable manager. You should also explain to your manager that your estimates are averaged, based on best case, worst case and probable case and give them your figures which also gives them the error bars. Do NOT negotiate on an estimate - if your manager tries to use the best case for every estimate (they are a fool - but I've met some like that) and then bully / motivate you into trying to hit the deadline, well, they're going to be disappointed sometimes. Keep explaining the rationale behind the estimates, (best case, worst case and probable case) and keep getting close to the weighted average most times and you should be OK. Also, for your own purposes, keep a spreadsheet of your estimates and add your actuals when you've finished. That should give you a better idea of how to adjust your estimates. Edit: My assumptions when I answered this: The OP is a Junior Developer (based on the chosen username). Any advice given is not therefore from the perspective of a Project Manager or Team Lead who may be expected to be able to carry out more sophisticated estimates depending on the maturity of the development environment. The Project Manager has created a Project plan consisting of a fairly large number of tasks planned to take several months to deliver. 
The OP is being asked to provide a number of estimates for the tasks they are assigned to by their Project Manager who wants a reasonably accurate number (not a probability curve :)) to feed into the project plan and use to track progress. OP does not have weeks to produce each estimate and has been burned before by giving over-optimistic estimates and wants a more accurate method than sticking a finger in the air and saying "2 weeks, unless the code is particularly arcane in which case 2 months or more". The three point weighted average works well in this case. It's quick, comprehensible to the non-technical and over several estimates should average out to something approaching accuracy. Especially if OP takes my advice about keeping records of estimates and actuals. When you know what a real-world "Worst case" and "Best case" look like you can feed the actuals into your future estimates and even adjust the estimates for your project manager if the worst case is worse than you thought. Let's do a worked example: Best case, from experience the fastest I've done a really straightforward one was a week start to finish (5 days) Worst case, from experience, there was that time that there were links everywhere and it ended up taking me 6 weeks (30 days) Actual Estimate, it'll probably take me 2 weeks (10 days) 5+30+2x10 = 55 55/4 = 13.75 which is what you tell your PM. Maybe you round up to 14
days. Over time (e.g. ten tasks), it should average out. Don't be afraid to adjust the formula. Maybe half the tasks end up nightmares and only ten percent are easy; so you make the estimate a/10 + b/2 + 2c/5. Learn from your experience. Note, I am not making any assumptions about the quality of the PM. A bad PM will give a short estimate to the project board to get approval and then bully the project team to try and reach the unrealistic deadline they've committed to. The only defense is to keep a record so you can be seen giving your estimates and getting close to them. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/342141",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/201919/"
]
} |
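The weighted average and the worked example from the answer above, written out as a small helper for reference:

def three_point_estimate(best, worst, likely):
    # (a + b + 2c) / 4, as described in the answer
    return (best + worst + 2 * likely) / 4

# Worked example from the answer: 5, 30 and 10 days.
print(three_point_estimate(5, 30, 10))  # -> 13.75, round up to ~14 days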
342,374 | I am a Python programmer primarily who uses pylint for linting source code. I am able to eliminate all of the warnings except one: Invalid name for a constant. Changing the name to all caps fixes it, but am I really supposed to do that? If I do it, I find that my code looks ugly as most of the variables are constant (according to pylint). | You are probably writing code like this: notes_director = argv[1]
chdir(notes_director)
files = glob('*.txt')
rand_file = choice(files)
with open(rand_file) as notes_file:
    points = notes_file.readlines()
rand_point = choice(points) You should move this code into a function: def main():
    notes_director = argv[1]
    chdir(notes_director)
    files = glob('*.txt')
    rand_file = choice(files)
    with open(rand_file) as notes_file:
        points = notes_file.readlines()
    rand_point = choice(points)
# actually call the main function
main() Pylint assumes that code that actually does the work will be inside a function. Because you have this code at the top level of your code instead of inside a function it gets confused. Generally speaking, it is better style to do work inside a function instead of at the top level. This allows you to better organize what you are doing and facilitates reusing it. You should really only have code performing an algorithm outside of a function in a quick and dirty script. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/342374",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/262852/"
]
} |
342,444 | We are starting a new project, from scratch. About eight developers, a dozen or so subsystems, each with four or five source files. What can we do to prevent "header hell", AKA "spaghetti headers"? One header per source file? Plus one per subsystem? Separate typedefs, structs & enums from function prototypes? Separate subsystem internal from subsystem external stuff? Insist that every single file, whether header or source, must be standalone compilable? I am not asking for a "best" way, just pointers as to what to watch out for and what could cause grief, so that we can try to avoid it. This will be a C++ project, but C info would help future readers. | Simple method: One header per source file. If you have a complete subsystem where users are not expected to know about the source files, have one header for the subsystem including all required header files. Any header file should be compilable on its own (or let's say a source file including any single header should compile). It's a pain if, having found which header file contains what I want, I then have to hunt down the other header files. A simple way to enforce this is to have every source file include its header file first (thanks doug65536, I think I do that most of the time without even realising). Make sure you use available tools to keep compile times down - each header must be included only once, use pre-compiled headers to keep compile times down, use pre-compiled modules if possible to keep compile times further down. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/342444",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/979/"
]
} |
342,447 | Problem We have a number of different customers getting a number of different price files; each customer can also have a different style of price file - for example, some customers have just two columns, product and price. Some customers can have up to 20-30 fields, and each of these fields can live in a different table in the database. The number of products within a file also varies: some have only a couple of hundred, some have over 10k. These files then have to be run every hour to keep stock and pricing current. On average there are about 50 different file formats for 180 price files, so in total it is 9000-odd files to be written. Current solution My current solution for this is two tables. One table with all the products within each of the price files along with their prices in various currencies. And another table/view with all the possible product attributes. The primary key is the product code, then other fields, for example short description, long description, alternative product codes, etc. The program then loads both these tables into memory: ConcurrentDictionary<PRODUCT_CODE, ConcurrentDictionary<COLUMN_NAME, VALUE>> I then have the price file definition table. The table details the price file format, for example, price file x has columns 'Product_Code', 'Price_GBP', 'Description'; the program then iterates (in parallel) through the products within the current price file and grabs the columns from the concurrent dictionaries in memory. As well as columns in the dictionary there are also 'Programmable' columns, for example 'RRP - price_gbp * 1.4'. These just go off to a switch case to see what it needs to do with these columns. Issue My issue is that the number of product attributes is becoming very large, and the number of price files, price file formats and customers wanting these files has grown almost tenfold in the couple of years since I wrote this system. So as you can imagine its run time is nowhere near as quick as it was. I can't think of any way of making this quicker, neater and 'future proof'. | Simple method: One header per source file. If you have a complete subsystem where users are not expected to know about the source files, have one header for the subsystem including all required header files. Any header file should be compilable on its own (or let's say a source file including any single header should compile). It's a pain if, having found which header file contains what I want, I then have to hunt down the other header files. A simple way to enforce this is to have every source file include its header file first (thanks doug65536, I think I do that most of the time without even realising). Make sure you use available tools to keep compile times down - each header must be included only once, use pre-compiled headers to keep compile times down, use pre-compiled modules if possible to keep compile times further down. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/342447",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/261956/"
]
} |
342,824 | I recently completed a black-box refactoring. I am unable to check it in, because I can't work out how to test it. At a high level, I have a class whose initialization involves grabbing values from some class B. If class B is "empty", it generates some sensible defaults. I extracted this part to a method that initializes class B to those same defaults. I have yet to work out the purpose/context of either class, or how they would be used. So I can't initialize the object from an empty class B and check that it has the right values/does the right thing. My best idea is to run the original code, hardcode in the results of public methods depending on the initialized members, and test the new code against that. I can't quite articulate why I feel vaguely uncomfortable with this idea. Is there a better attack here? | You are doing fine! Creating automated regression tests is often the best thing you can do for making a component refactorable. It may be surprising, but such tests can often be written without the full understanding of what the component does internally, as long as you understand the input and output "interfaces" (in the general meaning of that word). We did this several times in the past for full-blown legacy applications, not just classes, and it often helped us to avoid breaking things we did not fully understand. However, you should have enough test data and make sure you have a firm understanding what the software does from the viewpoint of a user of that component, otherwise you risk omitting important test cases. It is IMHO a good idea to implement your automated tests before you start refactoring, not afterwards, so you can do the refactoring in small steps and verify each step. The refactoring itself should make the code more readable, so it helps you to increase your understanding of the internals bit by bit. So the order steps in this process is get understanding of the code "from outside", write regression tests, refactor, which leads to better understanding of the internals of the code | {
"source": [
"https://softwareengineering.stackexchange.com/questions/342824",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
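A minimal sketch of the kind of regression ("characterization") test described in the answer above: record what the legacy code currently produces for known inputs, then pin that behaviour down before refactoring. The function names and the golden-file format are placeholders, not the actual project code:

import json

def record_current_behaviour(component, inputs, path="golden.json"):
    # Run the legacy component once and store its outputs as the "golden" data.
    with open(path, "w") as f:
        json.dump([component(x) for x in inputs], f)

def test_against_golden(component, inputs, path="golden.json"):
    # After each refactoring step, the outputs must still match the recording.
    with open(path) as f:
        expected = json.load(f)
    assert [component(x) for x in inputs] == expected

# e.g. record_current_behaviour(legacy_fn, sample_inputs) once, then run
# test_against_golden(refactored_fn, sample_inputs) after every small change.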
342,902 | While writing a library for a large project I'm working on at work, an issue came up which required a token to be sent to an email address and then passed back into the code, where it can then be used further. My colleague says to just read from STDIN (using Python: code = input("Enter code: ") ) and then have a user pass it in; however, to me this seems like bad practice, as the library might (in this case definitely will) be used in a background task on a server. I was wondering whether or not this is considered an anti-pattern. | As a general guideline, libraries should be totally disconnected from the environment. That means that they shouldn't perform operations on standard streams, on specific files, or have any expectation about the environment or the context they are used in. Of course, there are exceptions to this rule, but there must be a very good reason for it. In the case of using stdin, I can't find any reason (unless your library actually provides routines for reading from stdin, like std::cin from C++). Also, taking the I/O streams from a parameter rather than having them hardcoded adds so much flexibility that it's not worth not doing it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/342902",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/223194/"
]
} |
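A small sketch of the answer's closing point: let the caller supply the input source instead of the library reading stdin itself. The function name and parameter are invented for illustration:

def obtain_token(prompt_source=None):
    """Return the emailed token.

    A server can pass a callable (e.g. one that reads a queue or an HTTP
    request); only an interactive caller falls back to the console.
    """
    if prompt_source is not None:
        return prompt_source()
    return input("Enter code: ")

# Interactive use:        obtain_token()
# Background/server use:  obtain_token(prompt_source=lambda: "token fetched elsewhere")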
343,357 | For example, the SysInternals tool "FileMon" from the past has a kernel-mode driver whose source code is entirely in one 4,000-line file. The same goes for the first ping program ever written (~2,000 LOC). | Using multiple files always requires additional administrative overhead. One has to set up a build script and/or makefile with separated compiling and linking stages, make sure the dependencies between the different files are managed correctly, write a "zip" script for easier distribution of the source code by email or download, and so on. Modern IDEs typically take on a lot of that burden, but I am pretty sure that at the time the first ping program was written, no such IDE was available. And for files as small as ~4,000 LOC, without such an IDE which manages multiple files for you well, the trade-off between the mentioned overhead and the benefits of using multiple files might lead people to decide in favor of the single-file approach. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/343357",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/264281/"
]
} |
343,404 | Due to a number of circumstances leading to a poor deployment last build cycle, I campaigned in our office to perform all future deployments with a dedicated build machine, and my boss accepted this proposal. However, instead of getting an actual machine in our office to use, we're having to share a single machine with several other groups - and the hassle of having to leave my office with all the necessary information and then walk down a flight of stairs to another office just to perform a simple build is making me wonder why I ever proposed this in the first place. The idea of having a separate build machine was, originally, to separate my own locally-written code from the code of several other developers, and to separate any hijacked files I had on my machine from deployment. It was also to resolve a growing concern I've had with our ClearCase file management system, which often refuses to let me deploy certain build activities unless I've also included another activity for which it 'has dependencies'. Now that I'm actually going forward with this process, I'm wondering if I misunderstood the entire purpose of using a build machine - and since we're only using this machine for code deployment to our Test, Staging and Production environments, and not for our personal Developer testing deployments, I'm not sure it serves any purpose at all. So, what is the actual reason for using a build machine, and have I even come close to using it correctly? | Normally you wouldn't just have a dedicated build machine, but also run a build server on that dedicated machine. A dedicated build machine merely offers the advantage of never blocking the work of a developer and deploying from a centralized machine. A build server offers much more. A build server allows for CI (continuous integration), meaning that it will automatically build on every push to your VCS (like git), might even execute unit tests if you have them and allows for "one click deployment". Build servers can notify you by email if builds or tests fail. They offer historic data and trends on what happened. Build servers can generally be accessed by multiple users or teams at once, by using a web GUI that runs in a browser. In the Java world one of the most used build servers is Jenkins. Jenkins works perfectly fine with C++ builds as well (since you seem to use those two languages). Jenkins calls itself an automation server, since it can run all kinds of tasks that don't have to be related to programming and building. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/343404",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/123958/"
]
} |
343,447 | Summary: Why is it wrong to design a constructor only for its side effects, and then to use the constructor without ever assigning its return value to a variable? I'm working on a project that involves modeling card games. The public API for the game uses an odd pattern for instantiating Rank and Suit objects. Both use a static list to track instances. In a standard deck, there will only ever be four Suit and thirteen Rank objects. When constructing a card, Ranks and Suits are obtained with a static getRank() and getSuit() , which returns the identified item from the static list. Rank and Suit have other static methods such as removeRank() and clear() . What is odd is that there is no static add() method to add a new Rank or Suit to the static list. Instead, this behavior occurs only when a Rank or Suit constructor is invoked via new . Since instances of Rank and Suit are obtained with the static getRank() , the return value from the new constructors is completely irrelevant. Some sample code might look like this //set up the standard suits
Suit.clear(); //remove all Suit instances from the static list
new Suit("clubs", 0); //add a new instance, throw away the returned value
new Suit("diamonds", 1);
new Suit("hearts", 2);
new Suit("spades", 3);
//create a card
Deck theDeck = new Deck();
for (int i = 0; i < 4; i++) {
    for (int j = 0; j < 13; j++) {
        Suit theSuit = Suit.getSuit(i);
        Rank theRank = Rank.getRank(j);
        theDeck.add(new Card(theSuit, theRank));
    }
} I've never seen code that didn't assign the object being returned by a constructor. I didn't even think this was possible in Java until I tested it. However, when the issue was raised that the static addSuit() method was missing, the response was "Objects of type Suit and Rank are created via constructors. The
intent is to mimic the behavior of the java.lang.Enum class in which
objects are created via constructors and the class maintains the
collection of such objects. Current behavior is designed as intended." Using constructors in this manner smells really wrong to me, and I want to reopen the issue, but I can't seem to find much material in support (coding guidelines, anti-patterns, other documentation). Is this in fact a valid coding pattern that I just haven't come across before? If not, what is a good argument I can make for getting the API changed? Here is an article that talks about what constructors are expected to do: We can agree that a constructor always needs to initialize new instances, and we might even agree by now that a constructor always needs initialize instances completely. Edits:
Added additional information about designer's response to issue.
Added a few articles that talk about what constructors should and should not do. | Sure, technically this can work, but it violates the so-called Principle of least astonishment , because the typical convention for how constructors are used and what they are good for is quite different. Moreover, it violates the SRP , since now the constructors do two things instead of the one they are usually supposed to do - constructing the object and adding it to a static list. In short, this is an example for fancy code, but not for good code. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/343447",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/264425/"
]
} |
343,465 | A year or two ago I saw an excellent article on OOP (Java), which showed the progression of a simple concrete logger of two or three lines of code, and the theoretical, excessive thought process of an inexperienced developer that basically said: oh, I should add this in case we ever want that! By the end of the article this simple logger was a giant mess of garbage that the original developer could hardly understand himself... Is there a common term for this type of over-complication? That article (which I dearly wish I could find again) shows the concept wonderfully for an isolated case, but I've come across entire projects where the developers had essentially programmed themselves into a knot by over-use of patterns, frameworks, libraries and other issues. In its own way, this is as bad (or even worse) than the legacy VB6 spaghetti apps we inherit for replacement. What I'm really looking for is to bring this up when interviewing. I want to know if someone is aware and conscious of how easy it is to fall into this with a lack of architecture/pre-planning (and to get a feel for whether they seem to have the correct balance in place), but it's not really something I can find a lot of info on. | The most frequent term I have heard to describe such designs is overengineering. The original meaning of that word, however, is not related to software development, and outside of software development it probably does not have such a negative tone. On a more general level, Joel Spolsky gave designers who overcomplicate architectural designs the name "architecture astronauts". However, especially for an interview, I think it is more important to know what the opposite is called - putting only things into a design which are actually needed and forgetting about the unhealthy "just in case" approach - this is called the YAGNI principle. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/343465",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/204829/"
]
} |
343,647 | You've found some code that looks superfluous, and the compiler doesn't notice that. What do you do to be sure (or as close to sure as you can) that deleting this code won't cause regression. Two ideas spring to mind. "Simply" use deduction based on whether or not the code looks like it should execute. However, sometimes this can be a complex, time-consuming job, slightly risky (your assessment is prone to error) for no substantial business return. Place logging in that code section and see how often it gets entered in practice. After enough executions, you should have reasonable confidence removing the code is safe. Are there any better ideas or something like a canonical approach? | In my perfect fantasy world where I have 100% unit test coverage I would just delete it, run my unit tests, and when no test turns red, I commit it. But unfortunately I have to wake up every morning and face the harsh reality where lots of code either has no unit tests or when they are there can not be trusted to really cover every possible edge case. So I would consider the risk/reward and come to the conclusion that it is simply not worth it: Reward: Code is a bit easier to maintain in the future. Risk: Break the code in some obscure edge case I wasn't thinking about, cause an incident when I least expect it, and be at fault for it because I had to satisfy my code quality OCD and make a change with no business value perceivable to any stakeholder who isn't a coder themselves. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/343647",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/191975/"
]
} |
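For the question's second idea (instrument the suspect branch and watch whether it is ever entered), a minimal "tombstone" logging sketch; the function, field names and logger name are invented for illustration:

import logging

log = logging.getLogger("tombstones")

def possibly_dead_branch(order):
    if order.get("legacy_discount"):          # the code you suspect is unused
        # Tombstone: if this ever shows up in the logs, the branch is live
        # and must not be deleted.
        log.warning("tombstone hit: legacy_discount path, order=%s", order.get("id"))
        return order["total"] * 0.9
    return order["total"]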
343,710 | When transferring object through an API, as in schemaless JSON format, what is the ideal way to return non-existent string property? I know that there are different ways of doing this as in examples in the listed links below. Avoid null Return null Remove empty property I'm sure I have used null in the past but don't have a good reason to give for doing that. It seems straight forward to use null when dealing with the database. But database seems like an implementation detail that shouldn't concern the party on the other side of the API. E.g. they probably use a schemaless datastore that only store properties with values (non-null). From a code point of view, restricting string functions to work only with one type, i.e. string (not null), makes them easier to prove; avoiding null is also a reason for having Option object. So, if the code that produces request/response doesn't use null, my guess is the code on the other side of the API won't be forced to use null too. I rather like the idea of using an empty string as an easy way to avoid using null. One argument that I heard for using null and against the empty string is that empty string means the property exists. Although I understand the difference, I also wonder if it's the just implementation detail and if using either null or empty string makes any real life difference. I also wonder if an empty string is analogous to an empty array. So, which is the best way of doing it that addresses those concerns? Does it depend on the format of the object being transferred (schema/schemaless)? | TLDR; Remove null properties The first thing to bear in mind is that applications at their edges are not object-oriented (nor functional if programming in that paradigm). The JSON that you receive is not an object and should not be treated as such. It's just structured data which may (or may not) convert into an object. In general, no incoming JSON should be trusted as a business object until it is validated as such. Just the fact that it deserialized does not make it valid. Since JSON also has limited primitives compared to back-end languages, it is often worth it to make a JSON-aligned DTO for the incoming data. Then use the DTO to construct a business object (or error trying) for running the API operation. When you look at JSON as just a transmission format, it makes more sense to omit properties that are not set. It's less to send across the wire. If your back-end language does not use nulls by default, you could probably configure your deserializer to give an error. For example, my common setup for Newtonsoft.Json translates null/missing properties to/from F# option types only and will otherwise error. This gives a natural representation of which fields are optional (those with option type). As always, generalizations only get you so far. There are probably cases where a default or null property fits better. But the key is not to look at data structures at the edge of your system as business objects. Business objects should carry business guarantees (e.g. name at least 3 characters) when successfully created. But data structures pulled off the wire have no real guarantees. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/343710",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/73328/"
]
} |
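An illustration of the "remove null properties" recommendation above, shown in Python although the original context is .NET: drop unset fields when building the outgoing payload rather than sending explicit nulls. The DTO fields are invented:

import json

def to_transport_dict(dto: dict) -> dict:
    # Treat the payload as plain transport data: absent means "not set".
    return {key: value for key, value in dto.items() if value is not None}

payload = {"id": 7, "name": "Ada", "nickname": None}
print(json.dumps(to_transport_dict(payload)))  # {"id": 7, "name": "Ada"}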
344,107 | When I first started learning PHP (about 5 or 6 years ago) I learned about Ajax , and I went through "the phases": Your server returns HTML data and you put it inside a DOM's innerHTML You learn about data transfer formats such as XML (and say "oooh so THAT'S what it's used for) and then JSON. You return JSON and build your UI using vanilla JavaScript code You move to jQuery You learn about APIs, headers, HTTP status codes, REST , CORS and Bootstrap You learn SPA , and frontend frameworks ( React , Vue.js , and AngularJS ) and the JSON API standard. You receive some enterprise legacy code and upon inspecting it, find that they do what's described in step 1. As I worked with this legacy codebase, I didn't even consider that it could return HTML (I mean, we're professionals now, right?), so I had a hard time looking for the JSON endpoint that was returning the data that the Ajax calls populate. It was not until I asked "the programmer" that he told me it was returning HTML and being appended directly to the DOM with innerHTML. Of course, this was hard to accept. I started thinking of ways to refactor this into JSON endpoints, thinking about unit testing the endpoints and so on. However, this codebase has no tests. Not a single one. And it's over 200k lines. Of course one of my tasks includes proposing approaches for testing the whole thing, but at the moment we're not tackling that yet. So I'm nowhere, in a corner, wondering: if we have no tests whatsoever, so we have no particular reason to create this JSON endpoint (since it's not "reusable": it literally returns data that only fits on that part of the application, but I think this was already implied since it... returns HTML data). What exactly is wrong with doing this? | What's actually wrong with an endpoint returning HTML rather than JSON data? Nothing, really. Each application has different requirements, and it may be that your application wasn't designed to be a SPA. It may be that these beautiful frameworks that you cited (Angular, Vue, React, etc...) weren't available at development time, or weren't "approved" enterprisey thingies to be used in your organization. I'm gonna tell you this: an endpoint that returns HTML reduces your dependency on JavaScript libraries and reduces the load on the user's browser since it won't need to interpret/execute JS code to create DOM objects - the HTML is already there, it's just a matter of parsing the elements and rendering them. Of course, this means we're talking about a reasonable amount of data. 10 megabytes of HTML data isn't reasonable. But since there's nothing wrong with returning HTML, what you are losing by not using JSON/XML is basically the possibility of using your endpoint as an API. And here lies the biggest question: does it really need to be an API? Related: Is it OK to return HTML from a JSON API? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/344107",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/136188/"
]
} |
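A minimal sketch of the pattern defended in the answer above: the server renders an HTML fragment and the client only swaps it into the DOM. The route and template are invented for illustration:

from flask import Flask, render_template_string

app = Flask(__name__)

ROWS = [{"name": "Ada"}, {"name": "Grace"}]

@app.route("/users/fragment")
def users_fragment():
    # Returns ready-to-insert HTML; the browser just sets innerHTML,
    # no JSON parsing or client-side templating needed.
    return render_template_string(
        "<ul>{% for u in users %}<li>{{ u.name }}</li>{% endfor %}</ul>",
        users=ROWS,
    )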
344,206 | In our company, we need to do many seemingly not complicated things, like develop Mobile UI. Let's say the experienced programmers costs us 4x as much as the beginners. Both are basically able to complete the seemingly simple things in the same amount of time. The difference is, that the experienced programmers produce fewer bugs and their code is more stable etc. The beginner programmers waste a lot of time of everybody else (PM, clients, etc.). But they are significantly cheaper. The counter argument is, that it takes experienced and beginner the same amount of time to make a table in HTML. Therefore, it is luxury to hire experienced programmers to do, what beginner programmers may be able to achieve as well. Should we invest in more and better programmers or more and better PM, given that the difference between experienced and new programmer in our field can be 4x. | I have first hand experience of both theories being tried out in the real world - in the same project actually. Before I arrived, the decision had been made to hire more expensive BAs and very cheap programmers - the idea was to have good quality specifications being followed slavishly by very junior programmers. After 6+ months of the main project thrashing around I took over as development manager. Once I'd fixed a few hygiene factors, the problem of code quality remained. I had some spare budget and hired a very experienced programmer (well, more of a Solution Architect) with off-the-charts communication skills and a former life as a trainer in C# (the language the project was written in). The idea was to improve the quality of the other coders by providing mentoring and effectively free training. After a month or two it became painfully obvious that even that wasn't going to work so the original team were removed from the project and a couple more top-drawer programmers were added. They delivered the project that the original team had completely failed to deliver in 8+ months of trying in 3 one month sprints starting from scratch because the original code was irredemable. If your requirements are very basic you may be able to get away with using a very junior programmer, but the likelihood is they will cost a lot more in the long run. Sometimes "simple" requirements evolve into great complexity. If I hadn't made the hard choice to change direction, they'd probably still be working on it :) - More seriously, in this example, lack of communications and competence by the original team meant they wouldn't raise issues with the specification but would just try to do whatever they were asked to do whether it made sense architecturally or not. A more experenced and confident developer asked questions and dug in to the underlying requirements and therefore ended up producing the right solution first time. Oh, one other thing. Don't assume you can immediately hire a great programmer. There are a lot of folk out there with many years experience of mediocrity that will provide almost as bad an outcome as a junior but will cost the same as a superstar (sometimes even more). I have a very good "hit rate" but that comes with experience and I have a lot. That's the subject of a whole different conversation which is off-topic here... TL;DR
Good programmers are a bargain. The hard bit is finding them and creating an attractive enough work environment to keep them. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/344206",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/76488/"
]
} |
344,289 | I am writing a type of Queue implementation that has a TryDequeue method that uses a pattern similar to various .NET TryParse methods, where I return a boolean value if the action succeeded, and use an out parameter to return the actual dequeued value. public bool TryDequeue(out Message message) => _innerQueue.TryDequeue(out message); Now, I like to avoid out params whenever I can. C# 7 gives us out variable delcarations to make working with them easier, but I still consider out params more of a necessary evil than a useful tool. The behaviour I want from this method is as follows: If there is an item to dequeue, return it. If there are no items to dequeue (the queue is empty), provide the caller with enough information to act appropriately. Don't just return a null item if there are no items left. Don't throw an exception if trying to dequeue from an empty queue. Right now, a caller of this method would almost always use a pattern like the following (using C#7 out variable syntax): if (myMessageQueue.TryDequeue(out Message dequeued))
MyMessagingClass.SendMessage(dequeued);
else
Console.WriteLine("No messages!"); // do other stuff Which isn't the worst, all told. But I can't help but feel there might be nicer ways to do this (I'm totally willing to concede that there might not be). I hate how the caller has to break up it's flow with a conditional when all it wants to is get a value if one exists. What are some other patterns that exist to accomplish this same "try" behaviour? For context, this method may potentially be called in VB projects, so bonus points for something that works nice in both. This fact should carry very little weight, though. | Use an Option type, which is to say an object of a type that has two versions, typically called either "Some" (when a value is present) or "None" (when there isn't a value) ... Or occasionally they're called Just and Nothing. There are then functions in these types that let you access the value if it is present, test for presence, and most importantly a method that lets you apply a function that returns a further Option to the value if it's present (typically in C# this should be called FlatMap although in other languages it is often called Bind instead... The name is critical in C# because having s method of this name and type let's you use your Option objects in LINQ statements). Additional features may include methods like IfPresent and IfNotPresent to invoke actions in the relevant conditions, and OrElse (which substitutes a default value when no value is present but is a no op otherwise), and so on. Your example might then look something like: myMessageQueue.TryDeque()
    .IfPresent(dequeued => MyMessagingClass.SendMessage(dequeued))
    .IfNotPresent(() => Console.WriteLine("No messages!")); This is the Option (or Maybe) monad pattern, and it is extremely useful. There are existing implementations (e.g. https://github.com/nlkl/Optional/blob/master/README.md ) but it isn't hard to write your own either. (You may wish to extend this pattern so that you return a description of the error cause instead of nothing when the method fails ... This is entirely achievable and is often called the Either monad; as the name implies you can use the same FlatMap pattern to make it easy to work with in that case, too) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/344289",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/265740/"
]
} |
344,411 | I am an amateur programmer in a CS class trying to learn proper programming skills.
This is how my code looks; its edges extend to 103 columns. int extractMessage(char keyWord[25], char cipherText[17424],
int rowSize, char message[388])
{
int keyColumn = 0;
int cipherColumn = 0;
int offset = 1;
int nextWord = 1;
int lengthOfWord = 0;
int lengthOfCipher = 0;
lengthOfWord = length(keyWord);
lengthOfCipher = length(cipherText);
while (keyWord[keyColumn] != cipherText[cipherColumn]) {
cipherColumn++;
if (keyWord[keyColumn + offset] != cipherText[cipherColumn + (rowSize*nextWord) + nextWord]) {
cipherColumn++;
continue;
}
} Before I had those super long variable names I had thing like i, j, k, but my professor insists that we are not to use variables like that in the "professional world" and that even shortened variables like lenWord are insufficient because people might assume it stands for "Lennard's World Literature". He says to choose meaningful variable names but by doing so I feel like I've broken the Golden Rule of coding to keep it under 80 columns. How do I get around this? | Normally when I see code posted here like yours, I edit it, because we hate horizontal scroll. But since that's part of your question, I'll show you the edit here: int extractMessage(char keyWord[25], char cipherText[17424],
int rowSize, char message[388])
{
int keyColumn = 0;
int cipherColumn = 0;
int offset = 1;
int nextWord = 1;
int lengthOfWord = 0;
int lengthOfCipher = 0;
lengthOfWord = length(keyWord);
lengthOfCipher = length(cipherText);
while (keyWord[keyColumn] != cipherText[cipherColumn]) {
cipherColumn++;
if (keyWord[keyColumn + offset]
!= cipherText[cipherColumn + (rowSize*nextWord) + nextWord]) {
cipherColumn++;
continue;
}
}
} That break may be surprising, but it's more readable than the version with horizontal scroll, and it's better than shortening the names to i , j , and k . It's not that you should never use i , j , and k . Those are fine names when indexing 3 nested for loops. But here the names are really my only clue about what you expected to be happening. Especially since this code doesn't actually do anything. Best rule to follow on variable name length is scope. The longer a variable lives, the more fellow variables its name has to compete with. The name CandiedOrange is unique on stack exchange. If we were in a chat, you might just call me "Candy". But right now, you're in a scope where that name could be confused with Candide , Candy Chiu , or Candyfloss . So the longer the scope, the longer the name should be. The shorter the scope, the shorter the name can be. Line length should never dictate name length. If you feel like it is then find a different way to lay out your code. We have many tools to help you do that. One of the first things I look for is needless noise to get rid of. Unfortunately this example doesn't do anything, so it's all needless noise. I need something to work with so first let's make it do something. int calcCipherColumn(char keyWord[25], char cipherText[17424],
int rowSize, char message[388])
{
int keyColumn = 0;
int cipherColumn = 0;
int offset = 1;
int nextWord = 1;
int lengthOfWord = 0;
int lengthOfCipher = 0;
lengthOfWord = length(keyWord);
lengthOfCipher = length(cipherText);
while (keyWord[keyColumn] != cipherText[cipherColumn]) {
cipherColumn++;
if (keyWord[keyColumn + offset]
!= cipherText[cipherColumn + (rowSize*nextWord) + nextWord]) {
cipherColumn++;
continue;
}
}
return cipherColumn;
} There, now it does something. Now that it does something, I can see what I can get rid of. This length stuff isn't even used. This continue doesn't do anything either. int calcCipherColumn(char keyWord[25], char cipherText[17424],
int rowSize, char message[388])
{
int keyColumn = 0;
int cipherColumn = 0;
int offset = 1;
int nextWord = 1;
while (keyWord[keyColumn] != cipherText[cipherColumn]) {
cipherColumn++;
if (keyWord[keyColumn + offset]
!= cipherText[cipherColumn + (rowSize*nextWord) + nextWord]) {
cipherColumn++;
}
}
return cipherColumn;
} Let's make some minor white space tweaks, because we live in a world of source control and it's nice when the only reason a line gets reported as changed is because it's doing something different, not because part of it had to line up in a column. int calcCipherColumn(char keyWord[25], char cipherText[17424],
int rowSize, char message[388])
{
int keyColumn = 0;
int cipherColumn = 0;
int offset = 1;
int nextWord = 1;
while (keyWord[keyColumn] != cipherText[cipherColumn]) {
cipherColumn++;
if (keyWord[keyColumn + offset]
!= cipherText[cipherColumn + (rowSize*nextWord) + nextWord]) {
cipherColumn++;
}
}
return cipherColumn;
} Yeah, I know it's slightly less readable but otherwise you'll drive people crazy who use vdiff tools to detect changes. Now let's fix these silly line breaks that we have because we're trying to stay under line length limits. int calcCipherColumn(
char keyWord[25],
char cipherText[17424],
int rowSize,
char message[388]
) {
int keyColumn = 0;
int keyOffset = 1;
int nextWord = 1;
int cipherColumn = 0;
int cipherOffset = (rowSize * nextWord) + nextWord;
char key = keyWord[keyColumn];
char keyNext = keyWord[keyColumn + keyOffset];
while (key != cipherText[cipherColumn]) {
cipherColumn++;
if (keyNext != cipherText[cipherColumn + cipherOffset]) {
cipherColumn++;
}
}
return cipherColumn;
} There, now the logic in the loop is focused on what changes in the loop. In fact, everything except cipherColumn could be marked final . And hey! Look at that. We now have room to do it. All I did was add 3 more variables, rename one, and rearrange them a little. And the result just happened to make the lines short enough to fit without a silly linebreak on != . Sure the names key and keyNext are not that descriptive, but they each only get used once, don't live that long, and most importantly aren't doing anything that interesting in the loop. So they don't need to be. By introducing extra variables we now have room to make their names long if we need to. Things change, so eventually we may need to. If we do, it's nice that we have breathing room. I also took the liberty of showing you Jeff Grigg's form 6 variant style of laying out input parameters to respect line length restrictions. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/344411",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/265887/"
]
} |
344,474 | I am a user of SVN and now I am learning Git. In SVN I usually checkout on my local machine a repo, which includes all branches in my project and I used to select the folder for my branch I am interested to and work there. I see a difference using Git. Currently I am cloning a repo and clone a specific branch using gitk. The project folder contains only the content for that branch and I cannot see all branches as in SVN, which is a little confusing for me. I cannot find an easy way to see all branches in my local repository using Git. I would like to know if the Git process I described is "standard" and some how correct or I am missing something. Also I would like to know how to handle a process where I need to work on two branches at the same time in case, for example, I need to make an hotfix on master but keep the content of another branch too. What is a recommend name conventions to make the folders which include the branch cloned from the repo in Git, example myproject-branchname ? | I am a user of SVN and now I am learning GIT. Welcome to the gang! SVN Re-education In SVN I usually [...] Hold on for a moment. While CVS and SVN and other traditional (i.e. centralized) version control system fulfill (mostly) the same purpose as modern (i.e. distributed) version control systems like mercurial and Git, you'll be much better off learning Git from the ground up instead of trying to transfer your SVN workflow to Git. http://hginit.com ( view on archive.org ) by Joel Spolsky (one of the very founders of Stack Exchange) is a tutorial for mercurial, not Git, but it's zero-th chapter, "Subversion Re-education" is useful for people switching away from SVN to any distributed version control system, as it tells you what SVN concepts you have to (temporarily) un -learn (or to stash away , as Git users might say) to be able wrap your head around the distributed version control system concepts and the idiomatic workflows established to work with them. So you can read that zero-th chapter and mostly just replace the word "mercurial" with "Git" and thereby properly prepare yourself and your mind for Git. The fine print (You might skip this for now.) While mercurial and Git are much more similar to each other than to SVN, there are some conceptual differences between them, so some of the statements and advice in "Subversion Re-education" will become technically wrong when replacing "mercurial" with "Git": While mercurial internally tracks and stores changesets, Git internally tracks and stores revisions (i.e. states of the content of a directory tree), just like Subversion does. But other than Subversion Git performs merges by looking at the differences between each involved branch respectively and a common ancestor revision (a true 3-point-merge), so the result is much the same as for mercurial: Merging is much easier and less error-prone than in SVN. While you can branch in Git by cloning the repository (as is customary in mercurial), it's much more common to create a new branch within a Git repository. (That's because Git branches are simply (moving) pointers to revisions, whereas mercurial branches are permanent labels applied to every revision. These permanent labels are usually unwanted, so mercurial workflows usually work by cloning the complete repository for diverging development.) In SVN, everything is a directory (but you shouldn't necessarily treat it as such) But I've been interrupting you. You were saying? 
In SVN I usually checkout on my local machine a repo, which includes all branches in my project and I used to select the folder for my branch I am interested to and work there. If, by that, you mean you've checked out the SVN repository's root folder (or any other folder corresponding to more than to trunk or to a single branch, e.g. in the conventional SVN repo layout the branches folder containing all non-trunk branches) then I dare say you've probably used SVN wrong(ish), or at least abused a bit the fact that trunk, branches and tags are all folded into a single (though hierarchical) namespace together with directories within the revisions/codebase. While it might be tempting to change several SVN branches in parallel, that is AFAIK not how SVN is intended to be used. (Though I'm unsure about what specific downsides working like that might have.) In Git, only directories are directories Every clone of a Git repository is itself a Git repository: By default, it gets a full copy of the origin repository's history, including all revisions, branches and tags. But it will keep all that our of your way: Git stores it in a file-based database of sorts, located in the repository's root folder's hidden .git/ subfolder. But what are the non-hidden files you see in the repository folder? When you git clone a repository, Git does several things. Roughly: It creates the target folder , and in it, the .git/ subfolder with the "database" It transfers the references (tags, branches etc.) of the origin repository's database and makes a copy of them in the new database, giving them a slighly modified name that marks them as "remote" references. It transfers all the revisions (i.e. file tree states) that these references point to, as well all revisions that these revisions point to directly or transitively (their parents and ancestors) to the new database and stores them, so that the new remote references actually point to something that's available in the new repository. It creates a local branch tracking the remote revision corresponding to the origin repository's default branch (usually master ). It checks out that local branch. That last step means that Git looks up the revision that this branch points at, and unpacks the file-tree stored there (the database uses some means of compression and de-duplication) into the repository's root folder. That root folder and all its subfolders (excluding the special .git subfolder) are known as your repository's "working copy". That's where you interact with the content of the currently checked-out revision/branch. They're just normal folders and files. However, to interact with the repository per-se (the "database" of revisions and references) you use git commands. Seeing Git branches and interacting with Git branches Currently I am cloning a repo and clone a specific branch using gitk. The version of gitk I got cannot clone repositories. It can only view repo history and create branches and check out branches. Also, there's no "cloning" a branch. You can only clone repositories. Did you mean you clone a repo using git clone ... and then, using gitk , check out a branch? The project folder contains only the content for that branch and I cannot see all branches as in SVN, which is a little confusing for me. [...] I would like to know if the git process I described is "standard" and some how correct [...] Yes, that is pretty standard: Clone a repository using git clone ... Check out the branch you want to work on with git checkout ... 
or using a graphical tool like gitk I cannot find an easy way to see all branches in my local repository using GIT. [...] or I am missing something. Maybe: you can list local branches with git branch you can list remote branches with git branch -r or all branches with git branch -a you can use gitk to view the complete history (all branches tags etc. that your local repo knows about) by invoking it with gitk --all How to work with multiple branches in parallel Also I would like to know how to handle a process where I need to work on two branches at the same time in case, for example, I need to make an hotfix on master but keep the content of another branch too. There's different scenarios here: A new (yet to be created) change needs to be applied to multiple branches Use this workflow: Create a new branch c from a revision that already is in the ancestry of all these branches (e.g. the revision that introduced the bug when the change will be a bugfix) or from a revision that (including all its ancestors) is acceptable to be introduced in all these branches. Make and commit the change on that new branch c . For each branch b that needs the change: Check out b : git checkout b Merge c into b : git merge c Remove branch c : git branch --delete c An existing change is needed on a different branch (... but without the other changes made on the where that change resides) Check out the branch where the change is needed Cherry-pick the revision(s) making the change, in order On one branch a , you want to change one or some files to the exact state they have on a different branch b Check out a Get the file contents from branch b : git checkout b -- path/to/a/file.txt path/to/another/file.cpp or/even/a/complete/directory/ ... (Other than git checkout without passing paths, this won't switch to branch b , only get the requested file contents from there. These files might or might not already exist on a . If they do, they're overwritten with their content on b .) While working on one branch, you want to look at how things are on another branch Check out the branch you want to work on. Then, for looking at the other branch, either use graphical tools that allow you to view the contents of not currently checked out revisions (e.g. in gitk try to switch the radio buttons from "patch" to "tree") or clone the repository to a temporary directory and check out the other branch there or use git worktree to create a separate working directory of the same repository (i.e. also using the database in .git/ directory of your current local repository) where you can check out that other branch | {
"source": [
"https://softwareengineering.stackexchange.com/questions/344474",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/265959/"
]
} |
344,522 | This is going to be a very non-technical, soft question and I am not sure if this is the right platform. But I am a beginning CS student so I hope you guys tolerate it. In the first semester we were introduced to OOP concepts like encapsulation, data hiding, modularity, inheritance and so on through Java and UML. (Java is my first programming language) The way I understand it, OOP is a way of managing software complexity. But its principles are not new or unique, they are in a sense universal to all engineering fields. For example a car is a very complex structure whose complexity is managed by a hierarchy of modular and encapsulated components with well-defined behaviors and interfaces. But I do not understand the reason behind introducing a new programming paradigm. I think all the principles used for managing complexity can be realized by procedural programming languages. For example, for modularity we can just divide the program into many small programs that perform well-defined tasks whose code is contained in separate files. These programs would interact with each other through their well-defined input and output. The files may be protected (encrypted?) to achieve encapsulation. For code re-use we can just call those files whenever they are needed in new programs. Doesn't this capture all what OOP is or am I missing something very obvious? I am not asking for a proof that OOP manages complexity. In my opinion it certainly does. But I think all the principles used to manage complexity like modularity, encapsulation, data hiding and so on can be very easily implemented by procedural languages. So why really OOP if we can manage complexity without it? | In the first semester we were introduced to OOP concepts like encapsulation, data hiding, modularity, inheritance and so on through Java and UML. (Java is my first programming language) None of those are OOP concepts. They all exist outside of OO, independent of OO and many even were invented before OO. So, if you think that that is what OO is all about, then your conclusion is right: you can do all of those in procedural languages, because they have nothing to do with OO . For example, one of the seminal papers on Modularity is On the Criteria To Be Used in Decomposing Systems into Modules . There is no mention of OO in there. (It was written in 1972, by then OO was still an obscure niche, despite already being more than a decade old.) While Data Abstraction is important in OO, it is more a consequence of the primary feature of OO (Messaging) than it is a defining feature. Also, it is very important to remember that there are different kinds of data abstraction. The two most common kinds of data abstraction in use today (if we ignore "no abstraction whatsoever", which is probably still used more than the other two combined), are Abstract Data Types and Objects . So, just by saying "Information Hiding", "Encapsulation", and "Data Abstraction", you have said nothing about OO, since OO is only one form of Data Abstraction, and the two are in fact fundamentally different: With Abstract Data Types, the mechanism for abstraction is the type system ; it is the type system that hides the implementation. (The type system need not necessarily be static.) With Objects, the implementation is hidden behind a procedural interface , which doesn't require types. (For example, it can be implemented with closures, as is done in ECMAScript.) 
With Abstract Data Types, instances of different ADTs are encapsulated from each other, but instances of the same ADT can inspect and access each other's representation and private implementation. Objects are always encapsulated from everything . Only the object itself can inspect its own representation and access its own private implementation. No other object , not even other objects of the same type, other instances of the same class, other objects having the same prototype, clones of the object, or whatever can do that. None . What this means, by the way, is that in Java, classes are not object-oriented. Two instances of the same class can access each other's representation and private implementation. Therefore, instances of classes are not objects, they are in fact ADT instances. Java interface s, however, do provide object-oriented data abstraction. So, in other words: only instances of interfaces are objects in Java, instances of classes are not. Basically, for types, you can only use interfaces. This means parameter types of methods and constructors, return types of methods, types of instance fields, static fields, and local fields, the argument to an instanceof operator or a cast operator, and type arguments for a generic type constructor must always be interfaces. A class may be used only directly after the new operator, nowhere else. For example, for modularity we can just divide the program into many small programs that perform well-defined tasks whose code is contained in separate files. These programs would interact with each other through their well-defined input and output. The files may be protected (encrypted?) to achieve encapsulation. For code re-use we can just call those files whenever they are needed in new programs. Doesn't this capture all what OOP is or am I missing something very obvious? What you describe is OO. That is indeed a good way to think about OO. In fact, that's pretty much exactly what the original inventors of OO had in mind. (Alan Kay went one step further: he envisioned lots of little computers sending messages to each other over the network.) What you call "program" is usually called an "object" and instead of "call" we usually say "send a message". Object Orientation is all about Messaging (aka dynamic dispatch ). The term "Object Oriented" was coined by Dr. Alan Kay, the principal designer of Smalltalk, and he defines it like this : OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things. Let's break that down: messaging ("virtual method dispatch", if you are not familiar with Smalltalk) state-process should be locally retained protected hidden extreme late-binding of all things Implementation-wise, messaging is a late-bound procedure call, and if procedure calls are late-bound, then you cannot know at design time what you are going to call, so you cannot make any assumptions about the concrete representation of state. So, really it is about messaging, late-binding is an implementation of messaging and encapsulation is a consequence of it. 
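To make the messaging / late-binding point a bit more tangible, here is a minimal C++ sketch (the language and all the names are picked purely for illustration; the point itself is language-agnostic): the caller only knows which messages the receiver understands, and a proxy that forwards those messages is indistinguishable from the "real" object.
struct Account {                                   // the messages an object responds to
    virtual ~Account() = default;
    virtual void deposit(int amount) = 0;
};
struct LocalAccount : Account {
    void deposit(int amount) override { balance += amount; }
private:
    int balance = 0;                               // state reachable only through the messages
};
struct LoggingAccountProxy : Account {             // understands the same messages...
    explicit LoggingAccountProxy(Account& target) : target(target) {}
    void deposit(int amount) override {
        ++forwarded;                               // ...does a little extra bookkeeping...
        target.deposit(amount);                    // ...and forwards the message
    }
    int forwarded = 0;
private:
    Account& target;
};
void payday(Account& someAccount) { someAccount.deposit(100); }   // late-bound: the receiver decides what runs
payday() cannot tell, and does not need to care, whether it was handed a LocalAccount or a proxy wrapping one; that is the "encapsulation as a consequence of messaging" point in a nutshell.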
He later on clarified that " The big idea is 'messaging' ", and regrets having called it "object-oriented" instead of "message-oriented", because the term "object-oriented" puts the focus on the unimportant thing (objects) and distracts from what is really important (messaging): Just a gentle reminder that I took some pains at the last OOPSLA to try to remind everyone that Smalltalk is not only NOT its syntax or the class library, it is not even about classes. I'm sorry that I long ago coined the term "objects" for this topic because it gets many people to focus on the lesser idea. The big idea is "messaging" -- that is what the kernal of Smalltalk/Squeak is all about (and it's something that was never quite completed in our Xerox PARC phase). The Japanese have a small word -- ma -- for "that which is in between" -- perhaps the nearest English equivalent is "interstitial". The key in making great and growable systems is much more to design how its modules communicate rather than what their internal properties and behaviors should be. Think of the internet -- to live, it (a) has to allow many different kinds of ideas and realizations that are beyond any single standard and (b) to allow varying degrees of safe interoperability between these ideas. (Of course, today, most people don't even focus on objects but on classes, which is even more wrong.) Messaging is fundamental to OO, both as metaphor and as a mechanism. If you send someone a message, you don't know what they do with it. The only thing you can observe, is their response. You don't know whether they processed the message themselves (i.e. if the object has a method), if they forwarded the message to someone else (delegation / proxying), if they even understood it. That's what encapsulation is all about, that's what OO is all about. You cannot even distinguish a proxy from the real thing, as long as it responds how you expect it to. A more "modern" term for "messaging" is "dynamic method dispatch" or "virtual method call", but that loses the metaphor and focuses on the mechanism. So, there are two ways to look at Alan Kay's definition: if you look at it standing on its own, you might observe that messaging is basically a late-bound procedure call and late-binding implies encapsulation, so we can conclude that #1 and #2 are actually redundant, and OO is all about late-binding. However, he later clarified that the important thing is messaging, and so we can look at it from a different angle: messaging is late-bound. Now, if messaging were the only thing possible, then #3 would trivially be true: if there is only one thing, and that thing is late-bound, then all things are late-bound. And once again, encapsulation follows from messaging. Similar points are also made in On Understanding Data Abstraction, Revisited by William R. Cook and also his Proposal for Simplified, Modern Definitions of "Object" and "Object Oriented" : Dynamic dispatch of operations is the essential characteristic of objects. It means that the operation to be invoked is a dynamic property of the object itself. Operations cannot be identified statically, and there is no way in general to [know] exactly what operation will executed in response to a given request, except by running it. This is exactly the same as with first-class functions, which are always dynamically dispatched. In Smalltalk-72, there weren't even any objects! There were only message streams that got parsed, rewritten and rerouted. 
First came methods (standard ways to parse and reroute the message streams), later came objects (groupings of methods that share some private state). Inheritance came much later, and classes were only introduced as a way to support inheritance. Had Kay's research group already known about prototypes, they probably would have never introduced classes in the first place. Benjamin Pierce in Types and Programming Languages argues that the defining feature of Object-Orientation is Open Recursion . So: according to Alan Kay, OO is all about messaging. According to William Cook, OO is all about dynamic method dispatch (which is really the same thing). According to Benjamin Pierce, OO is all about Open Recursion, which basically means that self-references are dynamically resolved (or at least that's a way to think about), or, in other words, messaging. As you can see, the person who coined the term "OO" has a rather metaphysical view on objects, Cook has a rather pragmatic view, and Pierce a very rigorous mathematical view. But the important thing is: the philosopher, the pragmatist and the theoretician all agree! Messaging is the one pillar of OO. Period. Note that there is no mention of inheritance here! Inheritance is not essential for OO. In general, most OO languages have some way of implementation re-use but that doesn't necessarily have to be inheritance. It could also be some form of delegation, for example. In fact, The Treaty of Orlando discusses delegation as an alternative to inheritance and how different forms of delegation and inheritance lead to different design points within the design space of object-oiented languages. (Note that actually even in languages that support inheritance, like Java, people are actually taught to avoid it, again indicating that it is not necessary for OO.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/344522",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/266024/"
]
} |
344,570 | I often find myself writing functions that look like this because they
allow me to easily mock data access, and still provide a signature that accepts parameters to determine what data to access. public static string GetFormattedRate(
Func<string, RateType> getRate,
string rateKey)
{
var rate = getRate(rateKey);
var formattedRate = rate.DollarsPerMonth.ToString("C0");
return formattedRate;
} Or public static string GetFormattedRate(
Func<RateType, string> formatRate,
Func<string, RateType> getRate,
string rateKey)
{
var rate = getRate(rateKey);
var formattedRate = formatRate(rate);
return formattedRate;
} Then I use it something like this: using FormatterModule;
public static void Main()
{
var getRate = GetRateFunc(connectionStr);
var formattedRate = GetFormattedRate(getRate, rateType);
// or alternatively
var formattedRate = GetFormattedRate(getRate, FormatterModule.FormatRate, rateKey);
Console.WriteLine(formattedRate);
} Is this a common practice? I feel like I should be doing something more like public static string GetFormattedRate(
Func<RateType> getRate)
{
var rate = getRate();
return rate.DollarsPerMonth.ToString("C0");
} But that doesn't seem to work very well because I'd have to make a new function to pass into the method for every rate type. Sometimes I feel like I should be doing public static string GetFormattedRate(RateType rate)
{
return rate.DollarsPerMonth.ToString("C0");
} But that seems to take away any fetch and format re-usability. Whenever I want to fetch and format I have to write two lines, one to fetch and one to format. What am I missing about functional programming? Is this the right way to do it, or is there a better pattern that's both easy to maintain and use? | There is absolutely no reason to pass a function, and its parameters, only to then call it with those parameters. In fact, in your case you have no reason to pass a function at all . The caller might as well just call the function itself and pass the result. Think about it - instead of using: var formattedRate = GetFormattedRate(getRate, rateType); why not simply use: var formattedRate = GetFormattedRate(getRate(rateType)); ? As well as reducing unnecessary code it also reduces coupling - if you want to change how the rate is fetched (say, if getRate now needs two arguments) you don't have to change GetFormattedRate . Likewise, there's no reason to write GetFormattedRate(formatRate, getRate, rateKey) instead of writing formatRate(getRate(rateKey)) . Don't overcomplicate things. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/344570",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/115850/"
]
} |
344,599 | I keep hearing the term and all google searches lead me to articles on compilers. I just wanna understand what the term compile target means :| UPDATE: To give some context: I've heard it said that web assembly is a compile target for for other languages such as C, C++, Rust etc. | Compilers are, in essence, translators that take input in one language and produce output in another. For example, Eiffel Software's compiler takes Eiffel-language input and produces C. GCC for Intel reads C-language input and produces x86 assembly. The GAS assembler for Intel takes x86 assembly and produces x86 object code. All three of these things are technically compilers. Regardless of format, the input read by a compiler is called the source and the output is called the target . The latter term is taken from one of its definitions, "intended result." The majority of compilers are designed to produce assembly or object code for a particular processor or architecture. Because of that, target is often used to refer to the architecture itself rather than the output format. The target of a compiler does not need to be the same as the architecture where it runs, and in instances where that happens, the program is called a cross-compiler . (For example, GCC can be built to run on x86 systems to compile C into ARM assembly.) Additionally, there are single compilers capable of producing output for different targets depending on input such as switches on the command line. These are called multi-target compilers. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/344599",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/199195/"
]
} |
344,603 | Right now I am working with embedded systems and figuring out ways to implement strings on a microprocessor with no operating system. So far what I am doing is just using the idea of having NULL terminated character pointers and treating them as strings where the NULL signifies the end. I know that this is fairly common, but can you always count on this to be the case? The reason I ask is that I was thinking about maybe using a real time operating system at some point, and I'd like to re-use as much as my current code as possible. So for the various choices that are out there, can I pretty much expect the strings to work the same? Let me be more specific though for my case. I am implementing a system that takes and processes commands over a serial port. Can I keep my command processing code the same, and then expect that the string objects created on the RTOS (which contains the commands) to all be NULL terminated? Or, would it be different based on the OS? Update After being advised to take a look at this question I have determined that it does not exactly answer what I am asking. The question itself is asking if a string's length should always be passed which is entirely different than what I am asking, and although some of the answers had useful information in them they are not exactly what I am looking for. The answers there seemed to give reasons why or why not to terminate a string with a null character. The difference with what I am asking is if I can more or less expect the in-born strings of different platforms to terminate their own strings with null, without having to go out and try every single platform out there if that makes sense. | The things that are called "C strings" will be null-terminated on any platform. That's how the standard C library functions determine the end of a string. Within the C language, there's nothing stopping you from having an array of characters that doesn't end in a null. However you will have to use some other method to avoid running off the end of a string. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/344603",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/219569/"
]
} |
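To make the answer above about C strings a bit more concrete, here is a small C-style sketch (it compiles as C or C++; the buffer size, names, and the serial-command scenario are just illustrative). The '\0' terminator is what the standard library uses to find the end of the string on any platform or RTOS, and an explicit bound is what keeps you from running off the end of the buffer:
#include <string.h>

static char cmd[32];                               /* latest command received over the serial port */

void store_command(const char *received, size_t received_len)
{
    /* copy at most sizeof cmd - 1 bytes and terminate explicitly, so even an
       over-long or unterminated input still yields a valid C string */
    size_t n = received_len < sizeof cmd - 1 ? received_len : sizeof cmd - 1;
    memcpy(cmd, received, n);
    cmd[n] = '\0';
}

size_t command_length(void)
{
    return strlen(cmd);                            /* walks the bytes until it hits '\0' */
}
Whether this later runs bare-metal or under an RTOS changes none of it: the convention lives in the C language and library, not in the operating system.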
344,656 | It sucks being on the critical path as a normal developer, especially if you're late. When you're the senior developer that the team is looking to for leadership, it's even worse. When work for most of the team is stalled waiting on some critical piece, what should the rest of the team do? We have limited access to the critical piece, so others will simply be waiting no matter what we do. When the others are looking for advice on what to do, what's a good answer? | Improve unit tests, functional tests, documentation, tools, etc. There's a plethora of things that can be done in down-time while waiting for the critical path to catch up. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/344656",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/131624/"
]
} |
344,827 | I am a C++ programmer with limited experience. Supposing I want to use an STL map to store and manipulate some data, I would like to know if there is any meaningful difference (also in performance) between those 2 data structure approaches: Choice 1:
map<int, pair<string, bool> >
Choice 2:
struct Ente {
string name;
bool flag;
};
map<int, Ente> Specifically, is there any overhead using a struct instead of a simple pair ? | Choice 1 is ok for small "used only once" things. Essentially std::pair is still a struct.
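A quick sketch of the practical difference (the lookup helpers and their names are invented just for illustration); the only thing that really changes is how readable the access sites are:
#include <map>
#include <string>
#include <utility>

struct Ente {
    std::string name;
    bool flag;
};

// Choice 1: the caller has to remember what .first / .second stand for
bool flag_of(const std::map<int, std::pair<std::string, bool>>& m, int key) {
    return m.at(key).second;       // "second" of what, exactly?
}

// Choice 2: the same lookup documents itself
bool flag_of(const std::map<int, Ente>& m, int key) {
    return m.at(key).flag;         // obviously the flag
}
Both versions compile to essentially the same thing; the struct only buys you names, which is exactly the point.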
As stated by this comment, choice 1 will lead to really ugly code somewhere down the rabbit hole like thing.second->first.second->second and no one really wants to decipher that. Choice 2 is better for everything else, because it is easier to read what the things in the map actually mean. It is also more flexible if you want to change the data (for example when Ente suddenly needs another flag). Performance should not be an issue here. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/344827",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/263163/"
]
} |
344,963 | Another question was asked regarding the use of IP addresses to identify individual clients. I think I understand why an IP address is insufficient. But what about the socket, which has more information and, from what I understand, is stateful? Couldn't that be potentially used in lieu of a cookie? | A socket identifies a connection . Cookies are usually used to identify a user . If I open two browser tabs to SE.SE, I will have two connections and thus two sockets. But I want my settings to persist across both of them. (In fact, typically, a browser opens multiple sockets for one page in order to speed up page load time; I believe most browsers have a default maximum value between 4 and 10 sockets per page.) And the opposite can happen as well: if I close my browser tab, another user on the machine may open a browser tab to SE.SE, and may get the same quadruple of (source_ip, source_port, target_ip, target_port), in which case, he will get all my settings. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/344963",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
345,006 | What is the reason to use release version as a release title on GitHub? It looks like this And it looks like a common practise. Almost all popular repositories use it. What is the use case to duplicate release tag to the release title? UPDATE : Examples: Facebook React , Atom , Kubernetes , CryEngine | How to manage release with Git and GitHub ? The Git standard way of identifying a release is to create a version tag . This tag marks a specific version of your software in the change history of your repository. Most teams work with tags, because these are directly available in the repository and can be used in git commands. The releases are a GitHub feature for packaging software for delivery. This allows to add some downloadable binaries associated to the release. So, in practical terms, the release is some added web content related to a tagged version; it's not something known in you local repository. To create a release on GitHub, you have to enter a mandatory new tag identifier (that will be created to identify the release) and an optional release name. Naming conventions As the tags are the primary identification of a release, it's managed with care. Usually it follows the semantic versioning convention (or some variant). For the name, there is no universal convention. But if it is left empty, GitHub will simply take over the version tag that you've just created. This is why so many projects reuse the tag id for the release name: it's not a deliberate choice; they don't even have to do a copy-paste; it's just that they had no desire/time/interest in using a more creative description, and let GitHub define it by default ! You can of course use a different convention. You could perfectly use a code name (e.g. " Longhorn SP2 " instead of "v6.0.6002" like Microsoft is doing for Windows, or " Ice cream sandwich " instead of "v4.0.4" like Google is doing for Android). But maintaining such a naming standard in the long run requires a lot of creative people if you want to keep the names unique. More realistic is a mixed approach: use the default version tag for minor releases, but identify a codename for important releases (especially if these are significant for marketing) You could also think of identifying main new features. However this is of very limited use. First, if you're adept of separating corrective releases and functional releases (as proposed by ITSM version release management guidelines), you would have some troubles finding a meaningful name for half of your releases. Then, this scheme works only with small software: if you have an enterprise grade software, the main functions would be far too difficult to summarize in the couple of words that remain visible on the GitHub release page. This kind of information is best put in a release note. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/345006",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/202455/"
]
} |
345,018 | It seems pretty clear that "Single Responsibility Principle" does not mean "only does one thing." That's what methods are for. public interface CustomerCRUD
{
public void Create(Customer customer);
public Customer Read(int CustomerID);
public void Update(Customer customer);
public void Delete(int CustomerID);
} Bob Martin says that "classes should have only one reason to change." But that's difficult to wrap your mind around if you're a programmer new to SOLID. I wrote an answer to another question , where I suggested that responsibilities are like job titles, and danced around the subject by using a restaurant metaphor to illustrate my point. But that still doesn't articulate a set of principles that someone could use to define their classes' responsibilities. So how do you do it? How do you determine which responsibilities each class should have, and how do you define a responsibility in the context of SRP? | One way to wrap your head around this is to imagine potential requirements changes in future projects and ask yourself what you will need to do to make them happen. For example: New business requirement: Users located in California get a special discount. Example of "good" change: I need to modify code in a class that computes discounts. Example of bad changes: I need to modify code in the User class, and that change will have a cascading effect on other classes that use the User class, including classes that have nothing to do with discounts, e.g. enrollment, enumeration, and management. Or: New nonfunctional requirement: We'll start using Oracle instead of SQL Server Example of good change: Just need to modify a single class in the data access layer that determines how to persist the data in the DTOs. Bad change: I need to modify all of my business layer classes because they contain SQL Server-specific logic. The idea is to minimize the footprint of future potential changes, restricting code modifications to one area of code per area of change. At the very minimum, your classes should separate logical concerns from physical concerns. A great set of examples can be found in the System.IO namespace: there we can find a various kinds of physical streams (e.g. FileStream , MemoryStream , or NetworkStream ) and various readers and writers ( BinaryWriter , TextWriter ) that work on a logical level. By separating them this way, we avoid combinatoric explosion: instead of needing FileStreamTextWriter , FileStreamBinaryWriter , NetworkStreamTextWriter , NetworkStreamBinaryWriter , MemoryStreamTextWriter , and MemoryStreamBinaryWriter , you just hook up the writer and the stream and you can have what you want. Then later on we can add, say, an XmlWriter , without needing to re-implement it for memory, file, and network separately. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/345018",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1204/"
]
} |
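To make the System.IO point from the SRP answer above more concrete, here is a rough C++ analogue (all type names are invented and the bodies are only sketched): the physical concern (where bytes go) and the logical concern (how values are encoded) live in separate small hierarchies, so any writer composes with any stream.
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

struct Stream {                                    // physical concern: where bytes go
    virtual ~Stream() = default;
    virtual void write(const std::uint8_t* data, std::size_t len) = 0;
};

struct MemoryStream : Stream {
    std::vector<std::uint8_t> buffer;
    void write(const std::uint8_t* data, std::size_t len) override {
        buffer.insert(buffer.end(), data, data + len);   // append to an in-memory buffer
    }
};

struct TextWriter {                                // logical concern: how text is encoded
    explicit TextWriter(Stream& out) : out(out) {}
    void writeLine(const std::string& s) {
        out.write(reinterpret_cast<const std::uint8_t*>(s.data()), s.size());
        out.write(reinterpret_cast<const std::uint8_t*>("\n"), 1);
    }
private:
    Stream& out;
};

void demo(Stream& anyStream) {                     // works the same with a FileStream, NetworkStream, ...
    TextWriter w(anyStream);
    w.writeLine("hello");
}
Adding a FileStream or a BinaryWriter later means adding one class on one side without touching the other side at all: N stream kinds plus M writer kinds instead of N times M combined classes, which is the combinatoric-explosion point from the answer.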
345,053 | In a library in Java 7, I have a class which provides services to other classes. After creating an instance of this service class, one method of it may be called several times (let’s call it the doWork() method). So I do not know when the work of the service class is completed. The problem is the service class uses heavy objects and it should release them. I set this part in a method (let’s call it release() ), but it is not guaranteed that other developers will use this method. Is there a way to force other developers to call this method after completing the task of service class? Of course I can document that, but I want to force them. Note: I cannot call the release() method in the doWork() method, because doWork() needs those objects when it is called in next. | The pragmatic solution is to make the class AutoCloseable , and provide a finalize() method as a backstop (if appropriate ... see below!). Then you rely on users of your class to use try-with-resource or call close() explicitly. Of course I can document that, but I want to force them. Unfortunately, there is no way 1 in Java to force the programmer to do the right thing. The best you could hope to do is to pick up incorrect usage in a static code analyser. On the topic of finalizers. This Java feature has very few good use cases. If you rely on finalizers to tidy up, you run into the problem that it can take a very long time for the tidy-up to happen. The finalizer will only be run after the GC decides that the object is no longer reachable. That may not happen until the JVM does a full collection. Thus, if the problem you are tying to solve is to reclaim resources that need to be released early , then you have missed the boat by using finalization. And just in case you haven't got what I am saying above ... it is almost never appropriate to use finalizers in production code, and you should never rely on them! 1 - There are ways. If you are prepared to "hide" the service objects from user code or tightly control their lifecycle (e.g. https://softwareengineering.stackexchange.com/a/345100/172 ) then the users code doesn't need to call release() . However, the API becomes more complicated, restricted ... and "ugly" IMO. Also, this is not forcing the programmer to do the right thing . It is removing the programmer's ability to do the wrong thing! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/345053",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/174635/"
]
} |
345,458 | If one needs different JVMs for different architectures I can't figure out what is the logic behind introducing this concept. In other languages we need different compilers for different machines, but in Java we require different JVMs so what is the logic behind introducing the concept of a JVM or this extra step?? | The logic is that JVM bytecode is a lot simpler than Java source code. Compilers can be thought of, at a highly abstract level, as having three basic parts: parsing, semantic analysis, and code generation. Parsing consists of reading the code and turning it into a tree representation inside the compiler's memory. Semantic analysis is the part where it analyzes this tree, figures out what it means, and simplifies all the high-level constructs down to lower-level ones. And code generation takes the simplified tree and writes it out into a flat output. With a bytecode file, the parsing phase is greatly simplified, since it's written in the same flat byte stream format that the JIT uses, rather than a recursive (tree-structured) source language. Also, a lot of the heavy lifting of the semantic analysis has already been performed by the Java (or other language) compiler. So all it has to do is stream-read the code, do minimal parsing and minimal semantic analysis, and then perform code generation. This makes the task the JIT has to perform a lot simpler, and therefore a lot faster to execute, while still preserving the high-level metadata and semantic information that makes it possible to theoretically write single-source, cross-platform code. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/345458",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/266363/"
]
} |
345,593 | I am dealing with a pretty big codebase and I was given a few months to refactor existing code. The refactor process is needed because soon we will need to add many new features to our product and as for now we are no longer able to add any feature without breaking something else. In short: messy, huge, buggy code, that many of us have seen in their careers. During refactoring, from time to time I encounter the class, method or lines of code which have comments like Time out set to give Module A some time to do stuff. If its not timed like this, it will break. or Do not change this. Trust me, you will break things. or I know using setTimeout is not a good practice, but in this case I had to use it My question is: should I refactor the code when I encounter such warnings from the authors (no, I can't get in touch with authors)? | It seems you are refactoring "just in case", without knowing exactly which parts of the codebase in detail will be changed when the new feature development will take place. Otherwise, you would know if there is a real need to refactor the brittle modules, or if you can leave them as they are. To say this straight: I think this is a doomed refactoring strategy . You are investing time and money of your company for something where noone knows if it will really return a benefit, and you are on the edge of making things worse by introducing bugs into working code. Here is a better strategy: use your time to add automatic tests (probably not unit tests, but integration tests) to the modules at risk. Especially those brittle modules you mentioned will need a full test suite before you change anything in there. refactor only the bits you need to bring the tests in place. Try to minimize any of the necessary changes. The only exception is when your tests reveal bugs - then fix them immediately (and refactor to the degree you need to do so safely). teach your colleagues the "boy scout principle" (AKA "opportunistic refactoring" ), so when the team starts to add new features (or to fix bugs), they should improve and refactor exactly the parts of the code base they need to change, not less, not more. get a copy of Feather's book "Working effectively with legacy code" for the team. So when the time comes when you know for sure you need to change and refactor the brittle modules (either because of the new feature development, or because the tests you added in step 1 reveal some bugs), then you and your team are ready to refactor the modules, and more or less safely ignore those warning comments. As a reply to some comments : to be fair, if one suspects a module in an existing product to be the cause of problems on a regular basis, especially a module which is marked as "don't touch", I agree with all of you. It should be reviewed, debugged and most probably refactored in that process (with support of the tests I mentioned, and not necessarily in that order). Bugs are a strong justification for change, often a stronger one than new features. However, this is a case-by-case decision. One has to check very carefully if it is really worth the hassle to change something in a module which was marked as "don't touch". | {
"source": [
"https://softwareengineering.stackexchange.com/questions/345593",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/182425/"
]
} |
345,594 | My app has a large dependency on external endpoints. As time passes some of them are changing and evolving. Some of these changes are breaking changes... So far i have about 10 dependencies for a single project (i have other side-projects too) and keeping it up-to-date is not the easiest task of all... Integration tests seems too broad and too complex to manage this scenario... What other kind of tests should i consider for my apps? Is there a way to have theses tests run automatically like once a day also? | It seems you are refactoring "just in case", without knowing exactly which parts of the codebase in detail will be changed when the new feature development will take place. Otherwise, you would know if there is a real need to refactor the brittle modules, or if you can leave them as they are. To say this straight: I think this is a doomed refactoring strategy . You are investing time and money of your company for something where noone knows if it will really return a benefit, and you are on the edge of making things worse by introducing bugs into working code. Here is a better strategy: use your time to add automatic tests (probably not unit tests, but integration tests) to the modules at risk. Especially those brittle modules you mentioned will need a full test suite before you change anything in there. refactor only the bits you need to bring the tests in place. Try to minimize any of the necessary changes. The only exception is when your tests reveal bugs - then fix them immediately (and refactor to the degree you need to do so safely). teach your colleagues the "boy scout principle" (AKA "opportunistic refactoring" ), so when the team starts to add new features (or to fix bugs), they should improve and refactor exactly the parts of the code base they need to change, not less, not more. get a copy of Feather's book "Working effectively with legacy code" for the team. So when the time comes when you know for sure you need to change and refactor the brittle modules (either because of the new feature development, or because the tests you added in step 1 reveal some bugs), then you and your team are ready to refactor the modules, and more or less safely ignore those warning comments. As a reply to some comments : to be fair, if one suspects a module in an existing product to be the cause of problems on a regular basis, especially a module which is marked as "don't touch", I agree with all of you. It should be reviewed, debugged and most probably refactored in that process (with support of the tests I mentioned, and not necessarily in that order). Bugs are a strong justification for change, often a stronger one than new features. However, this is a case-by-case decision. One has to check very carefully if it is really worth the hassle to change something in a module which was marked as "don't touch". | {
"source": [
"https://softwareengineering.stackexchange.com/questions/345594",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/266922/"
]
} |
345,688 | Update: Without fluent interface, builder pattern can still be done, see my implementation . Edit for possible duplication issues: When should the builder design pattern be used? : My question is about the actual advantages of Builder Pattern( of GoF ). And the chosen answer in the link is about Bloch Builder, which may(See @amon 's answer) or may not be a pattern. Design Patterns are NOT for solving a specific problem(telescoping constructor or so). See reference. So, what make something be a pattern? (The LHS are what pointed out by John Vlissides. The RHS my opinion.) Recurrence. (A pattern should be general, so it can be applied to many problems.) Teaching. (It should let me know how to improve my current solution scenario.) It has a name. (For more effective conversation.) Reference: Pattern Hatching: Design Patterns Applied written by John Vlissides . Chapter one: common mis-understandings about design patterns, if I remember. GoF's implementation of Builder in real life : You can read this answer before reading my notes about builder pattern. It's a great answer, but still, it doesn't solve my questions. And the title is not related. UML of Builder Pattern: Reference design-patterns-stories.com I've read the book of GoF about builder pattern again and again, the followings are some of my notes: Director : A director can be reused. And it contains the algorithm(s) to build a product, step by step. Use the interface provided by Builder . Logically, it create the product. Builder : A builder should be general enough (to let almost any possible different ConcreteBuilder to build their corresponding product. This is the origin of my second question below.) ConcreteBuilder : A concrete builder builds all the parts needed, and know how to assemble them. And keeps track of its product. (It contains a reference of its product.) Client get their final product from a concrete builder. (It's ConcreteBuilder who has the getProduct() method, Builder don't have getProduct() (abstract) method.) Product : It's the complex object to be built. For every ConcreteBuilder , there is a corresponding Product . (This is the origin of my first question below.) And it provide the interface for its corresponding concrete builder to build the logical parts and to assemble them. This is why people confused about Bloch builder and builder pattern of GoF. Bloch builder just makes the interface easier to be used by the concrete builder. (Btw, how to indent this line...) The client : Choose a concrete builder he needs. (Implementor) And choose a director he needs. (Logic) Then inject the concrete builder into the director. Call director's construct() method. Get the product by calling getProduct() of concrete builder. It takes me a lot of effort to remember all these rules, but I have some questions: First: If the Product is complex enough, it should be a bad design. Second: Ok, if you say it's not a bad design. So how can you design the Builder interface to satisfy any ConcreteProduct ? The followings are advantages of the Builder Pattern for me : The Scope: A ConcreteBuilder constrains all components it needs in the same scope. So the client of the Builder don't see anything about/related to Product . Less Parameters in Builder : Since all the methods inside the ConcreteBuilder can share those variables, the methods of ConcreteBuilder should be easier to read and write. (From the book Clean Code , the more parameters a method has, the worse.) 
Dependency Inversion Principle : The Builder interface plays a key role in the builder pattern. Why? Because now both Director (the logic) and ConcreteBuilder (the implementation) follow the same rule to build a thing. After all, I'm not sure. I need the actual answer. I appreciate any different perspectives about what is a builder pattern. Different people will have different definition of their own, but here I'm talking about Builder Pattern of GoF. So please read carefully. Days before, I followed the answer of @Aaron in When would you use the Builder Pattern? [closed] , and thought that was a builder pattern of GoF. Then I post my implementation practice at CodeReview. But people there pointed out that it's not a Builder Pattern. Like @Robert Harvey, I disagreed about it. So I come here for the real answer. | The GoF Builder pattern is one of the less important patterns suggested in the Design Patterns book. I haven't seen applications of the pure GoF Builder pattern outside of parsers, document converts, or compilers. A builder-like API is often used to assemble immutable objects, but that ignores the flexibility of interchangeable concrete builders that the GoF envisioned. The name “Builder Pattern” is now more commonly associated with Joshua Bloch's Builder Pattern, which intends to avoid the problem of too many constructor parameters. If not only applied to constructors but to other methods, this is technique also known as the Method Object Pattern . The Go4 Builder Pattern is applicable when: the object cannot be constructed in one call. This might be the case when the product is immutable, or contains complex object graphs e.g. with circular references. you have one build process that should build many different kinds of products. The latter point is the key to the GoF builder pattern: our code isn't restricted to a specific product, but operates in terms of an abstract builder strategy. An UML diagram for a Builder is very similar to the Strategy Pattern, with the difference that the Builder accumulates data until it creates an object through a non-abstract method. An example might help. The introductory example for the Builder Pattern in the Design Patterns book is a text converter or document builder. A text document (in their model) contains text, the text can have different fonts, and text can be separated by paragraph breaks. An abstract builder provides functions for these. A user (or “director”) can then use this abstract builder interface to assemble the document. E.g. this text Lorem ipsum. Dolor sit? Amet. Might be created like void CreateExampleText(ITextBuilder b)
{
b.SwitchFont(Font.DEFAULT);
b.AddText("Lorem ipsum.");
b.AddParagraphBreak();
b.SwitchFont(Font.BOLD);
b.AddText("Dolor sit?");
b.SwitchFont(Font.DEFAULT);
b.AddParagraphBreak();
b.SwitchFont(Font.ITALIC);
b.AddText("Amet.");
} Now the notable thing is that the CreateExampleText() function does not depend on any specific builder, and does not assume any particular document type. We can therefore create concrete builders for plain text, HTML, TeX, Markdown, or any other format. The plain text builder would of course not be able to represent different fonts, so that method would be empty: interface ITextBuilder {
void SwitchFont(Font f);
void AddText(string s);
void AddParagraphBreak();
}
class PlainTextBuilder : ITextBuilder
{
private StringBuilder sb = new StringBuilder();
public void SwitchFont(Font f) { /* not possible in plain text */ }
public void AddText(string s) { sb.Append(s); }
public void AddParagraphBreak() { sb.AppendLine(); sb.AppendLine(); }
public string GetPlainText() { return sb.ToString(); }
} We can then use this director function and the concrete builder like var builder = new PlainTextBuilder();
CreateExampleText(builder);
var document = builder.GetPlainText(); Note that the StringBuilder itself is also seems to be a builder of this kind: the complete string (= product) is assembled piece by piece by user code. However, a C# StringBuilder is not polymorphic: There is no abstract builder, but only one concrete builder. As such, it doesn't quite fit the GoF design pattern. I discuss this in more detail in my answer to Is “StringBuilder” an application of the Builder Design Pattern? A notable real-world example of the Builder Pattern is the ContentHandler interface in the SAX streaming XML parser . This parser does not build a document model itself, but calls back into a user-provided content handler. This avoids the cost of building a complete document model when the user only needs a simpler data representation. While the user provided content handler is not necessarily a builder, it allows users to inject a document builder of their design into the SAX parser. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/345688",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/249025/"
]
} |
345,805 | I'm fairly new to software engineering, and so as a learning exercise I wrote a chess game. My friend had a look at it and pointed out that my code looked like for (int i = 0; i < 8; i++){
for (int j = 0; j < 8; j++){ while he insisted that it should instead be for (int i = 0; i < CHESS_CONST; i++){
for (int j = 0; j < CHESS_CONST; j++){ with some better symbol name that I can't be bothered to think of right now. Now of course I do know to generally avoid using magic numbers, but I feel like since this number will never change; the name couldn't be that descriptive anyway since the number is used in so many places throughout the code; and anyone going through source code for a chess program should know enough about chess as to know what the 8 is for, there really is no need for a symbolic constant. So what do you guys think? Is this overkill, or should I just go with convention and use a symbol? | IMHO your friend is right in using a symbolic name, though I think the name should definitely be more descriptive (like BOARD_WIDTH instead of CHESS_CONST ). Even when the number will never change through the lifetime of the program, there may be other places in your program where the number 8 will occur with a different meaning. Replacing "8" by BOARD_WIDTH wherever the board width is meant, and using another symbolic name when a different thing is meant makes these different meanings explicit, obvious and your overall program more readable and maintainable. It enables you also to do a global search over your program (or a reverse symbol search, if your environment provides it) in case you need quickly to identify all places in the code which are dependent on the board width. See also this former SE.SE post for a discussion how to (or how not to) pick names for numbers. As a side note, since it was discussed here in the comments : if, in your real program's code, it matters if the variable i refers to rows and j to columns of the board, or vice versa, then it is recommendable to pick variable names which make the distinction clear, like row and col . The benefit of such names is, they make wrong code look wrong. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/345805",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/268029/"
]
} |
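For reference, a sketch of what the suggestion in the answer above might look like in practice (the exact names are just one reasonable choice):
constexpr int BOARD_WIDTH  = 8;   // number of files (columns)
constexpr int BOARD_HEIGHT = 8;   // number of ranks (rows)

void visitAllSquares() {
    // row/col instead of i/j, so accidentally swapping the two dimensions "looks wrong"
    for (int row = 0; row < BOARD_HEIGHT; ++row) {
        for (int col = 0; col < BOARD_WIDTH; ++col) {
            // ... inspect the square at (row, col) ...
        }
    }
}
A later search for BOARD_WIDTH then finds exactly the places that depend on the board width, whereas a search for the literal 8 also hits every unrelated 8 in the program.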
346,867 | This may be a convoluted question, but I'm trying to get a better understanding of statelessness. Based on what I've read, web applications should be stateless, meaning each request is treated as an independent transaction. As a result, Session and Cookies should be avoided (as both of them are stateful). A better approach is to use Tokens, which are stateless because nothing is stored on the server. So I'm trying to understand, how can web applications be stateless when there is data that is being kept for my session (such as items in a shopping cart)? Are these actually being stored in a database somewhere and then periodically being purged? How does this work when you are using a token instead of cookies? And then as a related question, are the major websites (Amazon, Google, Facebook, Twitter, etc.) actually stateless? Do they use tokens or cookies (or both)? | "web applications should be stateless" should be understood as "web applications should be stateless unless there is a very good reason to have state" . A "shopping cart" is a stateful feature by design, and denying that is quite counter-productive. The whole point of the shopping cart pattern is to preserve the state of the application between requests. An alternative which I could imagine as a stateless website which implements a shopping cart would be a single-page-application which keeps the shopping cart completely client-sided, retrieves product information with AJAX calls and then sends it to the server all at once when the user does a checkout. But I doubt I have ever seen someone actually do that, because it doesn't allow the user to use multiple browser tabs and doesn't preserve state when they accidentally close the tab. Sure, there are workarounds like using localstorage, but then you do have state again, just on the client instead of on the server. Whenever you have a web application which requires to persist data between pageviews, you usually do that by introducing sessions. The session a request belongs to can be identified by either a cookie or by a URL parameter you add to every link. Cookies should be preferred because they keep your URLs more handy and prevent your user from accidentally sharing an URL with their session-id in it. But having URL tokens as a fallback is also vital for users which deactivate cookies. Most web development frameworks have a session handling system which can do this out-of-the-box. On the server-side, session information is usually stored in a database. Server-side in-memory caching is optional. It can greatly improve response time, but won't allow you to transfer sessions between different servers. So you will need a persistent database as a fallback. are the major websites (Amazon, Google, Facebook, Twitter, etc.) actually stateless? Do they use tokens or cookies (or both)? Do they allow you to log in? When you then close the tab and revisit the site, are you still logged in? If you are, then they are using cookies to preserve your identity between sessions. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/346867",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
346,993 | Given a very trivial function, int transform(int val) {
return (val + 7) / 8;
} It should be very obvious that it's easy to turn this function into a constexpr function, allowing me to use it when defining constexpr variables, like so: constexpr int transform(int val) {
return (val + 7) / 8;
} My assumption is that this is strictly an improvement, since the function can still be called in a non-constexpr context, and it can now also be used to help define compile-time constant variables. My question is, are there situations where this is a bad idea? Like, by making this function constexpr, can I ever encounter a situation where this function will no longer be usable in a particular circumstance, or where it will misbehave? | This matters only if the function is part of a public interface, and you want to keep future versions of your API binary-compatible. In that case, you have to think carefully about how you want to evolve your API, and where you need extension points for future changes. That makes a constexpr qualifier an irrevocable design decision. You cannot remove this qualifier without an incompatible change to your API. It also limits how you can implement that function, e.g. you would not be able to do any logging within this function. Not every trivial function will stay trivial forever. That means you should preferably use constexpr for functions that are inherently pure functions, and that would actually be useful at compile time (e.g. for template metaprogramming). It would not be good to make functions constexpr just because the current implementation happens to be constexpr-able. Where compile-time evaluation is not necessary, using inline functions or functions with internal linkage would seem more appropriate than constexpr. All of these variants have in common that the function body is “public” and is available in the same compilation unit as the call location. If the function in question is not part of a stable, public API, this is less of an issue since you can change the design at will. But since you now control all call sites, it is not necessary to mark a function constexpr “just in case”. You know whether you are using this function in a constexpr context. Adding unnecessarily restrictive qualifiers might then be considered obfuscation. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/346993",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/227723/"
]
} |
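A minimal sketch of the trade-off described in the answer above. The transform function is the one from the question; the std::array size and the calling code are made-up illustrations:

#include <array>

constexpr int transform(int val) {
    return (val + 7) / 8;
}

// Compile-time use: the result becomes part of a type.
// If constexpr were later removed from transform, this line would stop
// compiling for every caller, which is why the answer calls the qualifier
// an irrevocable API decision.
std::array<char, transform(20)> buffer{};   // std::array<char, 3>

int runtimeUse(int n) {
    return transform(n);   // still usable as an ordinary run-time call
}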
347,014 | I'm thinking of creating a cron job that checks out code, runs code formatters on it, and if anything changed, commits the changes and pushes them back. Most projects that use autoformatters put them in a git hook, but doing it automatically every few hours would remove the burden on each dev of installing the git hook. I would still encourage everyone to write clean, well-formatted code, and maybe I can have the system automatically ping devs when code they wrote gets reformatted, so they know what to do in the future. | Sounds nice, but I would prefer to have people responsible for committing code changes, not bots. Besides, you want to make absolutely sure that those changes do not break anything. For example, we have a rule that orders properties and methods alphabetically. This can have an impact on functionality, for example on the order of data and methods in the WSDL files of WCF contracts. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/347014",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/27135/"
]
} |
347,033 | When developing a system or application that you plan to use with a certain framework, is it best practice to design the system without the framework in mind, or is it better to design the system with the mindset "well, the framework would have an easier time with this"? | Your design should meet the client's needs as closely as it can. Remember that design includes little things like: User experience Functionality How pieces of your application communicate (either with each other or with external entities) None of these things should be dictated by the framework. If it's clear that you will be fighting your framework to accomplish these goals, then choose a new framework that will help you accomplish those goals before you start writing code. Once you've chosen an appropriate toolset (the framework is a tool), then I recommend using the tools the way they are designed to be used. The further you deviate from the framework's design, the more you increase the learning curve for your team and the greater the chance of something going wrong. In Short Design for your users Pick the appropriate tools to accomplish your design Use your tools the way they are designed to be used Further Thoughts: After 20+ years of software engineering, and using several frameworks, I've learned a couple of lessons. All frameworks are a double-edged sword: they both constrain and enable. The issue with deciding on your framework before you look at the big 3 I mentioned above is that you might be compromising a good user experience for a mediocre (at best) one. Or you might be forced to deviate from the framework's design to accomplish some specific functionality. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/347033",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/162522/"
]
} |
347,323 | I am currently in a class for software testing where, for our semester project, we have to perform multiple types of testing on our project, such as unit testing and integration testing. For integration testing, the professor said to use mocks and mocking libraries (like EasyMock and Mockito). I'm getting fairly confused though. Integration testing is testing across classes, modules, services, etc. Why would mocks and stubs be proper to use in integration testing if you are testing multiple classes and services? | If you have a piece of functionality that touches several external components, you might mock all but one to isolate and test a specific component. For example, suppose you have a function that calls a web service and then does something with a database based on the results. You could write three integration tests: a test that mocks the web service call but involves real database connectivity; a test that makes a real web service call but uses mock database connectivity; a test that makes a real web service call and involves a real database connection. If you run all three tests and 1 and 3 fail, there's a good chance that there is a bug in your code that works with the database, since the only test that passed was the one using the mock database connectivity. In general, integration tests don't use mocks, but I have done something like this on occasion. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/347323",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/269929/"
]
} |
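A rough C++ sketch of the first test the answer above describes (mock the web service, use the real database). The interface names, FakeWebService, and processOrder are invented for illustration; the question's context is Java with EasyMock/Mockito, but the structure is the same:

#include <string>

struct WebService {                     // boundary to the external web service
    virtual std::string fetchOrder(int id) = 0;
    virtual ~WebService() = default;
};

struct Database {                       // boundary to the database
    virtual void save(const std::string& record) = 0;
    virtual ~Database() = default;
};

// Code under test: touches both external components.
void processOrder(WebService& ws, Database& db, int id) {
    db.save(ws.fetchOrder(id));
}

// Hand-written stand-in for the web service; a mocking library would generate this.
struct FakeWebService : WebService {
    std::string fetchOrder(int) override { return "canned-order"; }
};

void integrationTest_realDatabase(Database& realDb) {
    FakeWebService fakeWs;
    processOrder(fakeWs, realDb, 42);   // a failure here points at the database-facing code
}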
347,328 | The "Reinvent the wheel" antipattern is a pretty common one - instead of using a ready solution, write your own from scratch. The code base grows needlessly, slightly different interfaces that do the same thing but slightly differently abound, and time is wasted writing (and debugging!) functions that are readily available. We all know this. But there's something on the opposite end of the spectrum. When instead of writing your own function that's two lines of code, you import a framework/API/library, instantiate it, configure it, convert your data to a type the framework accepts, then call that one single function that does exactly what you need, two lines of business logic under a gigabyte of abstraction layers. And then you need to keep the library up to date, manage build dependencies, keep the licenses in sync, and your instantiation code is ten times longer and more complex than if you had just "reinvented the wheel". The reasons may be varied: management strictly opposing "reinvention of the wheel" no matter the cost, someone pushing their favored technology despite marginal overlap with the requirements, a dwindling role of a formerly major module of the system, an expectation of expansion and broader use of the framework which just never arrives, or just misunderstanding the "weight" a couple of import/include/load instructions carry "behind the scenes". Is there a common name for this sort of antipattern? (I'm not trying to start a discussion about whether it's right or wrong, or whether it's a real antipattern, or anything opinion-based; this is a simple, straightforward and objective nomenclature question.) Edit: the suggested "duplicate" talks about overengineering your own code to make it "ready for everything", completely apart from external systems. This thing might in certain cases stem from that, but generally it originates from an "aversion to reinventing the wheel" - code reuse at all costs; if a "ready-made" solution to our problem exists, we will use it, no matter how poorly it fits and at what cost it comes. Dogmatically favoring the creation of new dependencies over code duplication, with total disregard for the costs of integration and maintenance of these dependencies when compared to the cost of creation and maintenance of the new code. | Golden Hammer The golden hammer is a tool chosen only because it is fancy. It is neither cost-effective nor efficient at performing the intended task. source: xkcd 801 (Despite the down-votes, I stand by this answer. It might not exactly be the opposite of re-inventing the wheel semantically, but it fits every example mentioned in the question) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/347328",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/30296/"
]
} |
347,411 | In this 2003 article by Stephen Figgins on linuxdevcenter.com, Bram Cohen's BitTorrent is described as using the "Fix Everything" design pattern. A less common approach that both makes BitTorrent harder to grasp, but worthy of study, is Cohen's use of idempotence. A process is idempotent when applying it more than once causes no further changes. Cohen says he uses a design pattern he calls "Fix Everything," a function that can react to a number of changes without really noting what all it might change. He explains, "you note the event that happened, then call the fix everything function which is written in this very idempotent manner, and just cleans up whatever might be going on and recalculates it all from scratch." While idempotence makes some difficult calculations easier, it makes things a little convoluted. It's not always clear what a call is going to change, if anything. You don't need to know in advance. You are free to call the function, just to be on the safe side. This sounds quite nice on the face of it. However, it seems to me that calling an idempotent "fix everything" function would improve the robustness of the system at the cost of efficiency, and could potentially upset the containing system (which might prefer processes that carefully plan and execute). I can't say that I've used it before, though. I also cannot find the source for his application online (but I did find this one that claims to be based on it). Nor can I find a reference to it outside of this article (and I consider my google-fu to be pretty good), but I did find an entry for "Idempotent Capability" on SOApatterns.org. Is this idea better known by another name? What is the "Fix Everything" design pattern? What are its pros and cons? | Let's say you have an HTML page that is fairly complicated: if you pick something in one dropdown, another control might appear, or the values in a third control might change. There are two ways you could approach this: Write a separate handler, for each and every control, that responds to events on that control and updates other controls as needed. Write a single handler that looks at the state of all the controls on the page and just fixes everything. The second approach is "idempotent" because you can call the handler over and over again and the controls will always be arranged properly, whereas the first set of handlers may have issues if a call is lost or repeated, e.g. if one of the handlers performs a toggle. The logic for the second approach would be a bit more obscure, but you only have to write one handler. And you can always use both solutions, calling the "fix everything" function as needed "just to be on the safe side." The second approach is especially nice when state can come from different sources, e.g. from user input versus state rendered from the server. In ASP.NET, the technique plays very well with the concept of postback because you just run the fix everything function whenever you render the page. Now that I've mentioned events being lost or repeated, and getting state from different sources, I'm thinking it is obvious how this approach maps well to a problem space like BitTorrent's. Cons? Well, the obvious con is that there is a performance hit, because it is less efficient to go over everything all the time. But a solution like BitTorrent is optimized to scale out, not scale up, so it's good for that sort of thing. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/347411",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/102438/"
]
} |
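A compact C++ sketch of the idempotent "fix everything" handler from the answer above, reduced from HTML controls to plain data. The FormState fields and the rules inside fixEverything are invented for illustration:

struct FormState {
    int  age = 0;                    // raw input
    bool hasLicense = false;         // raw input
    bool showLicenseField = false;   // derived UI state
    bool allowSubmit = false;        // derived UI state
};

// Idempotent: recomputes all derived state from the raw inputs, from scratch.
// Calling it once or ten times after any event leaves the same result.
void fixEverything(FormState& s) {
    s.showLicenseField = (s.age >= 18);
    s.allowSubmit = s.showLicenseField ? s.hasLicense : (s.age > 0);
}

void onAgeChanged(FormState& s, int newAge) {
    s.age = newAge;    // note the event that happened...
    fixEverything(s);  // ...then just clean everything up
}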
347,492 | So I am working on a software design using C for a certain processor. The tool-kit includes the ability to compile C as well as C++. For what I am doing, there is no dynamic memory allocation available in this environment and the program is overall fairly simple. Not to mention that the device has almost no processor power or resources. There's really no strong need to use any C++ whatsoever. That being said, there are a few places where I do function overloading (a feature of C++). I need to send a few different types of data and don't feel like using printf-style formatting with some kind of %s (or whatever) argument. I've seen some people who didn't have access to a C++ compiler doing the printf thing, but in my case C++ support is available. Now I'm sure I might get the question of why I need to overload a function to begin with, so I'll try to answer that right now. I need to transmit different types of data out of a serial port, so I have a few overloads that transmit the following data types: unsigned char*
const char*
unsigned char
const char I'd just prefer not to have one method that handles all of these things. When I call the function I just want it to transmit out of the serial port; I don't have a lot of resources, so I want to do barely ANYTHING but my transmission. Someone else saw my program and asked me, "why are you using CPP files?" So, that's my only reason. Is that bad practice? Update I'd like to address some questions asked: An objective answer to your dilemma will depend on: Whether the size of the executable grows significantly if you use C++. As of right now the executable consumes 4.0% of program memory (out of 5248 bytes) and 8.3% of data memory (out of 342 bytes). That is, compiling as C++... I don't know what it would look like for C because I haven't been using the C compiler. I do know that this program will not grow any more, so given how limited the resources are, I'd say I'm okay there... Whether there is any noticeable negative impact on performance if you
use C++. Well if there is, I haven't noticed anything... but then again that could be why I'm asking this question since I don't fully understand. Whether the code might be reused on a different platform where only a
C compiler is available. I know that the answer to this is definitely no. We are actually considering moving to a different processor, but only more powerful ARM-based processors (all of which I know for a fact have C++ compiler tool-chains). | I wouldn't go so far as to call it "bad practice" per se, but neither am I convinced it's really the right solution to your problem. If all you want is four separate functions to handle your four data types, why not do what C programmers have done since time immemorial: void transmit_uchar_buffer(unsigned char *buffer);
void transmit_char_buffer(char *buffer);
void transmit_uchar(unsigned char c);
void transmit_char(char c); That's effectively what the C++ compiler is doing behind the scenes anyway, and it's not that big of an overhead for the programmer. This avoids all the problems of "why are you writing not-quite-C with a C++ compiler", and means nobody else on your project is going to be confused about which bits of C++ are "allowed" and which bits aren't. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/347492",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/219569/"
]
} |
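For comparison, a sketch of the C++ overload set the question describes but does not show. The name transmit, the parameter names, and the stub bodies are guesses made for illustration; the answer's point is that the compiler mangles these into four distinct symbols, much like the hand-named C functions above:

// One name, four overloads (the four types listed in the question).
// Minimal stub bodies keep the sketch self-contained; a real implementation
// would push bytes to the UART instead.
void transmit(char c)                { /* write c to the serial port */ (void)c; }
void transmit(unsigned char byte)    { transmit(static_cast<char>(byte)); }
void transmit(const char* text)      { while (*text) transmit(*text++); }
void transmit(unsigned char* buffer) { transmit(reinterpret_cast<const char*>(buffer)); }  // assumes NUL-terminated data

void example() {
    transmit("OK");                              // resolves to the const char* overload
    transmit(static_cast<unsigned char>(0x7F));  // resolves to the unsigned char overload
}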