source_id (int64) | question (string) | response (string) | metadata (dict) |
---|---|---|---|
312,425 | Is the visibility private of class fields/properties/attributes useful? In OOP, sooner or later, you are going to make a subclass of a class and in that case, it is good to understand and be able to modify the implementation completely. One of the first things I do when I subclass a class is to change a bunch of private methods to protected . However, hiding details from the outer world is important – so we need protected too and not just public . My question is: Do you know about an important use case where private instead of protected is a good tool, or would two options " protected & public " be enough for OOP languages? | Because as you say, protected still leaves you with the ability to "modify the implementation completely". It doesn't genuinely protect anything inside the class. Why do we care about "genuinely protecting" the stuff inside the class? Because otherwise it would be impossible to change implementation details without breaking client code . Put another way, people who write subclasses are also "the outer world" for the person who wrote the original base class. In practice, protected members are essentially a class' "public API for subclasses" and need to remain stable and backwards compatible just as much as public members do. If we did not have the ability to create true private members, then nothing in an implementation would ever be safe to change, because you wouldn't be able to rule out the possibility that (non-malicious) client code has somehow managed to depend on it. Incidentally, while "In OOP, sooner or later, you are going to make a subclass of a class" is technically true, your argument seems to be making the much stronger assumption that "sooner or later, you are going to make a subclass of every class" which is almost certainly not the case. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/312425",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/220013/"
]
} |
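A minimal sketch of the point above, in Python. Python enforces access only by convention (a single leading underscore marks a "protected" member; double-underscore name mangling only approximates "private"), and the class and field names here are invented, but the idea carries over: the private detail stays free to change, while the protected member has become part of the base class's API for subclasses.
class Account:
    def __init__(self, balance):
        self._balance = balance    # "protected" by convention: subclasses may rely on it
        self.__audit_log = []      # "private" (name-mangled): an implementation detail

    def deposit(self, amount):
        self._balance += amount
        # Safe to rename or remove later: no (non-malicious) subclass can
        # accidentally depend on it.
        self.__audit_log.append(amount)

class SavingsAccount(Account):
    def add_interest(self, rate):
        # Fine: _balance is the base class's "API for subclasses" and must stay stable.
        self._balance *= (1 + rate)
        # self.__audit_log would not resolve here (it mangles to
        # _SavingsAccount__audit_log), so that detail remains the base class's to change.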
312,445 | In Google's MNIST tutorial using TensorFlow , a calculation is exhibited in which one step is equivalent to multiplying a matrix by a vector. Google first shows a picture in which each numeric multiplication and addition that would go into performing the calculation is written out in full. Next, they show a picture in which it is instead expressed as a matrix multiplication, claiming that this version of the calculation is, or at least might be, faster: If we write that out as equations, we get: We can "vectorize" this procedure, turning it into a matrix multiplication and vector addition. This is helpful for computational efficiency. (It's also a useful way to think.) I know that equations like this are usually written in the matrix multiplication format by machine learning practitioners, and can of course see advantages to doing so from the standpoints of code terseness or of understanding the mathematics. What I don't understand is Google's claim that converting from the longhand form to the matrix form "is helpful for computational efficiency". When, why, and how would it be possible to gain performance improvements in software by expressing calculations as matrix multiplications? If I were to calculate the matrix multiplication in the second (matrix-based) image myself, as a human, I'd do it by sequentially doing each of the distinct calculations shown in the first (scalar) image. To me, they are nothing but two notations for the same sequence of calculations. Why is it different for my computer? Why would a computer be able to perform the matrix calculation faster than the scalar one? | This may sound obvious, but computers don't execute formulas, they execute code, and how long that execution takes depends directly on the code they execute and only indirectly on whatever concept that code implements. Two logically identical pieces of code can have very different performance characteristics. Some reasons that are likely to crop up in matrix multiplication specifically: Using multiple threads. There is almost no modern CPU that doesn't have multiple cores, many have up to 8, and specialized machines for high-performance computing can easily have 64 across several sockets. Writing code in the obvious way, in a normal programming language, uses only one of those. In other words, it may use less than 2% of the available computing resources of the machine it's running on. Using SIMD instructions (confusingly, this is also called "vectorization", but in a different sense than in the text quoted in the question). In essence, instead of 4 or 8 or so scalar arithmetic instructions, give the CPU one instruction that performs arithmetic on 4 or 8 or so registers in parallel. This can literally make some calculations (when they're perfectly independent and fit the instruction set) 4 or 8 times faster. Making smarter use of the cache. Memory accesses are faster if they are temporally and spatially coherent, that is, consecutive accesses are to nearby addresses, and when accessing an address twice you access it twice in quick succession rather than with a long pause. Using accelerators such as GPUs. These devices are very different beasts from CPUs, and programming them efficiently is a whole art form of its own.
For example, they have hundreds of cores, which are grouped into groups of a few dozen cores, and these groups share resources — they share a few KiB of memory that is much faster than normal memory, and when any core of the group executes an if statement all the others in that group have to wait for it. Distributing the work over several machines (very important in supercomputers!), which introduces a huge set of new headaches but can, of course, give access to vastly greater computing resources. Smarter algorithms. For matrix multiplication, the simple O(n^3) algorithm, properly optimized with the tricks above, is often faster than the sub-cubic ones for reasonable matrix sizes, although for large enough matrices the sub-cubic ones win. For special cases such as sparse matrices, you can write specialized algorithms. A lot of smart people have written very efficient code for common linear algebra operations, using the above tricks and many more, and usually even with stupid platform-specific tricks. Therefore, transforming your formula into a matrix multiplication and then implementing that calculation by calling into a mature linear algebra library benefits from that optimization effort. By contrast, if you simply write the formula out in the obvious way in a high-level language, the machine code that is eventually generated won't use all of those tricks and won't be as fast. This is also true if you take the matrix formulation and implement it by calling a naive matrix multiplication routine that you wrote yourself (again, in the obvious way). Making code fast takes work, and often quite a lot of work if you want that last ounce of performance. Because so many important calculations can be expressed as a combination of a couple of linear algebra operations, it's economical to create highly optimized code for these operations. Your one-off specialized use case, though? Nobody cares about that except you, so optimizing the heck out of it is not economical. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/312445",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/74149/"
]
} |
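A rough, machine-dependent illustration of the answer above, assuming NumPy is installed: the same y = Wx + b is computed once with explicit Python loops and once by handing the matrix form to an optimized linear algebra library.
import time
import numpy as np

n = 512
W = np.random.rand(n, n)
x = np.random.rand(n)
b = np.random.rand(n)

# "Longhand" form: the obvious, scalar-by-scalar translation of the equations.
start = time.perf_counter()
y_loop = np.zeros(n)
for i in range(n):
    s = 0.0
    for j in range(n):
        s += W[i, j] * x[j]
    y_loop[i] = s + b[i]
loop_seconds = time.perf_counter() - start

# Matrix form: one call into a tuned BLAS routine (threads, SIMD, cache blocking).
start = time.perf_counter()
y_mat = W @ x + b
matrix_seconds = time.perf_counter() - start

print(f"loops: {loop_seconds:.4f}s  matrix form: {matrix_seconds:.6f}s")
print("same result:", np.allclose(y_loop, y_mat))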
312,488 | I sent the code for a job application and got the following review: Regarding the Project Structure : Physical separation is absent.
Logical separation is present but not up to best practices. Things
seem to be separated "by the order they were needed to the solution”. Regarding Coding Conventions : White spacing is inconsistent and
outside of conventions, disregard for immutability, explicit referral
to "self", etc. Regarding readme documentation practices : Readme is not up to our
best practices regarding usage, assumptions or an explanation of how
the solution was setup. Regarding maintainability, extensibility, scalability or performance
Architecture : No signs of architecture. App can be described as a
massive view controller consuming services from a bunch of helper
classes. Regarding DRY and SOLID principles : There's duplicated code in
several places and in terms of SOLID principles, ViewController holds
several responsibilities. Good observations: Regarding the Use of a package manager for dependencies : CocoaPods was
used. Error handling and logging : There was concern with error handling and
logging in some degree. Can someone give me a detailed explanation of each issue presented by the reviewer? In case someone is curious about my response to the challenge: Github Link Sorry for the long text but I think I'm helping other people by sharing my experience. | Tackling each of these at a time: Regarding the Project Structure This one can be subjective at the best of times, but projects are typically structured by logically dividing your code into sub-folders or sub-projects based on their general area of responsibility, or which 'layers' they belong to. For example, Models, Views, Controllers, Core Application Logic, Shared/Common components, etc. Regarding Coding Conventions : White spacing is inconsistent and outside of conventions, disregard for immutability, explicit referral to "self", etc. When writing code it is good to adhere strictly to a coding standard. For example: https://github.com/raywenderlich/swift-style-guide Where coding standards are concerned, one of the most important things is consistency . Inconsistent code is not nice to work with; it just looks sloppy and unprofessional. Immutability is explained here: https://en.wikipedia.org/wiki/Immutable_object Regarding readme documentation practices It's not clear to me what the reviewer wanted in the readme, but your readme seems to be virtually empty. Presumably the reviewer expected some user-level documentation written in some user-friendly plain english language (e.g. how to use the app for the first time, what it is, what it does, how it works, etc.) Regarding maintainability, extensibility, scalability or performance Architecture In many ways, this crosses-over with the point about "SOLID" and "DRY" principles. But also indicates that your solution is lacking in logical separation of different "layers". In other words, the reviewer thought you had created a "Big Ball of Mud" . It's common for applications and systems to be comprised of several layers, which are each cleanly separated from each other; for example: Data Layer (i.e. the Data Access Layer which works with persistent data) Business Logic Layer (Core domain logic - e.g. the logic which actually handles all the requests, processes all the data, calculates results, etc.). Application Layer (Application-specific logic - e.g. creating a request based on User input to call some function in the Business Logic layer). Presentation Layer (View logic - e.g. MVC pattern; handling user interactivity, layout, presentation of data, etc.). The reviewer mentioned some specific words: Maintainability - for code to be maintainable it needs to be easy for someone to read, understand and follow. Classes should be loosely coupled. Functions should have low cyclomatic complexity. It needs to be easily testible, and have as much unit test coverage as possible. Again, this ties in with SOLID/DRY. Extensibility - means the code should be designed in such a way whereby adding new functionality does not involve diving in to change a lot of existing classes/functions. SOLID: Open/Closed Principle Scalability/Performance - presumably the reviewer considered that your code would not scale up well under heavy use. 
Regarding DRY and SOLID principles These are common software design principles which are worth spending time learning, and making sure you have a clear understanding: SOLID - https://en.wikipedia.org/wiki/SOLID_(object-oriented_design) DRY - https://en.wikipedia.org/wiki/Don%27t_repeat_yourself Whoever reviewed your code clearly identified some violations in the solution you submitted. Typical "hallmarks" of code which violates these principles includes (but is not limited to) Classes doing too many things (i.e. having too many responsibilities) Long functions (somewhat subjective, but many people consider anything over 30 lines to be "too long") Classes with too many functions logic which is repeated in several places and could be rationalised down to a single class and/or function I had a quick look at your code, and your ViewController definitely violates these principles, particularly your viewDidLoad() method, which seems to be doing far too many things. Overall the impression I get (and which I expect the reviewer had) from looking at your code, is that there's not really any evidence that you have much experience in dealing with complexity or working with code written in a team of developers. Most likely their main concern about you based on the challenge is that you wouldn't be able to take a project which is many times bigger than this challenge, with a team of people, and be able to break it down into layers and modules, or to structure the code in a way which other developers could work with. But as a learning experience, it seems like you got some good, valuable feedback; your next step might be to take the solution that you've got, try to structure it properly with clean separation between your different layers, and use the project to learn how to apply SOLID principles. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/312488",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/82775/"
]
} |
312,511 | I am a hobbyist programmer and a beginner. Most of the time, I cannot solve the problem while sitting in front of the computer. For example, I was trying to find out if one number is a power of another. I couldn't figure out the solution until I grabbed a pen and a paper then analyzed the problem. In roughly 3 minutes I solved it and wrote the script in Python. Sometimes I can solve the problem while sitting in front of a computer, but with some struggle. Is that ok? | I tend to solve my most difficult problems: In front of a whiteboard (sometimes without even drawing anything - just thinking about how to visualize a problem can sometimes lead to a solution) While explaining them to colleagues Looking out of the window While taking a walk Under the shower On the toilet Going away from the monitor is often very helpful for concentrating on the problem itself and not just on typing out an implementation. The problem solving happens in your head. Typing in the program code is just how you explain your solution to the computer. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/312511",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/213862/"
]
} |
312,568 | Ok, I know the title of this question is almost identical to When should I use event based programming? but the answers of said question have not helped me in deciding whether I should use events in the particular case I'm facing. I'm developing a small application. It's a simple app, and for the most part its functionality is basic CRUD. Upon certain events (when modifying certain data) the application must write a local copy of said data in a file. I'm not sure about what's the best way to implement this. I can: Fire events when the data is modified and bind a response (generate the file) to such events. Alternatively, implement the observer pattern. That seems like unnecessary complexity. Call the file-generating code directly from the code that modifies the data. Much simpler, but it seems wrong that the dependency should be this way, that is, it seems wrong that the core functionality of the app (code that modifies data) should be coupled to that extra perk (code that generates a backup file). I know, however, that this app will not evolve to a point at which that coupling poses a problem. What's the best approach in this case? | Follow the KISS principle: Keep It Simple, Stupid, or the YAGNI principle: You Ain't Going to Need It. You can write the code like: void updateSpecialData() {
// do the update.
backupData();
} Or you can write code like: void updateSpecialData() {
// do the update.
emit SpecialDataUpdated();
}
void SpecialDataUpdatedHandler() {
backupData();
}
void configureEventHandlers() {
connect(SpecialDataUpdate, SpecialDataUpdatedHandler);
} In the absence of a compelling reason to do otherwise, follow the simpler route. Techniques like event handling are powerful, but they increase the complexity of your code. It requires more code to get working, and it makes what happens in your code harder to follow. Events are very critical in the right situation (imagine trying to do UI programming without events!) But don't use them when you can KISS or YAGNI instead. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/312568",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/129041/"
]
} |
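For comparison, the two routes from the answer above sketched in Python; the function names mirror the answer's pseudocode and are purely illustrative.
def backup_data():
    print("writing local backup file")

# Route 1: the simple, direct call. Short, and easy to follow when reading the code.
def update_special_data():
    # ... do the update ...
    backup_data()

# Route 2: a tiny homegrown event mechanism. More moving parts for the same effect.
_handlers = {"special_data_updated": []}

def connect(event, handler):
    _handlers[event].append(handler)

def emit(event):
    for handler in _handlers[event]:
        handler()

def update_special_data_evented():
    # ... do the update ...
    emit("special_data_updated")

connect("special_data_updated", backup_data)
update_special_data()          # backs up directly
update_special_data_evented()  # backs up via the event indirection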
312,584 | Seeing as how there are no distinct Unmodifiable Collections interfaces, aren't you just setting yourself up for runtime exceptions by returning Unmodifiable Collections from method invocations? example: public class Start {
public static void main(String[] args) {
try {
List<String> listA = getListA();
listA.add("whatever");
System.out.println("listA: " + listA.get(0));
List<String> listB = getListB();
listB.add("whatever");
} catch(Exception e) {
System.out.println("SURPRISE!!! you should have known that listB was unmodifible.");
}
}
static List<String> getListA() {
return new ArrayList<>();
}
static List<String> getListB() {
return Collections.unmodifiableList(new ArrayList<>());
}
} At best, in javadoc I could write in bold letters "the returned Collection is Unmodifiable!"? In what scenario is returning Unmodifiable Collections worth the risk of runtime exceptions? | | {
"source": [
"https://softwareengineering.stackexchange.com/questions/312584",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/215076/"
]
} |
312,600 | In the world of pirates stealing software, one common technique is to download a trial or demo, patch it, and upload this as the stolen version of a piece of software or game. This makes me think: if I sell software one day, it would be possible to set up a custom server that builds a specific package, personalised to the user, when he downloads a trial. For example, the binary would contain identification numbers and checksums (like a signature using my private key), etc. This way, if somebody cracks a trial of some software of mine and uploads that, I could download it and extract its identification. Would that have any legal significance if I wanted to sue the cracker? Compare it to somebody who buys music on iTunes and is stupid enough to upload it with his name still in the files. The only difference is that I would try to hide this information, so it's not that easy to spot. Such information is most likely to remain in the binary, since it has nothing to do with the trial vs. full version code in the binary. Or it could sit in the least significant bit of the red channel of a PNG image included in the software, for example. A cracker would most likely never notice this, unless he downloads the software multiple times and bindiffs the packages. If he does that, he could try to throw out the information (and optionally a check procedure in the binary that verifies the data is still there and valid). Then the cracker would have successfully removed the identification information. I feel like this would be an interesting thing to work on and develop, but if it isn't going to help me in any way to protect my software or sue the crackers, then it's probably not worth all the effort. |
| {
"source": [
"https://softwareengineering.stackexchange.com/questions/312600",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4801/"
]
} |
312,724 | Background I currently have a situation where I have an object that is both transmitted and received by a device. This message has several constructs, as follows: public void ReverseData()
public void ScheduleTransmission() The ScheduleTransmission method needs to call the ReverseData method whenever it is called. However, there are times where I will need to call ReverseData externally (and I should add outside the namespace entirely ) from where the object is instantiated in the application. As for the "receive" I mean that ReverseData will be called externally in an object_received event-handler to un-reverse the data. Question Is it generally acceptable for an object to call its own public methods? | I would say it's not only acceptable but encouraged especially if you plan to allow extensions. In order to support extensions to the class in C#, you would need to flag the method as virtual per the comments below. You might want to document this, however, so that someone isn't surprised when overriding ReverseData() changes the way ScheduleTransmission() works. It really comes down to the design of the class. ReverseData() sounds like a fundamental behavior of your class. If you need to use this behavior in other places, you probably don't want to have other versions of it. You just need to be careful that you don't let details specific to ScheduleTransmission() leak into ReverseData(). That will create problems. But since you are already using this outside of the class, you probably have already thought that through. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/312724",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/219569/"
]
} |
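A small Python sketch of the situation described above; the class and method names are adapted from the question and otherwise invented. Since Python methods behave like C# virtual methods, a subclass that overrides reverse_data also changes what schedule_transmission does, which is exactly the behavior the answer suggests documenting.
class Message:
    def __init__(self, payload):
        self.payload = payload

    def reverse_data(self):
        """Public: also called externally, e.g. from an object_received handler."""
        self.payload = self.payload[::-1]

    def schedule_transmission(self):
        # Calling our own public method keeps a single definition of the reversal.
        self.reverse_data()
        print("queued for transmission:", self.payload)

msg = Message("abc123")
msg.schedule_transmission()  # reverses internally, then queues
msg.reverse_data()           # the same public method, called by the receiving side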
312,781 | My IDE ( NetBeans ) type checks my Collections while I am typing code. But then, why do I have to cast the returned object of Object.clone() ? Which is fine. No harm no foul. But still, I don't understand. Is type checking, without casting, the returned object of Object.clone() not possible? The generics framework makes me think the IDE could check the type of object references on the right-side of the " = " mark without casting while I am typing? I don't get it. addendum My usage case was just that I had a private Calendar field, pubdate . I was going to write: Calendar getPubdate() {
return pubdate;
} but there is a risk that the invoker could modify my pubdate , so I returned a copy: Calendar getPubdate() {
return (Calendar) pubdate.clone();
} Then, I wondered why I needed to cast pubdate.clone() . The method signature has the type right there. NetBeans should be able to figure that one out. And NetBeans seemed to be doing something similar with regard to Collections . | why do I have to cast the returned object of Object.clone() ? Because it returns Object . The generics framework makes me think the IDE could check the type of object references on the right-side of the " = " mark without casting while I am typing? I don't get it. Object.clone is not generic. If generics had existed when clone was designed, it probably would have looked like this (using F-Bounded Polymorphism): interface Cloneable<T extends Cloneable<T>> {
T clone();
} If Java had a MyType feature, it would maybe look like this: interface Cloneable {
this clone();
} But Generics didn't exist when Object.clone was designed, and Java doesn't have MyTypes, so the type-unsafe version of Object.clone is what we have to work with. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/312781",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/215076/"
]
} |
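The same idea can be sketched with Python's type hints, where a TypeVar bound to the base class plays roughly the role of the F-bounded parameter (or of a "MyType"): a static checker such as mypy then knows clone() returns the receiver's own type, so no cast is needed. This is only an analogy to the answer's Java sketches, not how java.lang.Object.clone actually works.
import copy
from typing import TypeVar

TCloneable = TypeVar("TCloneable", bound="Cloneable")

class Cloneable:
    def clone(self: TCloneable) -> TCloneable:
        # The return type is the receiver's own (sub)type; no downcast required.
        return copy.deepcopy(self)

class Calendar(Cloneable):
    def __init__(self, year: int) -> None:
        self.year = year

def get_pubdate(pubdate: Calendar) -> Calendar:
    return pubdate.clone()  # statically known to be a Calendar, not a bare object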
312,983 | Lately I've been trying to wrap my mind about the following fact. On one hand, there is a host of coding guidelines and standards for what is considered to be "healthy", "clean", "well-written" and so on code. See the "Clean Code" that appears to be widely discussed here as well. Example rule: 7 line long methods and 1 or 2 levels of indentation. Code that doesn't follow is somehow expected to die of poor maintainability. On the other hand, I get to work with OpenCV, OpenCascade, VTK, etc. It's scientific code. They have 2 page long methods (sen myself), OpenCascade has a method or a class split into 10 files (no jokes here), VTK is a mess at times, too. Yet these projects prosper, are maintained and widely used! Where's the catch? Are we allowed to write scientific, math-heavy code in a way that it just works, and we can maintain it? Is thee a separate set of standards for such projects, if any? Might be a naive question, but I'm in what seems to be a programming void trying to build up set of rules how to do and not to do things, which is the way I've been taught to work at high school. Ever since I graduated, I've had almost no guideline support with the things I've had to do, mainly programming - no-one bothers to teach that. | Is scientific code a different enough realm to ignore common coding standards? No, it's not. Research code is often "throw away" and written by people who are not developers by background, however strong their academic credentials. Some of the research code I wrote would make current me cry . But it worked! One thing to consider is the gatekeepers to projects drive what gets included. If a large project started as an academic/research code project, ends up working, and is now a mess, someone has to take the initiative to refactor it. It takes a lot of work to refactor existing code that is not causing problems. Especially if it is at all domain specific or does not have tests. You will see that OpenCV has a style guide that is very comprehensive, even if not perfect. Applying this retroactively to all existing code? That is.. not for the faint of heart. This is even more difficult if all that code works. Because it's not broken. Why fix it? Yet these projects prosper, are maintained and widely used! This is the answer, in a sense. Working code is still useful and so it is more likely to be maintained. It might be a mess, especially initially. Some of these projects probably started as a 1-off project that "would not need to be reusued ever and could be thrown away." Also consider that if you are implementing a complex algorithm it may make more sense to have larger methods because you (and others familiar with the scientific side) can conceptually understand the algorithm better. My thesis work was related to optimization. Having the main algorithm logic as one method was considerably easier to understand than it would have been trying to break it apart. It certainly violated the "7 lines per method" rule but it also meant that another researcher could look at my code and more quickly understand my modifications to the algorithm. If this implementation was abstracted away and designed well, this transparency would be lost to non programmers . To fellow answerers: This question refers to the code base of open-source libraries for computationally intensive tasks in one or more scientific domains. This question is not about throwaway code. Please pause for a moment to make sure you grasp every highlighted aspect before writing an answer. 
I think people often have this idea that all open source projects start as, "hey I have a great idea for a library that will be wildly popular and used by thousands/millions of others" and then every project happens like that. Reality is that many projects are started and die. A ridiculously tiny percentage of projects "make it" to the level of OpenCV or VTK etc. OpenCV started as a research project from Intel. Wikipedia describes it as being part of a "series of projects." Its first non-beta release was 2006, or seven years after it was first started. I suspect that the goal initially was meaningful beta releases, not perfect code. Additionally, the "ownership" of OpenCV has changed significantly. This makes standards change, unless all responsible parties adopt the exact same standards and keep them for the duration of the project. I also should point out that OpenCV was around for several years before the Agile Manifesto that Clean Code derives inspiration from was published (and VTK almost 10). VTK was started 17 years prior to the publishing of Clean Code (OpenCV was "only" 9 years prior). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/312983",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/220546/"
]
} |
313,254 | I have some problems understanding the concept of a runtime library , especially the Python one.
So I have written some a hello world python program and intend to execute it, so I write python ./hello_world.py . What steps happens between me hitting the Enter button and the machine code generated from my python code being executed on my CPU? And how does this relate to the Python runtime system and/or library? | For as diverse as they are, there are a handful of common concepts that all serious, modern programming languages share. Two of them are the core of the answer for your questions above. What steps happens between me hitting the Enter button and the machine code generated from my python code being executed on my CPU? The code gets parsed, analyzed, and fed into an interpreter. This is all about a very important area of computer science known as compiler theory . A compiler is a program that translates code from one language (your source code) to another language (typically machine code, though "transpilers" that translate from one high-level language to another do exist). This is a really massive topic that you could spend years researching, but here's the basic version: The compiler begins with a parser , a routine that reads your source code and applies the syntax rules of the language to it to figure out whether it makes sense as valid Python (in your case) code. If it doesn't, the parser will throw an error and the compiler bails out, but if it does, the parser outputs what's known as an Abstract Syntax Tree, or AST for short. The AST is a tree data structure whose nodes each contain an element of the syntax. For example, if you say x = 5 , you could end up with a BinaryExpression node with an operator value of = , a Left value of ReferenceExpression(x) and a Right value of IntegerLiteralExpression(5) . Your whole program can be represented by a big tree like this. Once the parser produces an AST, the second phase is semantic analysis . In plain English, this means "figure out what this AST means." It checks the AST to determine whether you did anything that's illegal even though it's a valid parse, (for example, trying to call a 1-argument function with 3 arguments,) and raises errors if you do. Otherwise, it analyzes the AST and performs edits to it to make it simpler for a machine to understand. The third phase is code generation. Once you have a fully analyzed, simplified, valid AST, you feed it into the generator, which walks the AST and produces code in the output language. This is your finished product. With Python, it uses an interpreter rather than a compiler. An interpreter works in exactly the same way as a compiler, with one difference: instead of code generation, it loads the output in-memory and executes it directly on your system. (The exact details of how this happens can vary wildly between different languages and different interpreters.) And how does this relate to the Python runtime system and/or library? All but the very simplest languages come with a set of predefined functions that are important to a large percentage of users and would be difficult for the users to implement on their own for one reason or another. Their code can call into these functions without needing any third-party libraries. (For example, in Python you have print , which sends output to stdout . Good luck implementing that on your own!) This set of functions is generally collected in a shared library that the code can call into at run-time, which is why it's known as the language runtime library, or simply "the runtime" for short. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/313254",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/217255/"
]
} |
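The first stages described above can be watched from inside Python itself using the standard tokenize, ast and compile machinery (exact token and node names vary slightly between Python versions):
import ast
import io
import tokenize

source = "x = 5"

# Tokenizing: the raw character stream becomes typed tokens.
for tok in tokenize.generate_tokens(io.StringIO(source).readline):
    print(tokenize.tok_name[tok.type], repr(tok.string))

# Parsing: the tokens become an abstract syntax tree.
tree = ast.parse(source)
print(ast.dump(tree))
# roughly: Module(body=[Assign(targets=[Name(id='x')], value=Constant(value=5))])

# CPython then compiles the AST to bytecode, which the interpreter loop executes.
bytecode = compile(tree, "<example>", "exec")
exec(bytecode, {})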
313,324 | In most coding languages (if not all) you need to declare variables. For example in C# if its a number field then int PhoneNumber If I'm using normal English language I do not need to declare PhoneNumber as int to use it. For example if I ask my friend Sam to give me his phone number I say: "Sam give me the phonenumber" I wouldn't say "Char(20) Sam give me the int phoneNumber" Why do we have to specify data type at all? | When you use natural language to refer to information, it is not very precise, and in particular, doesn't communicate to others much about your intent. Similar problems occur when trying to do math in natural language: it just isn't precise enough. Programming is complex; errors are all too easy to come by. Types are part of a system of checks that are designed to prevent illegal program states, by detecting error conditions. Different languages use types differently: some languages use types heavily to detect errors at compile time. Almost all languages have some notion of incompatible types as a runtime error. Usually a type error indicates a bug of some sort in the program. When we allow programs to continue despite errors, we likely get very bad answers. We prefer to stop the program rather than get bad or incorrect answers. Put another way, types express constraints on the behavior of the program. Constraints, when enforced by some mechanism, provide guarantees. Such guarantees limit the amount of reasoning necessary to think about the program, thus simplifying the task of reading and maintaining the program for programmers. Without types, and their implication of tools (i.e. the compiler) that detect type errors, the programming burden is considerably higher, and thus, more costly. It is true that (many) humans easily distinguish between a European, United States, and international phone number. However, the computer doesn't really "think", and would, if told, dial a united states phone number in europe, or vice versa. Types, for example, are a good way to distinguish between these cases, without having to teach the computer how to "think". In some languages, we can get a compile time error for trying to mix a European phone number on an American phone system. That error tells us we need to modify our program (perhaps by converting the phone number to an international dialing sequence, or, by using the phone number in Europe instead), before we even attempt to run the program. Further, as the computer doesn't think, the name of the field or variable (e.g. phonenumber ), means nothing to the computer. To the computer, that field/variable name is just "blah123". Think about how your program would be if all variables were "blahxxx". Yikes. Well, that's what the computer sees. Providing a type gives the computer a inkling into the meaning of the variable that it simply cannot infer from its name alone. Further, as @Robert says, in many modern languages we don't have to specify types as much as we did in the old days, as languages like C# perform "type inference", which is a set of rules to determine the proper type for a variable in context. C# only provides for type inference on local variables, but not on formal parameters, or class or instance fields. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/313324",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/221254/"
]
} |
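The phone-number example in the answer above can be made concrete with Python's typing.NewType: it costs nothing at run time, but a static checker such as mypy will reject the "European number dialed on the US system" mistake before the program ever runs. The names below are invented for illustration.
from typing import NewType

USPhone = NewType("USPhone", str)
EUPhone = NewType("EUPhone", str)

def dial_us(number: USPhone) -> None:
    print("dialing on the US network:", number)

sam = EUPhone("+49 30 1234567")

dial_us(USPhone("+1 555 0100"))  # fine: the type documents and enforces the intent
dial_us(sam)                     # mypy error: EUPhone is not a USPhone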
313,493 | I mean it in this way: <?php
$number1 = 5; // (Type 'Int')
$operator1 = +; // (Type non-existent 'Operator')
$number2 = 5; // (Type 'Int')
$operator2 = *; // (Type non-existent 'Operator')
$number3 = 8; // (Type 'Int')
$test = $number1 $operator1 $number2 $operator2 $number3; //5 + 5 * 8.
var_dump($test);
?> But also in this way: <?php
$number1 = 5;
$number3 = 9;
$operator1 = <;
if ($number1 $operator1 $number3) { //5 < 9 (true)
echo 'true';
}
?> It doesn't seem like any languages have this - is there a good reason why they do not? | Operators are just functions under funny names, with some special syntax around. In many languages, as varied as C++ and Python, you can redefine operators by overriding special methods of your class. Then standard operators (e.g. + ) work according to the logic you supply (e.g. concatenating strings or adding matrices or whatever). Since such operator-defining functions are just methods, you can pass them around as you would a function: # python
action = int.__add__
result = action(3, 5)
assert result == 8 Other languages allows you to directly define new operators as functions, and use them in infix form. -- haskell
plus a b = a + b -- a normal function
3 `plus` 5 == 8 -- True
(+++) a b = a + b -- a funny name made of non-letters
3 +++ 5 == 8 -- True
let action = (+)
1 `action` 3 == 4 -- True Unfortunately, I'm not sure if PHP supports anything like that, and if supporting it would be a good thing. Use a plain function, it's more readable than $foo $operator $bar . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/313493",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/221488/"
]
} |
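In Python, the standard operator module already packages the built-in operators as ordinary functions, which gives exactly the "operator stored in a variable" behavior the question asks about; this is a rough equivalent of what the PHP snippets are reaching for.
import operator

number1, number2, number3 = 5, 5, 8
op1, op2 = operator.add, operator.mul

# 5 + (5 * 8); note that precedence is now your job, not the parser's.
print(op1(number1, op2(number2, number3)))  # 45

if operator.lt(5, 9):  # 5 < 9
    print("true")

# Or dispatch on strings, e.g. parsed from user input:
ops = {"+": operator.add, "*": operator.mul, "<": operator.lt}
print(ops["<"](5, 9))  # True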
313,617 | We are a small software company with one product. We use scrum , and our developers choose the features they want to include in each sprint. Unfortunately over the past 18 month period, the team haven't once delivered the features they committed to for a sprint. I've read a lot of posts/answers stating something along the lines of "software is done when it's done, no sooner, no later... it does not help to put pressure on the team, to put more people on it, ..." I've received similar feedback from one of the developers upon my question how we can improve the success rate of sprints. Oh, and yes we do use retrospectives . My question is basically: When is it fair to look for the problem in the quality of the developers? I'm starting to think that if you get to choose your own work/features and still fail each sprint either:
- You are not able to oversee the complexity of your own code;
- or the code is so complex that no one can oversee the complexity. Am I missing something? | Am I missing something? YES! You went 18 months - or somewhere in the neighborhood of 36 sprints with retrospectives, but somehow couldn't fix it? Management didn't hold the team accountable, and then their management didn't hold them accountable for not holding the team accountable? You are missing that your company is pervasively incompetent . So, how to fix it. You (the dev) stop committing to so much work. If the stories are so big that you cannot do that, then you need to break the stories down into smaller chunks. And then you get to hold people accountable for getting done what they say they will get done. If it turns out they can only get a tiny feature done each sprint, then figure out why and make it better (which may involve replacing the developer). If it turns out they can't figure out how to commit to reasonable amounts of work, you fire them . But before that, I would look at management that let things go for that long, and figure out why they're not doing their job. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/313617",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/221650/"
]
} |
313,726 | I am currently developing a new web application based on a rich JavaScript client which communicates with multiple REST web services on my server. The application is intended to be used in at least two countries with different languages, so we need to localize it. My question is where I should manage the localization: should the REST services receive requests and send answers with localized data, or should the client receive and send generic data and then be responsible for the localization? | Your REST API will be easier to use by others if you provide string IDs instead of translated strings. Using an API which returns "E_NOT_AUTHORIZED" is more straightforward than one that returns some human-language, possibly localized, string. Also, you might want to change the localized strings in future versions, which would be a breaking API change. With the string ID approach, you still return "E_NOT_AUTHORIZED", keeping your API compatible. If you use a framework like Angular.js, it is easy to implement language hot-switching with the string ID approach: you just load another stringtable, and all strings automagically change their language, because you just use some filter logic in your templates, like {{errorStringID | loc}}. Another consideration: to reduce your server load, keep your back-end as simple as possible. You will be able to serve more clients with the same number of servers. Deliver your stringtables through a CDN, and do the localization in the front-end. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/313726",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/184535/"
]
} |
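A toy sketch of the string-ID approach recommended above, with the server speaking only stable IDs and the client owning the per-locale stringtables (here plain dictionaries; in a real front end they might be JSON files served from a CDN). All names are illustrative.
# Server side: the API returns stable identifiers, never localized text.
def delete_employee(user_is_admin: bool) -> dict:
    if not user_is_admin:
        return {"error": "E_NOT_AUTHORIZED"}
    return {"status": "OK"}

# Client side: one stringtable per locale, hot-swappable without any API change.
STRINGS = {
    "en": {"E_NOT_AUTHORIZED": "You are not allowed to do that."},
    "fr": {"E_NOT_AUTHORIZED": "Vous n'avez pas le droit de faire cela."},
}

def localize(string_id: str, locale: str) -> str:
    return STRINGS[locale].get(string_id, string_id)

response = delete_employee(user_is_admin=False)
print(localize(response["error"], "fr"))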
313,819 | I read an article on the BBC. One of the examples it gave was that people with the surname 'Null' have problems entering their details on some websites. No explanation is given about the error they are facing, but as far as I know the string 'Null' and the actual Null value are completely different (from a database point of view). Why would this cause problems in a database? | It doesn't cause database problems. It causes problems in applications written by developers who don't understand databases. At the root of the problem is that much database-related software displays a NULL record as the string NULL. When an application then relies on the string form of a NULL record (likely also using case-insensitive comparison operations), such an application will consider any "null" string to be NULL. Consequently, a name Null would be considered not to exist by that application. The solution is to declare non-null columns as NOT NULL in the database, and not to apply string operations to database records. Most languages have excellent database APIs that make string-level interfaces unnecessary. They should always be preferred, also because they make other mistakes such as SQL injection less likely. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/313819",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/221885/"
]
} |
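A small sketch of the failure mode described above, using Python's built-in sqlite3 module: the database stores the surname 'Null' perfectly well, and the bug only appears when application code reasons about the displayed string form of a missing value.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (id INTEGER PRIMARY KEY, surname TEXT)")
conn.execute("INSERT INTO people (surname) VALUES (?)", ("Null",))  # a real Mr. Null
conn.execute("INSERT INTO people (surname) VALUES (?)", (None,))    # genuinely missing

for pk, surname in conn.execute("SELECT id, surname FROM people"):
    # Buggy pattern: render NULL as the string "NULL", then reason about the string.
    display = "NULL" if surname is None else surname
    if display.upper() == "NULL":
        print(f"row {pk}: treated as missing")  # wrongly flags Mr. Null as well

# Correct pattern: let the database keep NULL and the text 'Null' distinct.
missing = conn.execute("SELECT id FROM people WHERE surname IS NULL").fetchall()
print("rows with a truly missing surname:", missing)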
313,998 | Rightly or wrongly, I'm currently of the belief that I should always try to make my code as robust as possible, even if this means adding in redundant code / checks that I know won't be of any use right now, but they might be x amount of years down the line. For example, I'm currently working on a mobile application that has this piece of code: public static CalendarRow AssignAppointmentToRow(Appointment app, List<CalendarRow> rows)
{
//1. Is rows equal to null? - This will be the case if this is the first appointment.
if (rows == null) {
rows = new List<CalendarRow> ();
}
//2. Is rows empty? - This will be the case if this is the first appointment / some other unknown reason.
if(rows.Count == 0)
{
rows.Add (new CalendarRow (0));
rows [0].Appointments.Add (app);
}
//blah...
} Looking specifically at section two, I know that if section one is true, then section two will also be true. I can't think of any reason as to why section one would be false and section two evaluate true, which makes the second if statement redundant. However, there may be a case in the future where this second if statement is actually needed, and for a known reason. Some people may look at this initially and think I'm programming with the future in-mind, which is obviously a good thing. But I know of a few instances where this kind of code has "hidden" bugs from me. Meaning it's taken me even longer to figure out why function xyz is doing abc when it should actually be doing def . On the other hand, there's also been numerous instances where this kind of code has made it much, much easier to enhance the code with new behaviours, because I don't have to go back and ensure that all the relevant checks are in place. Are there any general rule-of-thumb guidelines for this kind of code?
(I'd also be interested to hear if this would be considered good or bad practise?) N.B: This could be considered similar to this question, however unlike that question, I'd like an answer assuming there are no deadlines. TLDR: Should I go so far as to adding redundant code in order to make it potentially more robust in the future? | As an exercise, first let's verify your logic. Though as we'll see, you have bigger problems than any logical problem. Call the first condition A and the second condition B. You first say: Looking specifically at section two, I know that if section one is true, then section two will also be true. That is: A implies B, or, in more basic terms (NOT A) OR B And then: I can't think of any reason as to why section one would be false and section two evaluate true, which makes the second if statement redundant. That is: NOT((NOT A) AND B) . Apply Demorgan's law to get (NOT B) OR A which is B implies A. Therefore, if both your statements are true then A implies B and B implies A, which means they must be equal. Therefore yes, the checks are redundant. You appear to have four code paths through the program but in fact you only have two. So now the question is: how to write the code? The real question is: what is the stated contract of the method ? The question of whether the conditions are redundant is a red herring. The real question is "have I designed a sensible contract, and does my method clearly implement that contract?" Let's look at the declaration: public static CalendarRow AssignAppointmentToRow(
Appointment app,
List<CalendarRow> rows) It's public, so it has to be robust to bad data from arbitrary callers. It returns a value, so it should be useful for its return value, not for its side effect. And yet the name of the method is a verb, suggesting that it is useful for its side effect. The contract of the list parameter is: A null list is OK A list with one or more elements in it is OK A list with no elements in it is wrong and should not be possible. This contract is insane . Imagine writing the documentation for this! Imagine writing test cases! My advice: start over. This API has candy machine interface written all over it. (The expression is from an old story about the candy machines at Microsoft, where both the prices and the selections are two-digit numbers, and it is super easy to type in "85" which is the price of item 75, and you get the wrong item. Fun fact: yes, I have actually done that by mistake when I was trying to get gum out of a vending machine at Microsoft!) Here's how to design a sensible contract: Make your method useful for either its side effect or its return value, not both. Do not accept mutable types as inputs, like lists; if you need a sequence of information, take an IEnumerable. Only read the sequence; do not write to a passed-in collection unless it is very clear that this is the contract of the method. By taking IEnumerable you send a message to the caller that you are not going to mutate their collection. Never accept nulls; a null sequence is an abomination. Require the caller to pass an empty sequence if that makes sense, never ever null. Crash immediately if the caller violates your contract, to teach them that you mean business, and so that they catch their bugs in testing, not production. Design the contract first to be as sensible as possible, and then clearly implement the contract. That is the way to future-proof a design. Now, I've talked only about your specific case, and you asked a general question. So here is some additional general advice: If there is a fact that you as a developer can deduce but the compiler cannot, then use an assertion to document that fact. If another developer, like future you or one of your coworkers, violates that assumption then the assertion will tell you. Get a test coverage tool. Make sure your tests cover every line of code. If there is uncovered code then either you have a missing test, or you have dead code. Dead code is surprisingly dangerous because usually it is not intended to be dead! The incredibly awful Apple "goto fail" security flaw of a couple years back comes immediately to mind. Get a static analysis tool. Heck, get several; every tool has its own particular specialty and none is a superset of the others. Pay attention to when it is telling you that there is unreachable or redundant code. Again, those are likely bugs. If it sounds like I'm saying: first, design the code well, and second, test the heck out of it to make sure it is correct today, well, that's what I'm saying. Doing those things will make dealing with the future much easier; the hardest part about the future is dealing with all the buggy candy machine code people wrote in the past. Get it right today and the costs will be lower in the future. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/313998",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/143983/"
]
} |
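A rough Python translation of part of the contract the answer recommends: take a read-only sequence rather than a mutable list, refuse None instead of silently repairing it, and return the row that received the appointment. The names come from the question; the row-picking logic is a placeholder.
from typing import Iterable, List

class Appointment:
    def __init__(self, title: str):
        self.title = title

class CalendarRow:
    def __init__(self, index: int):
        self.index = index
        self.appointments: List[Appointment] = []

def assign_appointment(app: Appointment, rows: Iterable[CalendarRow]) -> CalendarRow:
    if app is None or rows is None:
        raise ValueError("pass an empty sequence, never None")
    rows = list(rows)  # read the caller's sequence; never add or remove their rows
    row = rows[0] if rows else CalendarRow(0)  # 'first appointment' handled exactly once
    row.appointments.append(app)
    return row

print(assign_appointment(Appointment("demo"), []).index)  # 0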
314,043 | Using Object Oriented Programming we have the power to create a class inside a class (a nested class), but I have never created a nested class in my 4 years of coding experience. What are nested classes good for? I know that a class can be marked as private if it is nested and that we can access all private members of that class from the containing class.
We could just make the variables private in the containing class itself. So why create a nested class? In which scenarios should nested classes be used, and are they more powerful than other techniques? | The main feature of nested classes is that they can access private members of the outer class while having the full power of a class itself. Also, they can themselves be private, which allows for some pretty powerful encapsulation in certain circumstances. Here we lock the setter completely down to the factory: since the class is private, no consumer can downcast to it and access the setter, and we can completely control what is allowed. public interface IFoo
{
int Foo{get;}
}
public class Factory
{
private class MyFoo : IFoo
{
public int Foo{get;set;}
}
public IFoo CreateFoo(int value) => new MyFoo{Foo = value};
} Other than that it is useful for implementing third-party interfaces in a controlled environment where we can still access private members. If we for example were to provide an instance of some interface to some other object but we don't want our main class to implement it we could let an inner class implement it. public class Outer
{
private int _example;
private class Inner : ISomeInterface
{
Outer _outer;
public Inner(Outer outer){_outer = outer;}
public int DoStuff() => _outer._example;
}
public void DoStuff(){_someDependency.DoBar(new Inner(this)); }
} | {
"source": [
"https://softwareengineering.stackexchange.com/questions/314043",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/218902/"
]
} |
314,066 | I'm designing a RESTful web service using WebApi and was wondering what HTTP responses and response bodies to return when updating / creating objects. For example I can use the POST method to send some JSON to the web service and then create an object. Is it best practice to then set the HTTP status to created (201) or ok (200) and simply return a message such as "New Employee added", or return the object that was sent originally? The same goes for the PUT method. Which HTTP status is best to use and do I need to return the object that was created or just a message? Considering the fact that the user knows what object they are trying to create / update anyway. Any thoughts? Example: Add new Employee: POST /api/employee HTTP/1.1
Host: localhost:8000
Content-Type: application/json
{
"Employee": {
"Name" : "Joe Bloggs",
"Department" : "Finance"
}
} Update existing employee: PUT /api/employee HTTP/1.1
Host: localhost:8000
Content-Type: application/json
{
"Employee": {
"Id" : 1
"Name" : "Joe Bloggs",
"Department" : "IT"
}
} Responses: Response with object created / updated HTTP/1.1 201 Created
Content-Length: 39
Content-Type: application/json; charset=utf-8
Date: Mon, 28 Mar 2016 14:32:39 GMT
{
"Employee": {
"Id" : 1
"Name" : "Joe Bloggs",
"Department" : "IT"
}
} Response with just message: HTTP/1.1 200 OK
Content-Length: 39
Content-Type: application/json; charset=utf-8
Date: Mon, 28 Mar 2016 14:32:39 GMT
{
"Message": "Employee updated"
} Response with just status code: HTTP/1.1 204 No Content
Content-Length: 39
Date: Mon, 28 Mar 2016 14:32:39 GMT | As with most things, it depends. Your tradeoff is ease of use versus network size. It can be very helpful for clients to see the created resource. It may include fields populated by the server, such as the last-creation time. Since you appear to be including the id instead of using HATEOAS, clients will probably want to see the id for the resource they just POSTed. If you don't include the created resource, please do not create an arbitrary message. The 2xx status code and the Location header are enough information for clients to be confident that their request was properly handled. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/314066",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/145109/"
]
} |
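Putting the answer's advice together, a response to the question's POST might look something like the following (the Location value and the Created field are invented for illustration): a 201 status, a Location header pointing at the new resource, and an echo of the created representation including any server-assigned fields.
HTTP/1.1 201 Created
Location: /api/employee/1
Content-Type: application/json; charset=utf-8
{
    "Employee": {
        "Id" : 1,
        "Name" : "Joe Bloggs",
        "Department" : "Finance",
        "Created" : "2016-03-28T14:32:39Z"
    }
}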
314,215 | In this documentation it is mentioned that A commit object may have any number of parents. But from my understanding, the only case where a commit will have more than one parent is when a merge has happened, and in that case there will only be two parents. So my question is: can a commit have more than two parents? If so, when? | You can use git merge to merge more than one commit into your current branch. From man git-merge (or git help merge ): git-merge - Join two or more development histories together The result will be a commit with more than two parents when you do that (git calls this an "octopus merge"). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/314215",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/22027/"
]
} |
314,406 | I'm trying to understand compilation and interpretation, step by step, building up a complete picture. I came across a question while reading this article: http://www.cs.man.ac.uk/~pjj/farrell/comp3.html It says: The next stage of the compiler is called the Parser. This part of the compiler has an understanding of the language's grammar. It is responsible for identifying syntax errors and for translating an error free program into internal data structures that can be interpreted or written out in another language. But I couldn't figure out how the tokenizer can properly tokenize a stream that contains a syntax error. It seems it should get stuck there or pass some wrong information to the parser. Isn't tokenizing also a kind of translation? So how does it get past lexically corrupted lines of code while tokenizing? There is an example of a token inside the above link, under "The Tokenizer" heading. As I understand it, the form of the token suggests that if there is something wrong in the code, the token would be corrupted too. Could you please clarify my misunderstanding? | A tokenizer is just a parser optimization. It's perfectly possible to implement a parser without a tokenizer. A tokenizer (or lexer, or scanner) chops the input into a list of tokens. Some parts of the string (comments, whitespace) are usually ignored. Each token has a type (the meaning of this string in the language) and a value (the string that makes up the token). For example, the PHP source snippet $a + $b could be represented by the tokens Variable('$a'),
Plus('+'),
Variable('$b') The tokenizer does not consider whether a token is possible in this context. For example, the input $a $b + + would happily produce the token stream Variable('$a'),
Variable('$b'),
Plus('+'),
Plus('+') When the parser then consumes these tokens, it will notice that two variables cannot follow each other, and neither can two infix operators. (Note that other languages have different syntaxes where such a token stream may be legal, but not in PHP). A parser may still fail at the tokenizer stage. For example, there might be an illegal character: $a × ½ — 3 A PHP tokenizer would be unable to match this input to its rules, and would produce an error before the main parsing starts. More formally, tokenizers are used when each token can be described as a regular language . The tokens can then be matched extremely efficiently, possibly implemented as a DFA. In contrast, the main grammar is usually context-free and requires more complicated, less performant parsing algorithm such as LALR. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/314406",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/222750/"
]
} |
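To make the answer above concrete, here is a minimal Java sketch of a lexer that knows only the token-level rules and nothing about the grammar; the token names mirror the ones used in the answer, and the regular expression is illustrative rather than PHP's real lexer.
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
class Lexer {
    // Each token type is a regular language; whitespace is matched but dropped.
    private static final Pattern TOKEN =
            Pattern.compile("(?<VARIABLE>\\$[a-zA-Z_][a-zA-Z0-9_]*)|(?<PLUS>\\+)|(?<WS>\\s+)");
    static List<String> tokenize(String input) {
        List<String> tokens = new ArrayList<>();
        Matcher m = TOKEN.matcher(input);
        int pos = 0;
        while (pos < input.length() && m.find(pos) && m.start() == pos) {
            if (m.group("VARIABLE") != null) tokens.add("Variable(" + m.group() + ")");
            else if (m.group("PLUS") != null) tokens.add("Plus(" + m.group() + ")");
            pos = m.end();
        }
        if (pos != input.length()) {
            // A character no token rule can match: the lexer itself reports the error.
            throw new IllegalArgumentException("Unexpected character at position " + pos);
        }
        return tokens;
    }
}
// tokenize("$a + $b")   -> [Variable($a), Plus(+), Variable($b)]
// tokenize("$a $b + +") -> [Variable($a), Variable($b), Plus(+), Plus(+)]
//                          (perfectly happy; only the parser can reject this ordering)
// tokenize("$a × ½")    -> throws here, before any parsing starts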
314,490 | I'm a student of systems engineering, and all my teachers and friends (that actually work in the area) say that it is better to have as much logic as possible implemented in the database (queries, views, triggers, T-SQL , etc.). I think that it's better to have it in the code. Their reasons are: If they need to change the language, almost all logic will be in the
database; therefore the time of implementation will be minimal. Changes in the language are more common than in the database. My reasons are: It is obvious (in the current environment of my country at least) that they do not change the language of the projects that "easily." (I've seen programs that are still in FoxPro , because if it works, there is no need to change it). Programming languages are about functionality, while databases are about data. You can have programming functionality in databases, but I think that it should be limited to the components that affect data. It is easier to implement new requirements (for example: If the customer wants an API). Normally when they use logic in the database, the rest of the logic that is implemented in the code is more spaghetti-like (random functions, for example). Generally, it is more usual to have more programmers than database administrators (DBAs). Which implementation is the best? | See How much business logic should the database implement? for previous discussion. In general, everyone wants things done in the layer they control. Because then they control it. Every database vendor wants people to put as much logic into the database as possible. Because that locks you into the database. The reasoning is that if multiple applications use the same database, they will reuse code. However programmers emphatically disagree. Databases offer poor programming options. Deploying code to databases is hard. Databases lack basic tools for revision control, interactive editing, deployment and unit testing. Stored procedures tend to involve horrible to debug action at a distance. It has become less common to have multiple applications hit the same database. And if you ever have to make something scale, the one bottleneck that is hardest to fix is your database. My bias is clear. I'm a programmer. But I've been programming for close to 20 years, mostly as a back end programmer who is responsible for data. I've seen the argument many times for moving logic into the database. I've seen systems that did it, and systems which avoided it. I've had to migrate databases, migrate code bases, etc, etc, etc. The worst messes have always been when business logic was in the database. They were always the hardest ones to fix. And I can say that while I've many times encountered the claim that "we moved logic into the database for performance", performance is almost always better with a clean normalized data model, good indexes, a caching layer in front of the database, and sane algorithms implemented in a modern programming language. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/314490",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/222857/"
]
} |
314,537 | Why does :nth-child() iterate from one instead of zero? As shown in this example . Why does it select the first element and not the second when p :nth-child(1) | CSS uses ordinal numbering when identifying elements in an ordered collection. For example, :first-child matches the first child element of a parent, @page :first matches the first page, ::first-line matches the first line in a block, and ::first-letter matches the first letter of the first line of a block. The :nth-child() pseudo-class is a generalization of the :first-child selector, where you read n as the provided number. So :nth-child(1) means 1st child or first child, :nth-child(2) means 2nd child, and so forth. So it is natural that :nth-child(1) selects the first child, and it would be highly confusing and illogical if it selected the second child! CSS uses ordinal numbering because this is how humans naturally talk about such elements. You say the first paragraph of a text, not the zeroth paragraph or paragraph-zero. The reason you even ask the question is probably because you are a programmer, and many programming languages index array and list items from 0. The reason for this is that in low level languages like C, an array is really a pointer to the memory address of the first item and the index is an offset relative to this pointer. So array[0] means address of first item, array[1] means address of the first item plus the size of 1 item, i.e. second item and so on. Many higher-level languages which do not support pointer arithmetic directly have retained the 0-based indexing for consistency and familiarity. For example, all languages with C-derived syntax - including JavaScript, even though arrays in JavaScript are implemented in a completely different way. But this is not at all universal - languages like COBOL, Fortran, Lua, Julia and some Basic's use 1-based indexing. (Visual Basic naturally chose the worst of both worlds by making it configurable.) So it is definitely not like every other language use 0-based indexing. For what it's worth, XPath and XQuery also use 1-based indexing. While most programmers will be familiar with both 1 and 0-based indexing, normal people will naturally count from 1, and CSS is a language designed not just for programmers but for designers and graphic professionals, so it is natural to choose 1-based indexing. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/314537",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/221997/"
]
} |
314,599 | I've been wondering why XML has an L in its name. By itself, XML doesn't "do" anything. It's just a data storage format, not a language! Languages "do" things. The way you get XML to "do" stuff, to turn it into a language proper, is to add xmlns attributes to its root element. Only then does it tell its environment what it's about. One example is XHTML. It's active, it has links, hypertext, styles etc, all triggered by the xmlns . Without that, an XHTML file is just a bunch of data in markup nodes. So why then is XML called a language? It doesn't describe anything, it doesn't interpret, it just is. Edit: Maybe my question should have been broader. Since the answer is currently "because XML was named after SGML, which was named after GML, etc" the question should have been, why are markup languages (like XML) called languages? Oh, and WRT the close votes: no, I'm not asking about the X. I'm asking about the L! | The real answer is XML has an L in the name because a guy named Raymond L orie was among the designers of the first "markup language" at IBM in the 1970'ies. The developers had to find a name for the language so they chose GML because it was the initials of the three developers (Goldfarb, Mosher and Lorie). They then created the backronym Generalized Markup Language . This later became standardized as SGML ( Standardized General Markup Language ), and when XML was created, the developers wanted to retain the ML-postfix to indicate the family relationship to SGML, and they added the X in front because they thought it looked cool. (Even though it doesn't actually make sense - XML is a meta language which allows you to define extensible languages, but XML is not really extensible itself.) As for your second question if XML can legitimately be called a language: Any structured textual (or even binary) format which can be processed computationally can be called a language. A language doesn't "do" anything as such, but some software might process input in the language and "do" something based on it. You note that XML is a "storage format" which is true, but a textual storage format can be called a language, these term are not mutually exclusive. Programming languages are a subset of languages. E.g. HTML and CSS are languages but not programming languages , while JavaScript is a real programming language. That said, there is no formal definition of programming language either, and there is a large grey zone of languages which could be called either data formats or programming languages depending on your point of view. Given this, XML is clearly a language. just not a programming language - though it can be used to define programming languages like XSLT. Your point about namespaces is irrelevant. Namespaces are an optional feature of XML and do not change the semantics of an XML vocabulary. It is just needed to disambiguate element names if the format may contain multiple vocabularies. Edit: reinierpost pointed out that you might have meant something different with the question than what I understood. Maybe you meant that specific vocabularies like XHTML, RSS, XSLT etc. are languages because they associate elements and attributes with particular semantics, but the XML standard itself does not define any semantics for specific elements and attributes, so it does not feel like a "real language". My answer to this would be that XML does define both syntax and semantics, it just defines it at a different level. 
For example it defines the syntax of elements and attributes and rules about how to process them. XML is a "metalanguage" which is still a kind of language (just like metadata is still data!). As an example EBNF is also clearly a language, but its purpose is to define the syntax of other languages, so it is also a metalanguage. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/314599",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/44898/"
]
} |
314,646 | Having Circle extend Ellipse breaks the Liskov Substition Principle , because it modifies a postcondition: namely, you can set X and Y independently to draw an ellipse, but X must always equal Y for circles. But isn't the problem here caused by having Circle be the subtype of an Ellipse? Couldn't we reverse the relationship? So, Circle is the supertype - it has a single method setRadius . Then, Ellipse extends Circle by adding setX and setY . Calling setRadius on Ellipse would set both X and Y - meaning the postcondition on setRadius is maintained, but you can now set X and Y independently through an extended interface. | But isn't the problem here caused by having Circle be the subtype of an Ellipse? Couldn't we reverse the relationship? The problem with this (and the square/rectangle problem) is falsely assuming a relationship in one domain (geometry) holds in another (behaviour) A circle and ellipse are related if you are viewing them through the prism of geometrical theory. But that is not the only domain you can look at. Object orientated design deals with behaviour . The defining characteristic of an object is the behaviour the object is responsible for. And in the domain of behaviour, a circle and ellipse have such different behaviour that it is probably better not to think of them as related at all. In this domain, an ellipse and a circle have no significant relationship. The lesson here is to choose the domain that makes the most sense for OOD, not to try and shoehorn in a relationship simply because it exists in a different domain. The most common real-world example of this mistake is to assume objects are related (or even the same class) because they have similar data even if their behaviour is very different. This is a common problem when you start constructing objects "data first" by defining where the data goes. You can end up with a class that are related via data that have completely different behaviour. For example, both the payslip and the employee objects might have a "gross salary" attribute, but an employee is not a type of payslip and a payslip is not a type of employee. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/314646",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/13421/"
]
} |
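A small Java sketch of why the reversed hierarchy proposed in the question still fails behaviourally, which illustrates the answer's point that the two shapes are not usefully related as types; the class bodies and the getRadius() accessor are assumptions made only for this illustration.
class Circle {
    private double radius;
    public void setRadius(double radius) { this.radius = radius; }
    public double getRadius() { return radius; }
}
class Ellipse extends Circle {
    private double x, y;
    @Override
    public void setRadius(double radius) { x = radius; y = radius; super.setRadius(radius); }
    public void setX(double x) { this.x = x; }   // quietly abandons the "one radius" idea
    public void setY(double y) { this.y = y; }
    public double getX() { return x; }
    public double getY() { return y; }
}
class Client {
    // Written against Circle, so it relies on the Circle contract:
    // a single radius describes the whole shape.
    static double area(Circle c) {
        return Math.PI * c.getRadius() * c.getRadius();
    }
    public static void main(String[] args) {
        Ellipse e = new Ellipse();
        e.setRadius(2.0);
        e.setX(10.0);                 // still a perfectly legal Ellipse...
        System.out.println(area(e)); // ...but the Circle-based client now reports the
                                     // area of a 2x2 circle, not of the 10x2 ellipse.
    }
}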
314,796 | I have a program that needs to generate temporary files. It is written for cluster machines. If I saved those files to a system-wide temporary directory (eg: /tmp ), some users complained the program failed because they didn't have proper access to /tmp. But if I saved those files to the working directory, those users also complained they didn't want to see those mysterious files. Which one is a better practice? Should I insist that saving to /tmp is the right approach and defend any failure as "working as intended" (ie. ask your admin for proper permission/access)? | Temporary files have to be stored into the operating system temporary directory for several reasons: The operating system makes it very easy to create those files while ensuring that their names would be unique . Most backup software knows what are the directories containing temporary files, and skips them. If you use the current directory, it could have an important effect on the size of incremental backups if backups are done frequently. The temporary directory may be on a different disk, or in RAM, making the read-write access much, much faster . Temporary files are often deleted during the reboot (if they are in a ramdisk, they are simply lost). This reduces the risk of infinite growth if your app is not always removing the temp files correctly (for instance after a crash). Cleaning temp files from the working directory could easily become messy if the files are stored together with application and user files. You can mitigate this problem by creating a separate directory within the current directory, but this could lead to another problem: The path length could be too long on some platforms. For instance, on Windows, path limits for some APIs, frameworks and applications are terrible , which means that you can easily hit such limit if the current directory is already deep in the tree hierarchy and the names of your temporary files are too long. On servers, monitoring the growth of the temporary directory is often done straight away. If you use a different directory, it may not be monitored, and monitoring the whole disk won't help to easily figure out that it's the temp files which take more and more place. As for the access denied errors, make sure you let the operating system create a temporary file for you. The operating system may, for instance, know that for a given user, a directory other than /tmp or C:\Windows\temp should be used; thus, by accessing those directories directly, you may indeed encounter an access denied error. If you get an access denied even when using the operating system call, well, it simply means that the machine was badly configured; this was already explained by Blrfl . It's up to the system administrator to configure the machine; you don't have to change your application. Creating temporary files is straightforward in many languages. A few examples: Bash: # The next line will create a temporary file and return its path.
path="$(mktemp)"
echo "Hello, World!" > "$path" Python: import tempfile
# Creates a file and returns a tuple containing both the handle and the path.
handle, path = tempfile.mkstemp()
with open(handle, "w") as f:
f.write("Hello, World!"); C: #include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
...
char temp_file[] = "/tmp/tmp.XXXXXX";
int fd = mkstemp(temp_file);
dprintf(fd, "Hello World!!!\n");
close(fd); C#: // Creates a file and returns the path.
var path = Path.GetTempFileName();
File.WriteAllText(path, "Hello, World!"); PHP: # Creates a file and returns the handle.
$temp = tmpfile();
fwrite($temp, "Hello, World!");
fclose($temp); Ruby: require "tempfile"
# Creates a file and returns the file object.
file = Tempfile.new ""
file << "Hello, World!"
file.close Note that in some cases, such as in PHP and Ruby, the file is removed when the handle is closed. That's an additional benefit of using the libraries bundled with the language/framework. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/314796",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12595/"
]
} |
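For completeness, the same pattern in Java, which the answer above does not list; java.nio.file has been available since Java 7 (Files.writeString since Java 11), and as with the C and C# examples the file is not deleted automatically.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
public class TempFileDemo {
    public static void main(String[] args) throws IOException {
        // The operating system's temporary directory (java.io.tmpdir) is used,
        // and the unique name is generated for you.
        Path path = Files.createTempFile("demo-", ".tmp");
        Files.writeString(path, "Hello, World!");
        // Unlike the PHP and Ruby examples, nothing is removed automatically;
        // schedule or perform the cleanup explicitly.
        path.toFile().deleteOnExit();
    }
}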
314,844 | Our codebase is old and new programmers, like myself, quickly learn to do it the way it's done for the sake of uniformity. Thinking that we have to start somewhere, I took it upon myself to refactor a data holder class as such: Removed setter methods and made all fields final (I take " final is good" axiomatically). The setters were used only in the constructor, as it turns out, so this had no side-effects. Introduced a Builder class The Builder class was necessary because the constructor (which is what prompted refactoring in the first place) spans about 3 lines of code. It has a lot of parameters. As luck would have it, a team mate of mine was working on another module and happened to need the setters, because the values he required became available at different points in the flow. Thus the code looked like this: public void foo(Bar bar){
//do stuff
bar.setA(stuff);
//do more stuff
bar.setB(moreStuff);
} I argued that he should use the builder instead, because getting rid of setters allows the fields to remain immutable (they've heard me rant about immutability before), and also because builders allow object creation to be transactional. I sketched out the following pseudocode: public void foo(Bar bar){
try{
bar.setA(a);
//enter exception-throwing stuff
bar.setB(b);
} catch (Exception e) { }
} If that exception fires, bar will have corrupt data, which would have been avoided with a builder: public Bar foo(){
Builder builder = new Builder();
try{
builder.setA(a);
//dangerous stuff;
builder.setB(b);
//more dangerous stuff
builder.setC(c);
return builder.build();
} catch (Exception e) { }
return null;
} My teammates retorted that the exception in question will never fire, which is fair enough for that particular area of code, but I believe is missing the forest for the tree. The compromise was to revert to the old solution, namely use a constructor with no parameters and set everything with setters as needed. The rationale was that this solution follows the KISS principle, which mine violates. I'm new to this company (less than 6 months) and fully aware that I lost this one. The question(s) I have are: Is there another argument for using Builders instead of the "old way"? Is the change I propose even really worth it? but really, Do you have any tips for better presenting such arguments when advocating trying something new? | Thinking that we have to start somewhere, I took it upon myself to
refactor a data holder class This is where it went wrong - you made a major architectural change to the codebase, such that it became very different from what was expected. You surprised your colleagues. Surprise is only good if it involves a party; the principle of least surprise is golden (even if it involves parties, then I'd know to wear clean underpants!). Now, if you thought this change was worthwhile, it would have been much better to sketch out the pseudocode showing the change, present it to your colleagues (or at least whoever has the role closest to architect), and see what they thought before doing anything.
As it is, you went rogue and treated the whole codebase as yours to play with as you saw fit. A player on the team, not a team player! I might be being a little harsh on you, but I have been in the opposite place: checking out some code and thinking "WTF happened here?!", only to find the new guy (it's nearly always the new guy) decided to change a load of things based on what he thinks the "right way" should be. It screws up the SCM history too, which is another major problem. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/314844",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/83924/"
]
} |
314,861 | Suppose that we are modelling a form using DDD; the form may have certain kind of business rules associated with it - perhaps you will need to specify an income if you are not a student, and you are required to list your children if you indicate that you are married. And if you specified a country, then it should have a valid country. Does this kind of validation lives in the domain or application layer? Some other issues I was considering: Certain frameworks, such as Laravel, provides validation rules that can validate input before a request hits the controller. Does it break DDD if validation is done at that level? For cases like determining whether the country is valid, usually I will just query a database table of all the countries in the world. However, in DDD, this is likely (from my understanding) to be done on the domain layer. Is the domain layer allowed to access the DB, or must I use a non-SQL search to determine a valid country? Is it necessary to validate the input both at the application, and domain layer? | Does this kind of validation lives in the domain or application layer? Application. The magic search term you want is anti corruption layer Typically, the message received by your application will be some flavor of DTO. Your anti corruption layer will typically create value types that the domain will recognize. The actual command dispatched to the domain model will be expressed in terms of validated value types. Example: a DepositMoney command would likely include an amount, and a currency type. The DTO representation would probably express the amount as an integer, and the currency code as a string. The anti corruption layer would convert the DTO into a Deposit value type, which would include a validated Amount (which must be non negative) and a validated CurrencyCode (which must be one of the supported codes in the domain). Having successfully parsed the command into types the domain model understands, the command is executed in the domain, which may still reject the command on the grounds that it would violate the business invariant (the account doesn't exist yet, the account is blocked, this particular account isn't allowed to use that currency? etc). In other words, the business validation is going to happen in the domain model, after the anti corruption layer validates the inputs. The implementation of the validation rules will normally live either in the constructor of the value type, or within the factory method used to construct the value type. Basically, you restrict the construction of the objects so that they are guaranteed to be valid, isolating the logic in one place, and invoking it at the process boundaries. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/314861",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/10123/"
]
} |
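A minimal Java sketch of the deposit example from the answer above; the class and method names, the set of supported currency codes, the DTO shape, and the choice to treat the DTO integer as cents are all illustrative assumptions.
import java.math.BigDecimal;
import java.util.Set;
final class CurrencyCode {
    private static final Set<String> SUPPORTED = Set.of("USD", "EUR", "GBP");
    private final String code;
    private CurrencyCode(String code) { this.code = code; }
    // Input validation at the boundary: an invalid code can never become a CurrencyCode.
    static CurrencyCode of(String raw) {
        if (raw == null || !SUPPORTED.contains(raw)) {
            throw new IllegalArgumentException("Unsupported currency code: " + raw);
        }
        return new CurrencyCode(raw);
    }
    String value() { return code; }
}
final class Deposit {
    final BigDecimal amount;
    final CurrencyCode currency;
    Deposit(BigDecimal amount, CurrencyCode currency) {
        if (amount.signum() < 0) {
            throw new IllegalArgumentException("Amount must be non-negative");
        }
        this.amount = amount;
        this.currency = currency;
    }
}
class DepositMoneyTranslator {
    // The anti-corruption layer: turns the raw DTO values into a command the domain understands.
    Deposit translate(long amountInCents, String currencyCode) {
        BigDecimal amount = BigDecimal.valueOf(amountInCents, 2);   // e.g. 1050 -> 10.50
        return new Deposit(amount, CurrencyCode.of(currencyCode));
    }
}
// Business rules (account missing, account blocked, currency not allowed for this
// account, ...) are still enforced later, inside the domain model itself.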
315,047 | Often times I see questions on the Hot Network Questions list like this that basically ask "how do I draw this arbitrary shape in CSS". Invariably the answer is a couple of blocks of CSS or SVG data with a bunch of seemingly random hard-coded values that form the requested shape. When I look at this, I think 'Yuck! What an ugly block of code. I hope I never see this type of stuff in my project'. However, I see these types of Q&As quite frequently and with a high number of upvotes, so clearly the community doesn't think they are bad. But why is this acceptable? Coming from my back-end experience this makes no sense to me. So why is it OK for CSS/SVG? | Firstly, magic values are avoided in programming by using variables or constants. CSS does not support variables, so even if magic values were frowned on, you don't have much of a choice (except using a preprocessor as SASS, but you wouldn't do that for a single snippet). Secondly, values might not be as magic in a domain specific language like CSS. In programming, a magic number is a number where the meaning or intent is not obvious. If a line says: x += 23; You will ask "why 23"? What is the reasoning? A variable could clarify the intent: x += defaultHttpTimeoutSeconds; This is because a lone number could mean absolutely anything in general purpose code. But consider CSS: background-color: #ffffff;
font-size: 16px; The color and size are not magic, because the meaning is perfectly clear from the context. And the reason for choosing the specific values is simply that the designers thought they would look good. Introducing a variable would not help anything, since you would just name it something like " defaultBackgroundColor " and " defaultFontSize " anyway. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/315047",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/136927/"
]
} |
315,128 | In this article by Alex Papadimoulis, you can see this snippet: private void attachSupplementalDocuments()
{
if (stateCode == "AZ" || stateCode == "TX") {
//SR008-04X/I are always required in these states
attachDocument("SR008-04X");
attachDocument("SR008-04XI");
}
if (ledgerAmnt >= 500000) {
//Ledger of 500K or more requires AUTHLDG-1A
attachDocument("AUTHLDG-1A");
}
if (coInsuredCount >= 5 && orgStatusCode != "CORP") {
//Non-CORP orgs with 5 or more co-ins require AUTHCNS-1A
attachDocument("AUTHCNS-1A");
}
} I really don't understand this article. I quote: If every business rule constant was stored in some configuration file, life would be much [more ( sic )] difficult for everyone maintaining the software: there’d be a lot of code files that shared one, big file (or, the converse, a whole lot of tiny configuration files); deploying changes to the business rules require not new code, but manually changing the configuration files; and debugging is that much more difficult. This is an argument against having the "500000" constant integer in a configuration file, or the "AUTHCNS-1A" and other string constants. How can this be a bad practice? In this snippet, "500000" is not a number. It's not, for example, the same as: int doubleMe(int a) { return a * 2;} where 2, is a number that needs not be abstracted. Its use is obvious, and it does not represent something that may be reused later on. On the contrary, "500000" is not simply a number. It's a significant value, one that represents the idea of a breakpoint in functionality. This number could be used in more than one place, but it's not the number that you're using; it's the idea of the limit/borderline, below which one rule applies, and above which another. How is referring to it from a configuration file, or even a #define , const or whatever your language provides, worse than including its value? If later on the program, or some other programmer, also requires that borderline, so that the software makes another choice, you're screwed (because when it changes, nothing guarantees you that it will change in both files). That's clearly worse for debugging. In addition, if tomorrow, the government demands "From 5/3/2050, you need to add AUTHLDG-122B instead of AUTHLDG-1A", this string constant is not a simple string constant. It's one that represents an idea; it's just the current value of that idea (which is "the thing that you add if the ledger is above 500k"). Let me clarify. I'm not saying that the article is wrong; I just don't get it; maybe it's not too well explained (at least for my thinking). I do understand that replacing every possible string literal or numerical value with a constant, define, or configuration variable, is not only not necessary, but overcomplicates things, but this particular example does not seem to fall under this category. How do you know that you will not need it later on? Or someone else for that matter? | The author is warning against premature abstraction. The line if (ledgerAmt > 500000) looks like the kind of business rule that you would expect to see for large complex business sytems whose requirements are incredibly complex yet precise and well-documented. Typically those kinds of requirements are exceptional/edge cases rather than usefully reusable logic. Those requirements are typically owned and maintained by business analysts and subject matter experts, rather than by engineers (Note that 'ownership' of requirements by Business Analysts/experts in these cases typically occurs where developers working in specialist fields don't have sufficient domain expertise; although I would still expect full communication/cooperation between developers and the domain experts to protect against ambiguous or poorly written requirements.) 
When maintaining systems whose requirements are packed full of edge-cases and highly complex logic, there is usually no way to usefully abstract that logic or make it more maintainable; attempts to try building abstractions can easily backfire - not just resulting in wasted time, but also resulting in less maintainable code. How is referring to it from a config file, or even a #define, const or whatever your language provides, worse than including its value? If later on the program, or some other programmer, also requires that borderline, so that the software makes another choice, you're screwed (because when it changes, nothing guarantees you that it will change in both files). That's clearly worse for debugging. This kind of code tends to be guarded by the fact that the code itself probably has a one-to-one mapping to requirements; i.e. when a developer knows that the 500000 figure appears twice in the requirements, that developer also knows that it appears twice in the code. Consider the other (equally likely) scenario where 500000 appears in multiple places in the requirements document, but the Subject Matter Experts decide to only change one of them; there you have an even worse risk that somebody changing the const value might not realise the 500000 is used to mean different things - so the developer changes it in the one and only place he/she finds it in the code, and ends up breaking something which they didn't realise they had changed. This scenario happens a lot in bespoke legal/financial software (e.g. insurance quotation logic) - people who write such documents aren't engineers, and they have no problem copy+pasting entire chunks of the spec, modifying a few words/numbers, but leaving most of it the same. In those scenarios, the best way to deal with copy-paste requirements is to write copy-paste code, and to make the code look as similar to the requirements (including hard-coding all the data) as possible. The reality of such requirements is that they don't usually stay copy+paste for long, and the values sometimes change on a regular basis, but they often don't change in tandem, so trying to rationalise or abstract those requirements out or simplify them any way ends up creating more of a maintenance headache than just translating requirements verbatim into code. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/315128",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/207585/"
]
} |
315,252 | I have used Git at my past two companies for version control. It seems from what I've heard that about 90% of companies use Git over other version control systems. One of the biggest selling points of Git is that it is decentralized, i.e. all repositories are equal; there is no central repository/ source of truth. This was a feature Linus Torvalds championed. But it seems that every company used Git in a centralized manner, much like one would use SVN or CVS. There is always a central repository on a server (usually on GitHub) that people pull from and push to. I have never seen or heard of (in my admittedly limited experience) people using Git in the truly decentralized manner in which it was intended, i.e. pushing and pulling to other colleagues repositories as they saw fit. My questions are: Why don't people use a distributed workflow for Git in practice? Is the ability to work in a distributed manner even important to modern version control, or does it just sound nice? Edit I realized I didn't get across the correct tone in my original question. It sounded like I was asking why anyone would work in a centralized manner when a distributed version control system (DVCS) was so obviously superior. In actuality, what I meant to say was, I don't see any benefits to DVCS at all . Yet I often hear people preaching its superiority, while the real-world seems to agree with my view. | Ahh, but in fact you are using git in a decentralized manner! Let us compare git's predecessor in mindshare, svn. Subversion had only one "repo", one source of truth. When you did a commit, it was to a single, central repo, to which every other developer was committing as well. This sort of worked, but it led to numerous problems, the biggest one being the dreaded merge conflict . These turned out to be anywhere from annoying to nightmarish to resolve. And with one source of truth, they had a nasty habit of bringing everyone's work to a screeching halt until they were resolved. Merge conflicts certainly exist with git, but they are not work-stopping events and are much easier and faster to resolve; they generally affect only the developers involved with the conflicting changes, rather than everyone. Then there is the whole single-point-of-failure, and the attendant problems that brings. If your central svn repo dies somehow, you're all screwed until it can be restored from backup, and if there were no backups, you're all doubly screwed. But if the "central" git repo dies, you can restore from backup, or even from one of the other copies of the repo which are on the CI server, developers' workstations, etc. You can do this precisely because they are distributed, and each developer has a first-class copy of the repo. On the other hand, since your git repo is a first-class repo in its own right, when you commit, your commits go to your local repo. If you want to share them with others, or to the central source of truth, you must explicitly do this with a push to a remote. Other developers can then pull down those changes when it's convenient for them, rather than having to check svn constantly to see if someone's done something that will screw them up. The fact that, instead of pushing directly to other developers, you push changes to them indirectly via another remote repo, doesn't matter much. The important part from our perspective is that your local copy of the repo is a repo in its own right. In svn, the central source of truth is enforced by the design of the system. 
In git, the system doesn't even have this concept; if there is a source of truth, it is decided externally. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/315252",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/137068/"
]
} |
315,454 | I've been working on a multi-threaded JavaScript runtime implementation for the past week. I have a proof of concept made in C++ using JavaScriptCore and boost. The architecture is simple: when the runtime finishes evaluating the main script it launches and joins a thread-pool, which begins picking tasks from a shared priority queue, if two tasks try to access a variable concurrently it gets marked atomic and they contend for access. The problem is that when I show this design to a JavaScript programmer I get extremely negative feedback, and I have no idea why. Even in private, they all say that JavaScript is meant to be single threaded, that existing libraries would have to be rewritten, and that gremlins will spawn and eat every living being if I continue working on this. I originally had a native coroutine implementation (using boost contexts) in place as well, but I had to ditch it (JavaScriptCore is pedantic about the stack), and I didn't want to risk their wrath so I decided against mentioning it. What do you think? Is JavaScript meant to be single threaded, and should it be left alone? Why is everyone against the idea of a concurrent JavaScript runtime? Edit: The project is now on GitHub , experiment with it yourself and let me know what you think. The following is a picture of promises running on all CPU cores in parallel with no contention: | 1) Multithreading is extremely hard, and unfortunately the way you've presented this idea so far implies you're severely underestimating how hard it is. At the moment, it sounds like you're simply "adding threads" to the language and worrying about how to make it correct and performant later. In particular: if two tasks try to access a variable concurrently it gets marked atomic and they contend for access. ... I agree that atomic variables won't solve everything, but working on a solution for the synchronisation problem is my next goal. Adding threads to Javascript without a "solution for the synchronisation problem" would be like adding integers to Javascript without a "solution for the addition problem". It's so fundamental to the nature of the problem that there's basically no point even discussing whether multithreading is worth adding without a specific solution in mind, no matter how badly we might want it. Plus, making all variables atomic is the sort of thing that's likely to make a multithreaded program perform worse than its singlethreaded counterpart, which makes it even more important to actually test performance on more realistic programs and see if you're gaining anything or not. It's also not clear to me whether you're trying to keep threads hidden from the node.js programmer or if you plan on exposing them at some point, effectively making a new dialect of Javascript for multithreaded programming. Both options are potentially interesting, but it sounds like you haven't even decided which one you're aiming for yet. So at the moment, you're asking programmers to consider switching from a singlethreaded environment to a brand new multithreaded environment that has no solution for the synchronisation problem and no evidence it improves real-world performance, and seemingly no plan for resolving those issues. That's probably why people aren't taken you seriously. 2) The simplicity and robustness of the single event loop is a huge advantage. Javascript programmers know that the Javascript language is "safe" from race conditions and other extremely insidious bugs that plague all genuinely multithreaded programming. 
The fact that they need strong arguments to convince them to give up that safety does not make them closed-minded, it makes them responsible. Unless you can somehow retain that safety, anyone who might want to switch to a multithreaded node.js would probably be better off switching to a language like Go that's designed from the ground up for multithreaded applications. 3) Javascript already supports "background threads" (WebWorkers) and asynchronous programming without directly exposing thread management to the programmer. Those features already solve a lot of the common use cases that affect Javascript programmers in the real world, without giving up the safety of the single event loop. Do you have any specific use cases in mind that these features don't solve, and that Javascript programmers want a solution for? If so, it'd be a good idea to present your multithreaded node.js in the context of that specific use case. P.S. What would convince me to try switching to a multithreaded node.js implementation? Write a non-trivial program in Javascript/node.js that you think would benefit from genuine multithreading. Do performance tests on this sample program in normal node and your multithreaded node. Show me that your version improves runtime performance, responsiveness and usage of multiple cores to a significant degree, without introducing any bugs or instability. Once you've done that, I think you'll see people much more interested in this idea. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/315454",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/217779/"
]
} |
315,520 | I've encountered many people who are dogmatically against anything which can be considered "optimization" in the general English-language sense of the word, and they very often quote verbatim the (partial) quote "premature optimization is the root of all evil" as a justification for their stance, implying that they interpret whatever I'm talking about to be "premature optimization". However, these views are sometimes so ridiculously entrenched that they dismiss pretty much any kind of algorithmic or data-structure deviations from the purest "naive" implementation... or at least any deviations from the way they've done things before. How can one approach people like this in a way to make them "open their ears" again after they shut down from hearing about "performance" or "optimization"? How do I discuss a design/implementation topic which has an impact on performance without having people instantly think: "This guy wants to spend two weeks on ten lines of code?" Now, the stance of whether "all optimization is premature and therefore evil" or not has already been covered here as well as in other corners of the Web , and it has already been discussed how to recognize when optimization is premature and therefore evil , but unfortunately there are still people in the real world who are not quite as open to challenges to their faith in Anti-Optimization. Previous attempts A few times, I've tried supplying the complete quote from Donald Knuth in order to explain that "premature optimization is bad" ↛ "all optimization is bad": We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%. However, when supplying the entire quote, these people sometimes actually become more convinced that what I'm doing is Premature Optimization™ and dig in and refuse to listen. It's almost as if the word "optimization" scares them: On a couple of occasions, I was able to propose actual performance-improving code changes without them being vetoed by simply avoiding the use of the word "optimiz(e|ation)" (and "performance" as well -- that word is scary too) and instead using some expression like "alternative architecture" or "improved implementation". For this reason, it really seems like this truly is dogmatism and not them in fact evaluating what I say critically and then dismissing it as not necessary and/or too costly. | It seems you are looking for shortcuts not to try out the "purest naive implementation" first, and directly implement a "more sophisticated solution because you know beforehand that the naive implementation will not do it". Unfortunately, this will seldom work — when you do not have hard facts or technical arguments to prove that the naive implementation is or will be too slow, you are most probably wrong, and what you are doing is premature optimization. And trying to argue with Knuth is the opposite of having a hard fact. In my experience, you will either have to bite the bullet and try the "naive implementation" first (and will probably be astonished how often this is fast enough), or you will at least make a rough estimation about the running time, like: "The naive implementation will be O(n³), and n is bigger than 100.000; that will run some days, while the not-so-naive implementation will run in O(n), which will take only a few minutes" . Only with such arguments at hand you can be sure your optimization is not premature. 
There is IMHO only one exception to this: when the faster solution is also the simpler and cleaner one, then you should use the faster solution right from the start. The standard example is using a dictionary instead of a list to avoid unnecessary loop code for lookups, or using a good SQL query which gives you exactly the one result record you need, instead of a big result set which has to be filtered afterwards in code. If you have such a case, do not argue about performance - the performance might be an additional, but most probably irrelevant, benefit, and when you mention it, people might be tempted to use Knuth against you. Argue about readability, shorter code, cleaner code, maintainability - not in order to "mask" anything, but because those (and only those) are the correct arguments here. In my experience, the latter case is rare - the more typical case is that one can first implement a simple, naive solution which is easier to understand and less error prone than a more complicated, but probably faster, one. And of course, you should know the requirements and the use case well enough to know what performance is acceptable, and when things become "too slow" in the eyes of your users. In an ideal world, you would get a formal performance spec from your customer, but in real-world projects, required performance is often a grey area, something your users will only tell you about when they notice that the program behaves "too slow" in production. And often, that user feedback is the only workable way of finding out when something is too slow, and then you do not need to cite Knuth to convince your teammates that their "naive implementation" was not sufficient. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/315520",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/105784/"
]
} |
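An illustration in Java of the dictionary-versus-list case mentioned in the answer above; the country-code lookup is an invented example. The hash-based version is argued for on readability grounds, and the better complexity comes along for free.
import java.util.HashSet;
import java.util.List;
import java.util.Set;
class CountryCodes {
    // Hand-rolled linear scan: more code at every call site, O(n) per lookup.
    static boolean containsNaive(List<String> codes, String wanted) {
        for (String code : codes) {
            if (code.equals(wanted)) {
                return true;
            }
        }
        return false;
    }
    // Simpler to read and O(1) on average: the rare case where the faster solution
    // is also the cleaner one, so it can be used from the start without any
    // performance argument at all.
    private final Set<String> codes = new HashSet<>();
    CountryCodes(List<String> initial) { codes.addAll(initial); }
    boolean contains(String wanted) { return codes.contains(wanted); }
}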
315,583 | I have a class, lets call it Calculator , with a method like this: public double[] performCalculation(double[] someInData) This method can generate a number of non fatal warnings (represented as an array of strings with warning names, such as InDataOutOfBounds ). What is the most elegant and javaesque way to return these warnings to the caller? I see a few possibilities, none that I am particularly fond of: Return an instance of a custom class containing both the result of the calculation and the array of warings. Let performCalculation accept an additional argument String[] warnings that it populates with the warnings. This might be a no go, as it seems I can not resize the passed array, and there is no way of knowing in advance how many warnings there will be. Keep performCalculation as it is, but add a method String[] getWarnings() to Calculator , returning the warnings from the latest calculation. EDIT: I am aware of this question. I would argue that this is not a duplicate, as the top answer to that question says that the answer is depending on the situation. This is a more specific case, that can get a more specific answer. | Yes. Do this. It's Java, do not fear classes. This is what classes are for. Really bad. It is slightly less bad with a Collection instead of an array, but you're still asking for ad-hoc ill-specified semantics that will never be used quite right. Can work if you are absolutely, positively certain that you will never, ever use Calculator except in a single-flow-of-control situation. Unfortunately, parallelizing things is one of the more common changes required after the fact. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/315583",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/224262/"
]
} |
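A minimal sketch in Java of option 1, which the answer above recommends; the result-type name, the placeholder bounds check, and the use of plain strings for warnings simply follow the question and are otherwise arbitrary.
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
final class CalculationResult {
    private final double[] values;
    private final List<String> warnings;
    CalculationResult(double[] values, List<String> warnings) {
        this.values = values;
        this.warnings = Collections.unmodifiableList(new ArrayList<>(warnings));
    }
    public double[] getValues() { return values; }
    public List<String> getWarnings() { return warnings; }
    public boolean hasWarnings() { return !warnings.isEmpty(); }
}
class Calculator {
    public CalculationResult performCalculation(double[] someInData) {
        List<String> warnings = new ArrayList<>();
        double[] result = new double[someInData.length];
        for (int i = 0; i < someInData.length; i++) {
            if (someInData[i] < 0) {                 // placeholder bounds check
                warnings.add("InDataOutOfBounds");
            }
            result[i] = someInData[i];               // placeholder for the real work
        }
        return new CalculationResult(result, warnings);
    }
}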
315,703 | A good quarter of a century ago when I was learning C++, I was taught that interfaces should be forgiving and as far as possible not care about the order that methods were called since the consumer may not have access to the source or documentation in lieu of this. However, whenever I've mentored junior programmers and senior devs have overheard me, they've reacted with astonishment which has got me wondering whether this was really a thing or if it has just gone out of vogue. As clear as mud? Consider an interface with these methods (for creating data files): OpenFile
SetHeaderString
WriteDataLine
SetTrailerString
CloseFile Now you could of course just go thru these in order, but say you didn't care about the file name (think a.out ) or what header and trailer string were included, you could just call AddDataLine . A less extreme example might be omitting the headers and trailers. Yet another might be setting the header and trailer strings before the file has been opened. Is this a principle of interface design that is recognised or just the POLA way before it was given a name? N.B. don't get bogged down in the minutiae of this interface, it is just an example for the sake of this question. | One way in which you can stick to the principle of least astonishment is to consider other principles such as ISP and SRP , or even DRY . In the specific example you've given, the suggestion seems to be that there's a certain dependency of ordering for manipulating the file; but your API controls both the file access and the data format, which smells a bit like a violation of SRP. Edit/Update: it also suggests that the API itself is asking the user to violate DRY, because they will need to repeat the same steps every time they use the API . Consider an alternative API where the IO operations are separate from the data operations. and where the API itself 'owns' the ordering: ContentBuilder SetHeader( ... )
AddLine( ... )
SetTrailer ( ... ) FileWriter Open(filename)
Write(content) throws InvalidContentException
Close() With the above separation, the ContentBuilder doesn't need to actually "do" anything apart from store the lines/header/trailer (Maybe also a ContentBuilder.Serialize() method which knows the order) . By following other SOLID principles it no longer matters whether you set the header or trailer before or after adding lines, because nothing in the ContentBuilder is actually written to file until its passed to FileWriter.Write . It also has the added benefit of being a little more flexible; for example, it might be useful to write the content out to a diagnostic logger, or maybe pass it across a network instead of writing it directly to a file. While designing an API you should also consider error reporting, whether that's a state, a return value, an exception, a callback, or something else. The user of the API will probably expect to be able to programmatically detect any violations of its contracts, or even other errors which it can't control such as file I/O errors. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/315703",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/73508/"
]
} |
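One possible Java rendering of the split sketched in the answer above; the method names follow the answer, while everything else (field types, the use of java.io.Writer, the exception choice) is an assumption made for the sake of the example.
import java.io.IOException;
import java.io.Writer;
import java.util.ArrayList;
import java.util.List;
class ContentBuilder {
    private String header = "";
    private String trailer = "";
    private final List<String> lines = new ArrayList<>();
    void setHeader(String header)   { this.header = header; }
    void addLine(String line)       { this.lines.add(line); }
    void setTrailer(String trailer) { this.trailer = trailer; }
    // The builder owns the ordering, so callers may set the parts in any order.
    String serialize() {
        StringBuilder sb = new StringBuilder();
        if (!header.isEmpty()) sb.append(header).append('\n');
        for (String line : lines) sb.append(line).append('\n');
        if (!trailer.isEmpty()) sb.append(trailer).append('\n');
        return sb.toString();
    }
}
class DataFileWriter {
    // I/O is a separate concern; nothing is written until the content is complete.
    void write(Writer out, ContentBuilder content) throws IOException {
        out.write(content.serialize());
    }
}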
315,718 | I'm writing classes that "must be used in a specific way" (I guess all classes must...). For example, I create the fooManager class, which requires a call to, say, Initialize(string,string) . And, to push the example a little further, the class would be useless if we don't listen to its ThisHappened action. My point is, the class I'm writing requires method calls. But it will compile just fine if you don't call those methods and will end up with an empty new FooManager. At some point, it will either not work, or maybe crash, depending on the class and what it does. The programmer that implements my class would obviously look inside it and realize "Oh, I didn't call Initialize!", and it'd be fine. But I don't like that. What I would ideally want is the code to NOT compile if the method wasn't called; I'm guessing that's simply not possible. Or something that would immediately be visible and clear. I find myself troubled by the current approach I have here, which is the following: Add a private Boolean value in the class, and check everywhere necessary if the class is initialized ; if not, I will throw an exception saying "The class wasn't initialized, are you sure you're calling .Initialize(string,string) ?". I'm kind of okay with that approach, but it leads to a lot of code that is compiled and, in the end, not necessary to the end user. Also, it's sometimes even more code when there are more methods than just an Initiliaze to call. I try to keep my classes with not too many public methods/actions, but that's not solving the problem, just keeping it reasonable. What I'm looking for here is: Is my approach correct? Is there a better one? What do you guys do/advise? Am I trying to solve a non-issue? I've been told by colleagues it's the programmer's to check the class before trying to use it. I respectfully disagree, but that's another matter I believe. Put it simply, I'm trying to figure out a way to never forget to implement calls when that class is reused later, or by someone else. CLARIFICATIONS : To clarify many questions here : I'm definitely NOT only talking about the Initialisation part of a class, but rather it's whole lifetime. Prevent colleagues to call a method twice, making sure they call X before Y, etc. Anything that would end up being mandatory and in documentation, but that I would like in code and as simple and small as possible. I really liked the idea of Asserts, though I'm quite sure I'll need to mix some other ideas as Asserts will not always be possible. I'm using the C# language ! How did I not mention that?! I'm in a Xamarin environment and building mobile apps usually using about 6 to 9 projects in a solution, including PCL's, iOS, Android and Windows projects. I've been a developer for about a year and a half (school and work combined), hence my sometimes ridiculous statements\questions. All that is probably irrelevant here, but too much information isn't always a bad thing. I can't always put everything that is mandatory in the constructor, due to platform restrictions and the use of Dependency Injection, having parameters other than Interfaces is off the table. Or maybe my knowledge is not sufficient, which is highly possible.
Most of the time it's not an Initialisation issue, but more how can I make sure he registered to that event ? how can I make sure he didn't forget to "stop the process at some point" Here I remember an Ad fetching class. As long as the view where the Ad is visible is visible, the class would fetch a new ad every minute. That class needs a view when constructed where it can display the Ad, that could go in a parameter obviously. But once the view is gone, StopFetching() must be called. Otherwise the class would keep fetching ads for a view that isn't even there, and that's bad. Also, that class has events that must bé listened to, like "AdClicked" for example. Everything works fine if not listened to, but we lose tracking of analytics there if taps aren't registered. The Ad still works though, so the user and developer won't see a difference, and analytics will just have wrong data. That needs to be, avoided, but I'm not sure how developer can know they must register to the tao event. That is a simplified example though, but the idea is there, "make sure he uses the public Action that is available" and at the right times of course! | In such cases, it is best to use the type system of your language to help you with proper initialization. How can we prevent a FooManager from being used without being initialized? By preventing a FooManager from being created without the necessary information to properly initialize it. In particular, all initialization is the responsibility of the constructor. You should never let your constructor create an object in an illegal state. But callers need to construct a FooManager before they can initialize it, e.g. because the FooManager is passed around as a dependency. Don't create a FooManager if you don't have one. What you can do instead is pass an object around that lets you retrieve a fully constructed FooManager , but only with the initialization information. (In functional-programming speak, I'm suggesting you use partial application for the constructor.) E.g.: ctorArgs = ...;
getFooManager = (initInfo) -> new FooManager(ctorArgs, initInfo);
...
getFooManager(myInitInfo).fooMethod(); The problem with this is that you have to supply the init info every time you access the FooManager . If it's necessary in your language, you can wrap the getFooManager() operation in a factory-like or builder-like class. I really want to do runtime checks that the initialize() method was called, rather than using a type-system-level solution. It is possible to find a compromise. We create a wrapper class MaybeInitializedFooManager that has a get() method that returns the FooManager , but throws if the FooManager wasn't fully initialized. This only works if the initialization is done through the wrapper, or if there is a FooManager#isInitialized() method. class MaybeInitializedFooManager {
private final FooManager fooManager;
public MaybeInitializedFooManager(CtorArgs ctorArgs) {
fooManager = new FooManager(ctorArgs);
}
public FooManager initialize(InitArgs initArgs) {
fooManager.initialize(initArgs);
return fooManager;
}
public FooManager get() {
if (fooManager.isInitialized()) return fooManager;
throw ...;
}
} I don't want to change the API of my class. In that case, you'll want to avoid the if (!initialized) throw; conditionals in each and every method. Fortunately, there is a simple pattern to solve this. The object you provide to users is just an empty shell that delegates all calls to an implementation object. By default, the implementation object throws an error for each method that it wasn't initialized. However, the initialize() method replaces the implementation object with a fully-constructed object. class FooManager {
private CtorArgs ctorArgs;
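// delegation target: starts out as UninitializedImpl and is swapped for MainImpl by initialize()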
private Impl impl;
public FooManager(CtorArgs ctorArgs) {
this.ctorArgs = ctorArgs;
this.impl = new UninitializedImpl();
}
public void initialize(InitArgs initArgs) {
impl = new MainImpl(ctorArgs, initArgs);
}
public X foo() { return impl.foo(); }
public Y bar() { return impl.bar(); }
}
interface Impl {
X foo();
Y bar();
}
class UninitializedImpl implements Impl {
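// stand-in used before initialize() is called: every method fails fast instead of misbehaving silently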
public X foo() { throw ...; }
public Y bar() { throw ...; }
}
class MainImpl implements Impl {
public MainImpl(CtorArgs c, InitArgs i) { ... }
public X foo() { ... }
public Y bar() { ... }
} This extracts the main behaviour of the class into the MainImpl . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/315718",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/151303/"
]
} |
315,810 | Is there any good reason to supply a 32-bit version along with a 64-bit version of any software targeted at modern desktop machines, running modern 64-bit operating systems on 64-bit hardware? It seems that 64-bit software would be more efficient, allow for higher memory usage if needed, etc. Apple even uses 64-bit processors for their phones, even though they only have 1-2 GB of RAM, way below the 4 GB limit for 32-bit CPU's. | Benefits of 32-bit software in 64-bit environments Lower memory footprint, especially in pointer-heavy applications, 64-bit vs 32-bit can easily double the memory requirements. Object files are smaller as well. Compatibility with 32-bit environments. Memory leaks are hard capped to 2 GB, 3 GB, or 4 GB and won't swamp the entire system. Drawbacks of 32-bit software in 64-bit environments 2 GB, 3 GB, or 4 GB memory limit per process. (Just per process, in sum multiple 32-bit processes may use the full available system memory.) Not using additional registers and instruction set extensions depending on x64. This is highly compiler and CPU specific. May require 32-bit versions of all (most Linux distributions) or uncommon (most Windows versions) libraries and run time environments. If a 32-bit version of a shared library is loaded exclusively for your application, and that counts towards your footprint. No difference at all if you are linking statically. Other aspects Drivers are usually not an issue. Only user-space libraries should differ between 32-bit and 64-bit, not the API of kernel modules. Beware of different default widths for integer datatypes, additional testing needed. The 64-bit CPU architecture may not even support 32-bit at all. Certain techniques like ASLR and others depending on a much larger address space than physical memory won't work well (or at all) in a 32-bit execution mode. Unless comparing a very specific CPU architecture, operating system and library infrastructure here, I won't be able to go into more details. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/315810",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/128997/"
]
} |
315,964 | Most of the time while writing loops I write wrong boundary conditions (e.g. wrong outcome) or my assumptions about loop termination are wrong (e.g. an infinitely running loop). Although I got my assumptions correct after some trial and error, I got too frustrated because of the lack of a correct computing model in my head. /**
* Inserts the given value in proper position in the sorted subarray i.e.
* array[0...rightIndex] is the sorted subarray, on inserting a new value
* our new sorted subarray becomes array[0...rightIndex+1].
* @param array The whole array whose initial elements [0...rightIndex] are
* sorted.
* @param rightIndex The index till which sub array is sorted.
* @param value The value to be inserted into sorted sub array.
*/
function insert(array, rightIndex, value) {
for(var j = rightIndex; j >= 0 && array[j] > value; j--) {
array[j + 1] = array[j];
}
array[j + 1] = value;
}; The mistakes that I did initially were: Instead of j >= 0 I kept it j > 0. Got confused whether array[j+1] = value or array[j] = value. What are tools/mental models to avoid such mistakes? | Test No, seriously, test. I've been coding for over 20 years and I still don't trust myself to write a loop correctly the first time. I write and run tests that prove it works before I suspect it works. Test each side of every boundary condition. For example a rightIndex of 0 should do what? How about -1? Keep it simple If others can't see what it does at a glance you're making it too hard. Please feel free to ignore performance if it means you can write something easy to understand. Only make it faster in the unlikely event that you really need to. And even then only once you're absolutely sure you know exactly what is slowing you down. If you can achieve an actual Big O improvement this activity may not be pointless, but even then, make your code as readable as possible. Off by one Know the difference between counting your fingers and counting the spaces between your fingers. Sometimes the spaces are what is actually important. Don't let your fingers distract you. Know whether your thumb is a finger. Know whether the gap between your pinky and thumb counts as a space. Comments Before you get lost in the code try to say what you mean in English. State your expectations clearly. Don't explain how the code works. Explain why you're having it do what it does. Keep implementation details out of it. It should be possible to refactor the code without needing to change the comment. The best comment is a good name. If you can say everything you need to say with a good name, DON'T say it again with a comment. Abstractions Objects, functions, arrays, and variables are all abstractions that are only as good as the names they are given. Give them names that ensure when people look inside them they won't be surprised by what they find. Short names Use short names for short lived things. i is a fine name for an index in a nice tight loop in a small scope that makes it's meaning obvious. If i lives long enough to get spread out over line after line with other ideas and names that can be confused with i then it's time to give i a nice long explanatory name. Long names Never shorten a name simply due to line length considerations. Find another way to lay out your code. Whitespace Defects love to hide in unreadable code. If your language lets you choose your indentation style at least be consistent. Don't make your code look like a stream of word wrapped noise. Code should look like it's marching in formation. Loop constructs Learn and review the loop structures in your language. Watching a debugger highlight a for(;;) loop can be very instructive. Learn all the forms: while , do while , while(true) , for each . Use the simplest one you can get away with. Look up priming the pump . Learn what break and continue do if you have them. Know the difference between c++ and ++c . Don't be afraid to return early as long as you always close everything that needs closing. Finally blocks or preferably something that marks it for automatic closing when you open it: Using statement / Try with Resources . Loop alternatives Let something else do the looping if you can. It's easier on the eyes and already debugged. These come in many forms: collections or streams that allow map() , reduce() , foreach() , and other such methods that apply a lambda. Look for specialty functions like Arrays.fill() . 
There is also recursion but only expect that to make things easy in special cases. Generally don't use recursion until you see what the alternative would look like. If someone tells you tail recursion will magically keep you from blowing the stack then first check if your language optimizes tail recursion. Not all of them do. Oh, and test. Test, test, test. Did I mention testing? There was one more thing. Can't remember. Started with a T... | {
"source": [
"https://softwareengineering.stackexchange.com/questions/315964",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/198298/"
]
} |
316,208 | I'm building a RESTful API that supports queuing long-running tasks for eventual handling. The typical workflow for this API would be: User fills in form Client posts data to API API returns 202 Accepted Client redirects user to a unique URL for that request ( /results/{request_id} ) ~eventually~ Client visits URL again, and sees the results on that page. My trouble is on step 6. Any time a user visits the page, I file a request to my API ( GET /api/results/{request_id} ). Ideally, the task will have been completed by now, and I'd return a 200 OK with the results of their task. But users are pushy, and I expect many overzealous refreshes, when the result is not yet finished processing. What is my best option for a status code to indicate that: this request exists, it's not done yet, but it also hasn't failed. I don't expect a single code to communicate all of that, but I'd like something that lets me pass metadata instead of having the client expect content. It could make sense to return a 202, since that would have no other meaning here: it's a GET request, so nothing is possibly being "accepted." Would that be a reasonable choice? The obvious alternative to all this -- which functions, but defeats one purpose of status codes -- would be to always include the metadata: 200 OK
{
status: "complete",
data: {
foo: "123"
}
} ...or... 200 OK
{
status: "pending"
} Then client-side, I would (sigh) switch on response.data.status to determine whether the request was completed. Is this what I should be doing? Or is there a better alternative? This just feels so Web 1.0 to me. | HTTP 202 Accepted (HTTP/1.1) You are looking for HTTP 202 Accepted status. See RFC 2616 : The request has been accepted for processing, but the processing has not been completed. HTTP 102 Processing (WebDAV) RFC 2518 suggests using HTTP 102 Processing : The 102 (Processing) status code is an interim response used to
inform the client that the server has accepted the complete request,
but has not yet completed it. but it has a caveat: The server MUST send a final response after the request has been completed. I'm not sure how to interpret the last sentence. Should the server avoid sending anything during the processing, and respond only after the completion? Or it only forces to end the response only when the processing terminates? This could be useful if you want to report progress. Send HTTP 102 and flush response byte by byte (or line by line). For instance, for a long but linear process, you can send one hundred dots, flushing after each character. If the client side (such as a JavaScript application) knows that it should expect exactly 100 characters, it can match it with a progress bar to show to the user. Another example concerns a process which consists of several non-linear steps. After each step, you can flush a log message which would eventually be displayed to the user, so that the end user could know how the process is going. Issues with progressive flushing Note that while this technique has its merits, I wouldn't recommend it . One of the reasons is that it forces the connection to remain open, which could hurt in terms of service availability and doesn't scale well. A better approach is to respond with HTTP 202 Accepted and either let the user to get back to you later to determine whether the processing ended (for instance by calling repeatedly a given URI such as /process/result which would respond with HTTP 404 Not Found or HTTP 409 Conflict until the process finishes and the result is ready), or notify the user when the processing is done if you're able to call the client back for instance through a message queue service ( example ) or WebSockets. Practical example Imagine a web service which converts videos. The entry point is: POST /video/convert which takes a video file from the HTTP request and does some magic with it. Let's imagine that the magic is CPU-intensive, so it cannot be done in real-time during the transfer of the request. This means that once the file is transferred, the server will respond with a HTTP 202 Accepted with some JSON content, meaning “Yes, I got your video, and I'm working on it; it will be ready somewhere in the future and will be available through the ID 123.” The client has a possibility to subscribe to a message queue to be notified when the processing finishes. Once it is finished, the client can download the processed video by going to: GET /video/download/123 which leads to an HTTP 200 . What happens if the client queries this URI before receiving the notification? Well, the server will respond with HTTP 404 since, indeed, the video doesn't exist yet. It may be currently prepared. It may never been requested. It may exist some time in the past and be removed later. All that matters is that the resulting video is not available. Now, what if the client cares not only about the final video, but also about the progress (which would be even more important if there is no message queue service or any similar mechanism)? In this case, you can use another endpoint: GET /video/status/123 which would result a response similar to this: HTTP 200
{
"id": 123,
"status": "queued",
"priority": 2,
"progress-percent": 0,
"submitted-utc-time": "2016-04-19T13:59:22"
} Doing the request over and over will show the progress until it's: HTTP 200
{
"id": 123,
"status": "done",
"progress-percent": 100,
"submitted-utc-time": "2016-04-19T13:59:22"
} It is crucial to make a difference between those three types of requests: POST /video/convert queues a task. It should be called only once: calling it again would queue an additional task. GET /video/download/123 concerns the result of the operation: the resource is the video. The processing—that is what happened under the hood to prepare the actual result prior to request and independently to the request—is irrelevant here. It can be called once or several times. GET /video/status/123 concerns the processing per se . It doesn't queue anything. It doesn't care about the resulting video. The resource is the processing itself. It can be called once or several times. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/316208",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/124633/"
]
} |
316,217 | Why is it that in nearly all modern programming languages (Go, Rust, Kotlin, Swift, Scala, Nim, even Python last version) types always come after the variable name in the variable declaration, and not before? Why x: int = 42 and not int x = 42 ? Is the latter not more readable than the former? Is it just a trend or are there any really meaningful reasons behind this solution? | All of the languages you mentioned support type inference , which means the type is an optional part of the declaration in those languages because they're smart enough to fill it in themselves when you provide an initialization expression that has an easily-determined type. That matters because putting the optional parts of an expression farther to the right reduces parsing ambiguity, and increases consistency between the expressions that do use that part and the ones that don't. It's just simpler to parse a declaration when you know the var keyword and the variable name are both mandatory before you get to the optional stuff. In theory, all of those things which make it easier to parse for computers should improve overall readability for humans too, but that's a lot more debatable. This argument gets especially strong when you consider all the optional type modifiers that a "non-modern" language like C++ has, such as * for pointers, & for references, const , volatile and so on. Once you throw in commas for multiple declarations, you start to get some really strange ambiguities like int* a, b; not making b a pointer. Even C++ now supports "type on the right" declarations in the form of auto x = int{ 4 }; , and it does have some advantages . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/316217",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/225212/"
]
} |
316,407 | I often find myself struggling to decide which of these two ways to use when I require to use common data across some methods in my classes. What would be a better choice? In this option, I can create an instance variable to avoid the need of having to declare additional variables, and also to avoid defining method parameters, but it may be not so clear where those variables are being instantiated/modified: public class MyClass {
private int var1;
MyClass(){
doSomething();
doSomethingElse();
doMoreStuff();
}
private void doSomething(){
var1 = 2;
}
private void doSomethingElse(){
int var2 = var1 + 1;
}
private void doMoreStuff(){
int var3 = var1 - 1;
}
} Or just instantiating local variables and passing them as arguments? public class MyClass {
MyClass(){
int var1 = doSomething();
doSomethingElse(var1);
doMoreStuff(var1);
}
private int doSomething(){
int var = 2;
return var;
}
private void doSomethingElse(int var){
int var2 = var + 1;
}
private void doMoreStuff(int var){
int var3 = var - 1;
}
} If the answer is that they are both correct, which one is seen/used more often? Also, any additional pros/cons you can provide for each option would be very valuable. | I'm surprised this hasn't been mentioned yet... It depends on whether var1 is actually part of your object's state. You assume that both of these approaches are correct and that it's just a matter of style. You are wrong. This is entirely about how to model properly. Similarly, private instance methods exist to mutate your object's state. If that's not what your method is doing then it should be private static . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/316407",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/225483/"
]
} |
316,526 | We're looking for a good source control and project management solution at my workplace and I've suggested creating a GitHub organization and private repositories. I love GitHub for many reasons, but this isn't about GitHub (in fact my colleagues are going to present points in favor of competing platforms) - it's about storing our private code online . I'm trying to understand whether this is a good idea or not. It definitely seems advantageous because it removes need for server costs (at least directly) and also makes it easier to search code (everything is online). However our team is undecided and leads me to my question, what should we be considering in order to make this decision? | As a pro, If your company's office burns down, the code is still on the server. If your company's office doesn't burn down, but the server on which your git repository is located DOES, then you still have a local copy. If you host your repository on your server in your company's office building (like you would with a Network shared drive...?), then if the company's office burns down, you lose both. Of course, you still need backups as usual... Feel free to replace "burns down" with "gets infected with ransomware". Basically, availability is up. As a con, You'd have to share your files with the 3rd party that will host your code. If you've got really big company secrets, this might not be allowed. For instance, if you have a database containing personal info from european citizens, you might not be allowed to host your code on a third party from the USA - because they'd be subject to US law and thus couldn't be relied upon to uphold EU privacy laws. Even if it is not a legal issue, you should be aware that the third party could be bribed into giving your private files away. This would likely be really bad for the third party (huge reputation penalty), but it could happen. Basically, confidentiality is down. If you are okay with trading confidentiality for availability, then hosting your private code online with a third party is a good idea. Otherwise, don't. You could explain the trade-offs to allow your boss to make an intelligent decision - but you might hear "no". That's what can happen if you give someone a decision. If your boss says no, then that's that. I don't think forcibly convincing your boss is a very good idea. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/316526",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/68834/"
]
} |
316,531 | I have written a class which represents a SQLite Trigger. public SQLiteTrigger(string Name,
string On,
TriggerStartType StartType,
TriggerEventType EventType) : this(...)
public SQLiteTrigger(string Name,
string On,
TriggerStartType StartType,
TriggerEventType EventType,
string TriggerSQL) : this(...)
public SQLiteTrigger(string Name,
string On,
TriggerStartType StartType,
TriggerEventType EventType,
string TriggerSQL,
string When) I'm thinking about adding even more constructors with more parameters, so nearly every Trigger creation could be a one liner. Is it against any design rules or considered as bad practice when you give a class many constructors and assign via them as many properties as possible? | Excuse me while I react to everyone suggesting the builder pattern here: This is C#, not Java! A main reason for Joshua Bloch's builder pattern is to hack around Java's lack of named arguments. This gives Java a way around the evil telescoping constructor pattern . You're in C#. You have named arguments! Another reason for Joshua Bloch's builder pattern is to separate required arguments from optional arguments (those that have a good default value) and allow any combination of optional arguments to be set. This is needed because Java doesn't natively support optional arguments. You're in C#. You have optional arguments! That means the 3 constructors you've listed should be replaced with just 1: public SQLiteTrigger(
string Name,
string On,
TriggerStartType StartType,
TriggerEventType EventType,
string TriggerSQL = "some default string",
string When = "some other default string"
) And now, unlike before, clients can change When without fiddling with TriggerSQL . new SQLiteTrigger(
Name: "MyTrigger",
On: "Whatever",
StartType: new TriggerStartType(),
EventType: new TriggerEventType(),
When: "Now"
) Compared to the Bloch builder this is: easier for clients (humans) to use, easier to implement, and a more flexible design. Don't get me wrong, I love the Bloch builder. In Java. Don't use hacky workarounds in languages that don't need them. Now you asked about good style and you mentioned adding more parameters. Be careful of adding too many. This is called arity. Too much arity is a code smell that may indicate a flaw in your underlying design. There are ways to redesign to reduce arity. If those additional parameters are more complicated than the simple required vs. optional (with a good known default) pattern then you might be interested in the next step beyond the Bloch builder: the DSL builder . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/316531",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/225638/"
]
} |
316,812 | I am approaching a project where I'll be having to implement a database with my boss; we're a very small start up so the work environment is deeply personal. He had given me one of the company databases before and it completely went against what I was taught (and read about) in school for RDBMS. For example, there are entire databases here that consist of one table (per independent database). One of those tables is 20+ columns long and for context, here are some of the column names from one table: lngStoreID | vrStoreName | lngCompanyID | vrCompanyName | lngProductID | vrProductName The point being is that where he should have individual tables that hold the entity data (name, size, date purchased, etc.) he shoves it all in one large table per database. I want to improve this design, but I am not sure why a properly-normalized and segmented data model would actually improve this product. While I am familiar with database design from college and I understand how to do it, I am unsure why this actually improves databases. Why does a good relational schema improve a database? | The performance argument is usually the one which is most intuitive. You especially want to point out how it will be difficult to add good indexes in an incorrectly normalized database (note: there are edge-cases where denormalization can in fact improve performance, but when you are both inexperienced with relational databases you will likely not easily see these cases). Another is the storage size argument. A denormalized table with lots of redundancies will require far more storage. This also plays into the performance aspect: the more data you have, the slower your queries will be. There is also an argument which is a bit harder to understand, but is in fact more important because you can't solve it by throwing more hardware at it. That's the data consistency problem. A properly normalized database will take care by itself that a product with a specific ID always has the same name. But in a denormalized database such inconsistencies are possible, so special care needs to be taken when it comes to avoiding inconsistencies, which will take up programming time to get right and will still cause bugs which will cost you in customer satisfaction. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/316812",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/218080/"
]
} |
316,822 | In every place I've looked, it says that double is superior to float in almost every way. float has been made obsolete by double in Java, so why is it still used? I program a lot with Libgdx, and they force you to use float (deltaTime, etc.), but it seems to me that double is just easier to work with in terms of storage and memory. I also read When do you use float and when do you use double , but if float is really only good for numbers with a lot of digits after the decimal point, then why can't we just use one of the many variations of double ? Is there any reason as to why people insist on using floats even though it doesn't really have any advantages anymore? Is it just too much work to change it all? | LibGDX is a framework mostly used for game development. In game development you usually have to do a whole lot of number crunching in real-time and any performance you can get matters. That's why game developers usually use float whenever float precision is good enough. The size of the FPU registers in the CPU is not the only thing you need to consider in this case. In fact most of the heavy number crunching in game development is done by the GPU, and GPUs are usually optimized for floats, not doubles . And then there is also: memory bus bandwidth (how fast you can shovel data between RAM, CPU and GPU) CPU cache (which makes the previous less necessary) RAM VRAM which are all precious resources of which you get twice as much when you use 32bit float instead of 64bit double. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/316822",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/213588/"
]
} |
316,893 | If my class implements an interface then can I say that I'm following inheritance? I know that when a class extends another class then it's inheritance. | UPDATE: I've revised this answer. A number of good points were raised in the comments that deserved calling out. If my class implements an interface then can I say that I'm following inheritance? It is not entirely clear what you mean by "following inheritance". Let's ask a slightly different question? What is inheritance? When members of one type X are considered to be members of another type Y, those members of Y are inherited from X . There is an inheritance relationship between some types; that is, for some types X and Y we say "Y inherits from X". These are subtly different. That is unfortunate because it is confusing. What confusions typically arise from this subtle distinction? Confusion may arise because people think of inheritance as a mechanism for sharing implementation details. Though it is such a mechanism, that mechanism works by sharing members . Those members need not have implementations! As we will see, they can be abstract. I personally would be happier if the Java and C# specifications used a word other than "inherits" to describe the relationship between interface methods and classes, to avoid this confusion. But they do not, and we have to reason from the specifications, not against them. In Java, are interface members inherited by classes which implement them? Yes, some are. See the Java specification section 8.4.8, which I quote here for your convenience. A class C inherits from its direct superclass and direct superinterfaces all abstract and default methods m for which all of the following are true: [...] If you say that a class implements an interface then the class inherits the abstract and default methods of that interface . (Of course I have omitted the conditions which follow; see the specification for details. In particular, a class which implements a member of an interface is not considered to have inherited that member. Again, is this confusing? Yes.) Do we typically say in Java that a class inherits from an interface? Typically we would say that a class implements an interface. As noted above, a class may inherit members from an interface, and yet still not be said to inherit from the interface. Which is confusing, yes. Does this subtle distinction matter in day-to-day work? Typically not. This sort of narrow parsing of the specification is more useful to compiler writers than line of business developers. Its more important to understand when to use an interface than it is to get a precise definition of "inherits from". | {
"source": [
"https://softwareengineering.stackexchange.com/questions/316893",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/221644/"
]
} |
316,969 | If I want to compare two numbers (or other well-ordered entities), I would do so with x < y . If I want to compare three of them, the high-school algebra student will suggest trying x < y < z . The programmer in me will then respond with "no, that's not valid, you have to do x < y && y < z ". Most languages I've come across don't seem to support this syntax, which is odd given how common it is in mathematics. Python is a notable exception. JavaScript looks like an exception, but it's really just an unfortunate by-product of operator precedence and implicit conversions; in node.js, 1 < 3 < 2 evaluates to true , because it's really (1 < 3) < 2 === true < 2 === 1 < 2 . So, my question is this: Why is x < y < z not commonly available in programming languages, with the expected semantics? | These are binary operators, which when chained, normally and naturally produce an abstract syntax tree like: When evaluated (which you do from the leaves up), this produces a boolean result from x < y , then you get a type error trying to do boolean < z . In order for x < y < z to work as you discussed, you have to create a special case in the compiler to produce a syntax tree like: Not that it isn't possible to do this. It obviously is, but it adds some complexity to the parser for a case that doesn't really come up that often. You're basically creating a symbol that sometimes acts like a binary operator and sometimes effectively acts like a ternary operator, with all the implications of error handling and such that entails. That adds a lot of space for things to go wrong that language designers would rather avoid if possible. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/316969",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/95881/"
]
} |
317,055 | I don't have a formal computer science education, meaning that I did not study computer science topics in a university. However, I work at a programming job and write a reasonable amount of code. Naturally, programming means that I also have to document and comment my code. The problem I seem to be having is deciding between two schools of thought: Comment the hell out of your code, e.g. unsigned int subtractor (int subtractee, int subtractor)
{
/*
- This function is a subractor and is used to subtract one integer
from another.
- In other words the case here is subtractee - subtractor.
- Please ensure that the subtractee is larger than the subtractor
because this function will have undefined behaviour for negative
answers.
*/
(The code here...)
} The code that explains itself let it be (no need to comment) like in this example. Just use good meaningful names. Perhaps this example code makes one think that option (2) is better, but programmers with experience on large projects know that sometimes only meaningful names are not good enough. I know some commenting at appropriate places is a good practice but is it also a good idea to comment like in (1) to describe almost all major methods/functions? The reason I ask is that a senior colleague at work told me to do as in (1) but now I am reading Clean Code by Robert C. Martin and it actually pretty simply states that (1) is a bad practice. There are many other questions about commenting on this website but this question is different from others on this website because I am asking about a specific way of commenting i.e. (1). UPDATE: What is the downside of writing more comments, this way a complete newbie can also understand it and also an advanced programmer (if he/she wants to read the comments anyway). But I understand that the advanced programmer will probably get a headache because he/she might read what they already know or can figure it out from the code anyway. Another downside I can foresee is that the more you comment the more the chance that it might contain errors which can lead to confusions, but other good reasons we might have to comment miserly? | I always tell my developers to comment the "Why?" not the "What?" or "How?". I can always figure out what something is doing from the code. But it is much more difficult to figure out why it was done. So many times I go to fix a bug and find that the code that causes the bug seems to be deliberate. I then have to worry what other behavior (that is currently relied on) will break as I fix this bug. A comment saying what the code does is of little use to me at that point, but a comment saying why the code is there is very useful. For maintainers (which is really who we put comments in for), " why " comments are really what is useful. UPDATE: Some also say to comment "What/How" on your tricky code. (Things that would be hard to figure out.) I agree with that as long as you cannot make the code "non-tricky" and still fulfill the requirements. An example of this is if you had to make some code harder to read in order to meet a performance requirement of the application. UPDATE II: "What" comments can easily atrophy as maintainers change the code and not the comments. This is why copious amounts of comments can be a bad thing. "Why" comments don't usually atrophy because even if the need/reason for some code/functionality changes, the "why" comment still tells the maintainer what the reason was at the time of coding (and leads to knowing if it is still needed). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/317055",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/226514/"
]
} |
317,245 | Say we have a normal pure function such as function add(a, b) {
return a + b
} And then we alter it such that it has a side effect function add(a, b) {
writeToDatabase(Math.random())
return a + b;
} It's not considered a pure function as far as I know because I often hear people call pure functions "functions without side effects." However, it does behave like a pure function as far as the fact that it will return the same output for the same inputs. Is there a different name for this type of function, is it unnamed, or is it still actually pure and I'm mistaken about the definition of purity? | I'm not sure about universal definitions of purity, but from the point of view of Haskell (a language where programmers tend to care about things such as purity and referential transparency), only the first of your functions is "pure". The second version of add isn't pure . So in answer to your question, I'd call it "impure" ;) According to this definition, a pure function is a function that: Only depends on its input. That is, given the same input, it will always return the same output. Is referentially transparent: the function can be freely replaced by its value and the "behavior" of the program will not change. With this definition, it's clear your second function cannot be considered pure, since it breaks rule 2. That is, the following two programs are NOT equivalent: function f(a, b) {
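// add() is evaluated twice here, so its side effect (the database write) also happens twice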
return add(a, b) + add(a, b);
} and function g(a, b) {
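// add() is evaluated once and its result reused, so the database write happens only once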
var c = add(a, b);
return c + c;
} This is because even though both functions will return the same value, function f will write to the database twice but g will write once! It's very likely that writes to the database are part of the observable behavior of your program, in which case I've shown your second version of add isn't "pure". If writes to the database aren't an observable part of your program's behavior, then both versions of add can be considered equivalent and pure. But I can't think of a scenario where writing to the database doesn't matter. Even logging matters! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/317245",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/176462/"
]
} |
317,495 | My current understanding of Inheritance implementation is that one should only extend a class if an IS-A relation is present. If the parent class can further have more specific child types with different functionality but will share common elements abstracted in the parent. I'm questioning that understanding because of what my Java professor is recommending us to do. He has recommended that for a JSwing application we are building in class One should extend all JSwing classes ( JFrame , JButton , JTextBox ,etc) into separate custom classes and specify GUI related customisation in them (like the component size, component label, etc) So far so good, but he further goes on to advise that every JButton should have its own custom extended class even though the only distinguishing factor is their label. For e.g. If the GUI has two buttons Okay and Cancel . He recommends they should be extended as below: class OkayButton extends JButton{
MainUI mui;
public OkayButton(MainUI mui) {
setSize(80,60);
setText("Okay");
this.mui = mui;
mui.add(this);
}
}
class CancelButton extends JButton{
MainUI mui;
public CancelButton(MainUI mui) {
setSize(80,60);
setText("Cancel");
this.mui = mui;
mui.add(this);
}
} As you can see, the only difference is in the setText function. So is this standard practice? Btw, the course where this was discussed is called Best Programming Practices in Java [Reply from the Prof] So I discussed the problem with the professor and raised all the points mentioned in the answers. His justification is that subclassing provides reusable code while following GUI design standards. For instance, if the developer has used custom Okay and Cancel buttons in one Window, it will be easier to place the same buttons in other Windows as well. I get the reason, I suppose, but it's still just exploiting inheritance and making the code fragile. Later on, any developer could accidentally call setText on an Okay button and change it. The subclass just becomes a nuisance in that case. | It's completely terrible in every possible way. At most, use a factory function to produce JButtons. You should only inherit from them if you have some serious extension needs. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/317495",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/205094/"
]
} |
317,558 | I've recently deleted a java answer of mine on Code Review , that started like this: private Person(PersonBuilder builder) { Stop. Red flag. A PersonBuilder would build a Person; it knows about a Person. The Person class shouldn't know anything about a PersonBuilder - it's just an immutable type. You've created a circular coupling here, where A depends on B which depends on A. The Person should just intake its parameters; a client that's willing to create a Person without building it should be able to do that. I was slapped with a downvote, and told that (quoting) Red flag, why? The implementation here has the same shape that which Joshua Bloch's demonstrated in his "Effective Java" book(item #2). So, it appears that the One Right Way of implementing a Builder Pattern in Java is to make the builder a nested type (this isn't what this question is about though), and then make the product (the class of the object being built) take a dependency on the builder , like this: private StreetMap(Builder builder) {
// Required parameters
origin = builder.origin;
destination = builder.destination;
// Optional parameters
waterColor = builder.waterColor;
landColor = builder.landColor;
highTrafficColor = builder.highTrafficColor;
mediumTrafficColor = builder.mediumTrafficColor;
lowTrafficColor = builder.lowTrafficColor;
} https://en.wikipedia.org/wiki/Builder_pattern#Java_example The same Wikipedia page for the same Builder pattern has a wildly different (and much more flexible) implementation for C#: //Represents a product created by the builder
public class Car
{
public Car()
{
}
public int Wheels { get; set; }
public string Colour { get; set; }
} As you can see, the product here does not know anything about a Builder class, and for all it cares it could be instantiated by a direct constructor call, an abstract factory, ...or a builder - as far as I understand it, the product of a creational pattern never needs to know anything about what's creating it. I've been served the counter-argument (which is apparently explicitly defended in Bloch's book) that a builder pattern could be used to rework a type that would have a constructor bloated with dozens of optional arguments. So instead of sticking to what I thought I knew I researched a bit on this site, and found that as I suspected, this argument does not hold water . So what's the deal? Why come up with over-engineered solutions to a problem that shouldn't even exist in the first place? If we take Joshua Bloch off his pedestal for a minute, can we come up with one single good, valid reason for coupling two concrete types and calling it a best practice? This all reeks of cargo-cult programming to me. | I disagree with your assertion that it is a red flag. I also disagree with your representation of the counterpoint, that there is One Right Way of representing a Builder pattern. To me there's one question: Is the Builder a necessary part of the type's API? Here, is the PersonBuilder a necessary part of the Person API? It is entirely reasonable that a validity-checked, immutable, and/or tightly-encapsulated Person class will necessarily be created through a Builder it provides (regardless of whether that builder is nested or adjacent). In doing so, the Person can keep all of its fields private or package-private and final, and also leave the class open for modification if it is a work in progress. (It may end up closed for modification and open for extension, but that's not important right now, and is especially controversial for immutable data objects.) If it is the case that a Person is necessarily created through a supplied PersonBuilder as part of a single package-level API design, then a circular coupling is just fine and a red flag isn't warranted here. Notably, you stated it as an incontrovertible fact and not an API design opinion or choice, so that may have contributed to a bad response. I wouldn't have downvoted you, but I don't blame someone else for doing so, and I'll leave the rest of the discussion of "what warrants a downvote" to the Code Review help center and meta . Downvotes happen; it's no "slap". Of course, if you want to leave a lot of ways open to build a Person object, then the PersonBuilder could become a consumer or utility of the Person class, on which the Person should not directly depend. This choice seems more flexible—who wouldn't want more object creation options?—but in order to hold to guarantees (like immutability or serializability) suddenly the Person has to expose a different kind of public creation technique, such as a very long constructor. If the Builder is meant to represent or replace a constructor with a lot of optional fields or a constructor open to modification, then the class's long constructor may be an implementation detail better left hidden. (Also bear in mind that an official and necessary Builder doesn't preclude you from writing your own construction utility, just that your construction utility may consume the Builder as an API as in my original question above.) p.s.: I note that the code review sample you listed has an immutable Person, but the counterexample from Wikipedia lists a mutable Car with getters and setters. 
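To make that contrast concrete, here is a minimal sketch (my own illustration, not code from the Code Review post or from Wikipedia, and the fields are invented) of an immutable Person whose only public creation path is its nested Builder:
public final class Person {
    private final String name;  // required
    private final int age;      // optional, defaults to 0
    // The private constructor plus the nested Builder make the Builder
    // the only public way to obtain a Person.
    private Person(Builder builder) {
        this.name = builder.name;
        this.age = builder.age;
    }
    public String getName() { return name; }
    public int getAge() { return age; }
    public static final class Builder {
        private final String name;
        private int age = 0;
        public Builder(String name) {
            if (name == null || name.isEmpty()) {
                throw new IllegalArgumentException("name is required");
            }
            this.name = name;
        }
        public Builder age(int age) {
            this.age = age;
            return this;
        }
        public Person build() {
            return new Person(this);
        }
    }
}
A caller writes new Person.Builder("Ada").age(36).build(), and the mutual dependency is deliberate: the Builder is part of Person's published API, which is exactly the situation described above in which the circular coupling is not a red flag.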
It may be easier to see the necessary machinery omitted from the Car if you keep the language and invariants consistent between them. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/317558",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/68834/"
]
} |
317,587 | Most of DDD tactical design patterns belong to object-oriented paradigm, and anemic model describes the situation when all business logic is put into services rather than objects thus making them a kind of DTO. In other words anemic model is a synonym of procedural style, which is not advised for complex model. I am not very experienced in pure functional programming, yet I'd like to know how DDD fits into FP paradigm and whether term 'anemic model' still exists in that case. Update : Recentlry published book and video on the subject. And one more video from Scott. | The way the "anemic model" problem is described doesn't translate well to FP as is. First it needs to be suitably generalized. At it's heart, an anemic model is a model which contains knowledge about how to properly use it that isn't encapsulated by the model itself. Instead, that knowledge is spread around a pile of related services. Those services should only be clients of the model, but due to its anemia they're held responsible for it. For example, consider an Account class that can't be used to activate or deactivate accounts or even lookup information about an account unless handled via an AccountManager class. The account should be responsible for basic operations on it, not some external manager class. In functional programming, a similar problem exists when data types don't accurately represent what they're supposed to model. Suppose we need to define a type representing user IDs. An "anemic" definition would state that user IDs are strings. That's technically feasible, but runs into huge problems because user IDs aren't used like arbitrary strings. It makes no sense to concatenate them or slice out substrings of them, Unicode shouldn't really matter, and they should be easily embeddable in URLs and other contexts with strict character and format limitations. Solving this problem usually happens in a few stages. A simple first cut is to say "Well, a UserID is represented equivalently to a string, but they're different types and you can't use one where you expect the other." Haskell (and some other typed functional languages) provides this feature via newtype : newtype UserID = UserID String This defines a UserID function which when given a String constructs a value that is treated like a UserID by the type system, but which is still just a String at runtime. Now functions can declare that they require a UserID instead of a string; using UserID s where you previously were using strings guards against code concatenating two UserID s together. The type system guarantees that can't happen, no tests required. The weakness here is that code can still take any arbitrary String like "hello" and construct a UserID from it. Further steps include creating a "smart constructor" function which when given a string checks some invariants and only returns a UserID if they're satisfied. Then the "dumb" UserID constructor is made private so if a client wants a UserID they must use the smart constructor, thereby preventing malformed UserIDs from coming into existence. Even further steps define the UserID data type in such a way that it's impossible to construct one that's malformed or "improper", simply by definition. For instance, defining a UserID as a list of digits: data Digit = Zero | One | Two | Three | Four | Five | Six | Seven | Eight | Nine
data UserID = UserID [Digit] To construct a UserID a list of digits must be provided. Given this definition, it's trivial to show that it's impossible for a UserID to exist that can't be represented in a URL. Defining data models like this in Haskell is often aided by advanced type system features like Data Kinds and Generalized Algebraic Data Types (GADTs) , which allow the type system to define and prove more invariants about your code. When data is decoupled from behavior your data definition is the only means you have to enforce behavior. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/317587",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/71962/"
]
} |
317,670 | In today's cross-platform C++ (or C) world we have : Data model | short | int | long | long long | pointers/size_t | Sample operating systems
...
LLP64/IL32P64 16 32 32 64 64 Microsoft Windows (x86-64 and IA-64)
LP64/I32LP64 16 32 64 64 64 Most Unix and Unix-like systems, e.g. Solaris, Linux, BSD, and OS X; z/OS
... What this means today, is that for any "common" (signed) integer, int will suffice and can possibly still be used as the default integer type when writing C++ application code. It will also - for current practical purposes - have a consistent size across platforms. Iff a use case requires at least 64 bits, we can today use long long , though possibly using one of the bitness-specifying types or the __int64 type might make more sense. This leaves long in the middle, and we're considering outright banning the use of long from our application code . Would this make sense , or is there a case for using long in modern C++ (or C) code that has to run cross platform? (platform being desktop, mobile devices, but not things like microcontrollers, DSPs etc.) Possibly interesting background links: What does the C++ standard state the size of int, long type to be? Why did the Win64 team choose the LLP64 model? 64-Bit Programming Models: Why LP64? (somewhat aged) Is long guaranteed to be at least 32 bits? (This addresses the comment discussion below. Answer .) | As you mention in your question, modern software is all about interoperating between platforms and systems on the internet. The C and C++ standards give ranges for integer type sizes, not specific sizes (in contrast with languages like Java and C#). To ensure that your software compiled on different platforms works with the same data the same way and to ensure that other software can interact with your software using the same sizes, you should be using fixed-size integers. Enter <cstdint> which provides exactly that and is a standard header that all compiler and standard library platforms are required to provide. Note: this header was only required as of C++11, but many older library implementations provided it anyway. Want a 64 bit unsigned integer? Use uint64_t . Signed 32 bit integer? Use int32_t . While the types in the header are optional, modern platforms should support all of the types defined in that header. Sometimes a specific bit width is needed, for example, in a data structure used for communicating with other systems. Other times it is not. For less strict situations, <cstdint> provides types that are a minimum width. There are least variants: int_leastXX_t will be an integer type of minimum XX bits. It will use the smallest type that provides XX bits, but the type is allowed to be larger than the specified number of bits. In practice, these are typically the same as the types described above that give exact number of bits. There are also fast variants: int_fastXX_t is at least XX bits, but should use a type that performs fast on a particular platform. The definition of "fast" in this context is unspecified. However, in practice, this typically means that a type smaller than a CPU's register size may alias to a type of the CPU's register size. For example, Visual C++ 2015's header specifies that int_fast16_t is a 32 bit integer because 32 bit arithmetic is overall faster on x86 than 16 bit arithmetic. This is all important because you should be able to use types that can hold the results of calculations your program performs regardless of platform. If a program produces correct results on one platform but incorrect results on another due to differences in integer overflow, that is bad. By using the standard integer types, you guarantee that the results on different platforms will be the same with regards to the size of integers used (of course there could be other differences between platforms besides integer width). 
So yes, long should be banned from modern C++ code. So should int , short , and long long . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/317670",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6559/"
]
} |
317,786 | Should I write unit tests for complex regular expressions in my application? On the one hand: they are easy to test because input and output format is often simple and well-defined, and they can often become so complex so tests of them specifically are valuable. On the other hand: they themselves are seldom part of the interface of some unit. It might be better to only test the interface and do that in a way that implicitly tests the regexes. EDIT: I agree with Doc Brown who in his comment notes that this is a special case of unit testing of internal components . But as internal components regexes have a few special characteristics: A single line regex can be really complex without really being a separate module. Regexes map input to output without any side effects and hence are really easy to test separately. | Testing dogmatism aside, the real question is whether it provides value to unit test complex regular expressions. It seems pretty clear that it does provide value (regardless of whether the regex is part of a public interface) if the regex is complex enough, since it allows you to find and reproduce bugs and prevent against regressions. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/317786",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/35437/"
]
} |
317,873 | Questions I am trying to understand whether Rust fundamentally and sufficiently improves upon the concurrency facilities of C++ so that to decide if I should spend the time to learn Rust. Specifically, how does idiomatic Rust improve upon, or at any rate diverge from, the concurrency facilities of idiomatic C++? Is the improvement (or divergence) mostly syntactical, or is it substantially an improvement (divergence) in paradigm? Or is it something else? Or is it not really an improvement (divergence) at all? Rationale I have recently been trying to teach myself C++14's concurrency facilities, and something feels not quite right. Something feels off. What feels off? Hard to say. It feels almost as though the compiler were not really trying to help me to write correct programs when it comes to concurrency. It feels almost as though I were using an assembler rather than a compiler. Admittedly, it is entirely probable that I yet suffer from a subtle, faulty concept when it comes to concurrency. Maybe I do not yet grok Bartosz Milewski's tension between stateful programming and data races. Maybe I don't quite understand how much of sound concurrent methodology is in the compiler and how much of it is in the OS. | A better concurrency story is one of the main goals of the Rust project, so improvements should be expected, provided we trust the project to achieve its goals. Full disclaimer: I have a high opinion of Rust and am invested in it. As requested, I'll try to avoid value judgements and describe differences rather than (IMHO) improvements . Safe and unsafe Rust "Rust" is composed of two languages: One that tries very hard to isolate you from the dangers of systems programming, and a more powerful one without any such aspirations. Unsafe Rust is a nasty, brutish language that feels a lot like C++. It allows you to do arbitrarily dangerous things, talk to the hardware, (mis-)manage memory manually, shoot yourself in the foot, etc. It is very much like C and C++ in that the correctness of the program is ultimately in your hands and the hands of all other programmers involved in it.
You opt into this language with the keyword unsafe , and as in C and C++, a single mistake in a single location can bring the whole project crashing down. Safe Rust is the "default", the vast vast majority of Rust code is safe, and if you never mention the keyword unsafe in your code, you never leave the safe language. The rest of the post will mostly concern itself with that language, because unsafe code can break any and all of the guarantees that safe Rust works so hard to give you. On the flip side, unsafe code is not evil and not treated as such by the community (it is, however, strongly discouraged when not necessary). It's dangerous, yes, but also important, because it allows building the abstractions that safe code uses. Good unsafe code uses the type system to prevent others from misusing it, and therefore the presence of unsafe code in a Rust program need not disturb the safe code. All the following differences exist because Rust's type systems has tools that C++'s doesn't have, and because the unsafe code that implements the concurrency abstractions uses these tools effectively. Non-difference: Shared/mutable memory Although Rust places more emphasis on message passing and very strictly controls shared memory, it does not rule out shared memory concurrency and explicitly supports the common abstractions (locks, atomic operations, condition variables, concurrent collections). Moreover, like C++ and unlike functional languages, Rust really likes traditional imperative data structures. There's no persistent/immutable linked list in the standard library. There's std::collections::LinkedList but it's like std::list in C++ and discouraged for the same reasons as std::list (bad use of cache). However, with reference to the title of this section ("shared/mutable memory"), Rust has one difference to C++:
It strongly encourages that memory be "shared XOR mutable", i.e., that memory is never shared and mutable at the same time.
Mutate memory as you like "in the privacy of your own thread", so to speak.
Contrast this with C++ where shared mutable memory is the default option and widely used. While the shared-xor-mutable paradigm is very important to the below differences, it is also a quite different programming paradigm that takes a while to get used to, and that places significant restrictions.
Occasionally one has to opt out of this paradigm, e.g., with atomic types ( AtomicUsize is the essence of shared mutable memory).
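A tiny sketch of what the shared-xor-mutable rule feels like in practice (not from the question, just an illustration):
fn main() {
    let mut v = vec![1, 2, 3];
    {
        let shared = &v;       // any number of shared (read-only) borrows is fine
        // v.push(4);          // rejected by the compiler: cannot mutate while a shared borrow exists
        println!("{}", shared[0]);
    }
    v.push(4);                 // fine again once the shared borrow is out of scope
}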
Note that locks also obey the shared-xor-mutable rule, because it rules out concurrent reads and writes (while one thread writes, no other threads can read or write). Non-difference: Data races are undefined behavior (UB) If you trigger a data race in Rust code, it's game over, just as in C++. All bets are off and the compiler can do whatever it pleases. However, it is a hard guarantee that safe Rust code does not have data races (or any UB for that matter).
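For instance (a sketch with an invented counter), the compiler forces you into Arc and Mutex before it will even build cross-thread mutation:
use std::sync::{Arc, Mutex};
use std::thread;
fn main() {
    let counter = Arc::new(Mutex::new(0));
    let handles: Vec<_> = (0..4).map(|_| {
        let counter = Arc::clone(&counter);
        thread::spawn(move || {
            *counter.lock().unwrap() += 1; // the data is only reachable through the lock guard
        })
    }).collect();
    for handle in handles { handle.join().unwrap(); }
    // handing the threads a plain &mut i32 instead would simply not compile
}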
This extends both to the core language and to the standard library.
If you can write a Rust program that doesn't use unsafe (including in third party libraries but excluding the standard library) which triggers UB, then that is considered a bug and will be fixed (this has already happened several times). This is of course in stark contrast to C++, where it's trivial to write programs with UB. Difference: Strict locking discipline Unlike C++, a lock in Rust ( std::sync::Mutex , std::sync::RwLock , etc.) owns the data it's protecting. Instead of taking a lock and then manipulating some shared memory that is associated with the lock only in the documentation, the shared data is inaccessible while you don't hold the lock. A RAII guard keeps the lock and simultaneously gives access to the locked data (this much could be implemented by C++, but isn't by the std:: locks). The lifetime system ensures that you can't keep accessing the data after you release the lock (drop the RAII guard). You can of course have a lock that contains no useful data ( Mutex<()> ), and just share some memory without explicitly associating it with that lock. However, having potentially unsynchronized shared memory requires unsafe . Difference: Prevention of accidental sharing Although you can have shared memory, you only share when you explicitly ask for it.
For example, when you use message passing (e.g. the channels from std::sync ), the lifetime system ensures that you don't keep any references to the data after you sent it to another thread. To share data behind a lock, you explicitly construct the lock and give it to another thread. To share unsynchronized memory with unsafe you, well, have to use unsafe . This ties into the next point: Difference: Thread-safety tracking Rust's type system tracks some notion of thread safety. Specifically, the Sync trait denotes types that can be shared by several threads without risk of data races, while Send marks those that can be moved from one thread to another. This is enforced by the compiler throughout the program, and thus library designers dare make optimizations that would be stupidly dangerous without these static checks. For example, C++'s std::shared_ptr which always uses atomic operations to manipulate its reference count, to avoid UB if a shared_ptr happens to be used by several threads. Rust has Rc and Arc , which differ only in that Rc uses non-atomic refcount operations and isn't threadsafe (i.e. doesn't implement Sync or Send ) while Arc is very much like shared_ptr (and implements both traits). Note that if a type doesn't use unsafe to manually implement synchronization, the presence or absence of the traits are inferred correctly. Difference: Very strict rules If the compiler cannot be absolutely sure that some code is free from data races and other UB, it will not compile, period . The aforementioned rules and other tools can get you quite far, but sooner or later you will want to do something that's correct, but for subtle reasons that escape the compiler's notice. It could be a tricky lock-free data structure, but it could also be something as mundane as "I write to random locations in a shared array but the indices are computed such that every location is written to by only one thread". At that point you can either bite the bullet and add a bit of unnecessary synchronization, or you reword the code such that the compiler can see its correctness (often doable, sometimes quite hard, occasionally impossible), or you drop into unsafe code. Still, it's extra mental overhead, and Rust does not give you any guarantees for the correctness of the unsafe code. Difference: Fewer tools Because of the aforementioned differences, in Rust it's much more rare that one writes code that may have a data race (or a use after free, or a double free, or ...).
While this is nice, it has the unfortunate side effect that the ecosystem for tracking down such errors is even more underdeveloped than one would expect given the youth and small size of the community. While tools like valgrind and LLVM's thread sanitizer could in principle be applied to Rust code, whether this actually works yet varies from tool to tool (and even those that work may be hard to set up, especially since you may not find any up-to-date resources on how to do it).
It doesn't really help that Rust currently lacks a real specification and in particular a formal memory model. In short, writing unsafe Rust code correctly is harder than writing C++ code correctly, despite both languages being roughly comparable in terms of capabilities and risks. Of course this must be weighed against the fact that a typical Rust program will contain only a relatively small fraction of unsafe code, whereas a C++ program is, well, fully C++. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/317873",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/53118/"
]
} |
317,897 | Example case: You have a truck, that can hold 2.8m by 3.2m by 16m of storage capacity.
You get a bunch of physical objects you want to store in that truck.
These objects are not always cubic, but sometimes rounded, or have concave shapes. The problem to solve I want to implement an algorithm, that finds the optimal way to put 3D objects of any shape into 3D space, without intersecting and using up the least amount of space (volume) and possibly other criteria(high fragility not to be put under high mass for example).
I'm not specifically asking on how to do it(although that would be great), but at least know how the process of working out the optimal layout is called, so I can do further research myself. The closest thing I've found was related to using least amount of leather for making shoes. There was a presentation that referenced a software doing that, but it's no longer available. Also adding a 3rd dimension to this doesn't seem trivial to me. I think I'm either searching for the wrong keywords on this, or there hasn't been done a lot of research on these kinds of problems(which seems unlikely to me, since I've heard of a friend it's a subject of engineering) Final questions How do you call the process of finding the optimal layout? Is there a generic approach to this type of problem(and what is it)? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/317897",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/227622/"
]
} |
317,901 | I am working on a project that involves a "team builder" type application, if you will using C#. For the sake of simplicity, let us say it involves the user creating a "Team." There are three teams to choose from. Each team has "positions," for example, such as Captain, Shooter and Runner. Each Captain has 3 possible choices, with different attributes such as "Name", "Skill" and "Age". To give more of a visual representation of this: Team1 =>CaptainATeam1 ===>Name ===>Skill ===>Age =>CaptainBTeam1 ===>Name ===>Skill ===>Age =>CaptaniCTeam1 ===>Name ===>Skill ===>Age Team2 =>CaptainATeam2 ===>Name ===>Skill ===>Age =>CaptainBTeam2 ===>Name ===>Skill ===>Age =>CaptaniCTeam2 ===>Name ===>Skill ===>Age Now, all of these attributes would be predefined and never change. So, CaptainATeam1 will ALWAYS be "James", "Skillful", "22". With all of that being said, this information would need to there for run-time usage. This application would not be connected to any type of database of some sort, and would run strictly as a stand alone application. My question is what is the correct way to go about doing this? The current thought I have is storing the data in a package with the application in the form of a flat-file for each team and position and having the application read them to memory at run time when needed. But I have also considered creating the datasets within individual classes as well, with something similar to this (not tested and written quickly on Notepad, but the concept is there): class Team1
var Captain;
var Shooter;
var Runner;
DataSet ds_team1 = new DataSet();
//used to populate a dataset to be used for a DropDown style selection list
public void PopulateCaptains()
{
DataTable dt_caps = new DataTable("Capitans");
dt_caps.Columns.Add("Name");
dt_caps.Columns.Add("Skill");
dt_caps.Columns.Add("Age");
//DataRow has no public constructor; rows are created from the table so they share its schema
DataRow dr_cap1 = dt_caps.NewRow();
DataRow dr_cap2 = dt_caps.NewRow();
DataRow dr_cap3 = dt_caps.NewRow();
dt_caps.Rows.Add(dr_cap1);
dt_caps.Rows.Add(dr_cap2);
dt_caps.Rows.Add(dr_cap3);
dr_cap1["Name"] = "James";
dr_cap1["Skill"] = "Skillful";
dr_cap1["Age"] = "22";
//so on and so forth.
} Obviously the example above would be very cumbersome coding wise as opposed to just writing a foreach loop through flat files stored in the application's folder. However, that allows for the user to manipulate information in this file, causing the program to break. So, what would be the correct approach to dealing with this? Is there a correct way for dealing with this? Note If this is considered way too much of a subjective or broad question, please let me know so I can either go look for more information, provide more information, or clarify anything. Thank you for your time. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/317901",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/222133/"
]
} |
317,949 | Let's say I have a MediaPlayer class which has play() and stop() methods. What is the best strategy to use when implementing the stop method in case when the play method has not been called before. I see two options: throw an exception because the player is not in appropriate state, or silently ignore calls to the stop method. What should be the general rule when a method is not supposed to be called in some situation, but it's execution does no harm to the program in general? | There is no rule. It's entirely up to how you want to make your API "feel." Personally, in a music player, I think a transition from the state Stopped to Stopped by means of the method Stop() is a perfectly valid state transition. It's not very meaningful, but it is valid. With this in mind, throwing an exception would seem pedantic and unfair. It would make the API feel like the social equivalent of talking to the annoying kid on the school bus. You're stuck with the annoying situation, but you can deal with it. A more "sociable" approach is to acknowledge that the transition is at worst harmless and be kind to your consuming developers by allowing it. So, if it were up to me, I'd skip the exception. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/317949",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/82879/"
]
} |
317,956 | I've recently poured a couple of hours into JavaScript because I wanted to benefit from the massive userbase. Doing that I have noticed a pattern that most people attribute to dynamic languages. You get things working really quickly, but once your code reaches a certain size you waste much time with type, spelling and refactoring errors in general. Errors a compiler would normally spare me from. And not have me looking for errors in the logic when I just made typo in another module. Considering the incredible following JavaScript and other dynamically typed languages have I am lead to believe that there's something wrong with my approach. Or is this just the price you have to pay? To put it more concisely: How do you approach a JavaScript (or any other dynamic language for that matter) project with ~2000 LOC? Are there tools to prevent me from making those mistakes? I have tried flow by Facebook and JSHint which somewhat help, but don't catch typos. | Specifically speaking of JavaScript, you could use TypeScript instead.
It offers some of the things you are referring to.
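For example (a small sketch, not tied to any particular project), the compiler catches exactly the kind of typo and type mistakes described above:
interface User {
    name: string;
    age: number;
}
function greet(user: User): string {
    return "Hello, " + user.name;
}
greet({ name: "Ada", age: 36 });    // fine
// greet({ nmae: "Ada", age: 36 }); // compile-time error: 'nmae' does not exist in type 'User'
// greet(42);                       // compile-time error: number is not assignable to User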
Quoting the website: Types enable JavaScript developers to use highly-productive development tools and practices like static checking and code refactoring when developing JavaScript applications. And it is just a superset of JS, meaning some of your existing code will work with TS just fine: TypeScript starts from the same syntax and semantics that millions of JavaScript developers know today. Use existing JavaScript code, incorporate popular JavaScript libraries, and call TypeScript code from JavaScript. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/317956",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/227694/"
]
} |
317,977 | I was reading some code here and saw that an enum is used to store names of html tags. Why do we ever need to do this? What benefit do I get using this strategy? I know that how useful enums are in compiled or statically typed languages but when I see enums in dynamically typed languages I get curious, like the example code I showed above. So, the question basically boils down to why do we need enums in dynamically typed language or do we need them at all? | A benefit is that the compiler can let you know if you accidentally type "ADRESS" or "FEILDSET", and letting you fix it immediately instead of behaving in a nonsensical way at runtime. While the benefit is much more useful in statically typed languages than dynamic, it is still useful even if it is a runtime error as you will get message indicating a problem with your case statement rather than your data. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/317977",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/198298/"
]
} |
318,055 | Extracting functionality into methods or functions is a must for code modularity, readability and interoperability, especially in OOP. But this means more functions calls will be made. How does splitting our code into methods or functions actually impact performance in modern* languages? *The most popular ones: C, Java, C++, C#, Python, JavaScript, Ruby... | Maybe. The compiler might decide "hey, this function is only called a few times, and I'm supposed to optimize for speed, so I'll just inline this function". Essentially, the compiler will replace the function call with the body of the function. For example, the source code would look like this. void DoSomething()
{
a = a + 1;
DoSomethingElse(a);
}
void DoSomethingElse(int a)
{
b = a + 3;
} The compiler decides to inline DoSomethingElse , and the code becomes void DoSomething()
{
a = a + 1;
b = a + 3;
} When functions are not inlined, yes there is a performance hit to make a function call. However, it's such a minuscule hit that only extremely high performance code is going to worry about function calls. And on those kinds of projects, the code is typically written in assembly. Function calls (depending on the platform) typically involve a few 10s of instructions, and that's including saving / restoring the stack. Some function calls consist of just a jump and a return instruction. But there are other things that might impact function call performance. The function being called may not be loaded into the processor's cache, causing a cache miss and forcing the memory controller to grab the function from main RAM. This can cause a big hit for performance. In a nutshell: function calls may or may not impact performance. The only way to tell is to profile your code. Don't try to guess where the slow code spots are, because the compiler and hardware have some incredible tricks up their sleeves. Profile the code to get the location of the slow spots. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/318055",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/223143/"
]
} |
318,124 | The following scenario happened to me several times. I programmed an algorithm that solves a certain problem. It works fine and finds the correct solutions. Now, I want to have an option to tell the algorithm "write a full explanation of how you got to the solution". My goal is to be able to use the algorithm in online demonstrations, tutorial classes, etc. I still want to have an option to run the algorithm in real time, without the explanations. What is a good design pattern to use? EXAMPLE: Suppose I implement this method for finding the greatest common divisor . The currently implemented method returns the correct answer, but with no explanations. I want to have an option for the method to explain its actions, like: Initially, a=6 and b=4. The number of 2-factors, d, is initialized to 0.
a and b are both even, so we divide them by 2 and increment d by 1.
Now, a=3 and b=2.
a is odd but b is even, so we divide b by 2.
Now, a=3 and b=1.
a and b are both odd, so we replace a by (a-b)/2 = 1.
Now, a=1 and b=1.
a=b, so the GCD is a*2^d = 2. The output should be returned such that it can be easily displayed both in console and in web-based applications. What is a good pattern to provide explanations when needed, while not hurting the real-time performance of the algorithm when explanations are not needed? | The "pattern" you are looking for is called "logging", just make the logging statements as verbose as you need them. By using a decent logging framework you should be able to switch it on and off at run time, provide different verbosity levels, or tailor the output for different purposes (like web vs. console). Whether this has a noticeable performance impact (even if logging is switched off) will probably depend on the language, the framework and the number of logging statements you need in the specific case. In compiled languages, if this really becomes a problem, you could provide a compiler switch to build a "logging variant" and a "non-logging variant" of your code. However, I heavily recommend against optimizing "just in case", without measuring first.
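As a rough sketch of the idea (assuming Java and the standard java.util.logging API; the class, the messages and the binary-GCD structure are invented for illustration rather than taken from the question's linked method, and positive inputs are assumed):
import java.util.logging.Logger;
public class Gcd {
    private static final Logger LOG = Logger.getLogger(Gcd.class.getName());
    public static int gcd(int a, int b) {
        if (a == 0) return b;
        if (b == 0) return a;
        int d = 0;
        while (a != b) {
            if (a % 2 == 0 && b % 2 == 0) {
                a /= 2; b /= 2; d++;
                LOG.fine("a and b are both even, so we divide them by 2 and increment d by 1.");
            } else if (a % 2 == 0) {
                a /= 2;
                LOG.fine("a is even but b is odd, so we divide a by 2.");
            } else if (b % 2 == 0) {
                b /= 2;
                LOG.fine("a is odd but b is even, so we divide b by 2.");
            } else if (a > b) {
                a = (a - b) / 2;
                LOG.fine("a and b are both odd, so we replace a by (a-b)/2.");
            } else {
                b = (b - a) / 2;
                LOG.fine("a and b are both odd, so we replace b by (b-a)/2.");
            }
            LOG.fine("Now, a=" + a + " and b=" + b + ".");
        }
        return a * (1 << d); // explanations only appear if the logger is set to FINE or lower
    }
}
With the default configuration the FINE messages are simply discarded, so the real-time path stays quiet; lowering the level to FINE on the logger and its handler produces the step-by-step trace from the question.
| {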
"source": [
"https://softwareengineering.stackexchange.com/questions/318124",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/50606/"
]
} |
318,226 | Suppose I have a custom object, Student : public class Student{
public int _id;
public String name;
public int age;
public float score;
} And a class, Window , that is used to show information of a Student : public class Window{
public void showInfo(Student student);
} It looks quite normal, but I found Window is not quite easy to test individually, because it needs a real Student object to call the function. So I try to modify showInfo so that it does not accept a Student object directly: public void showInfo(int _id, String name, int age, float score); so that it is easier to test Window individually: showInfo(123, "abc", 45, 6.7f); But I found the modified version has other problems: Modifying Student (e.g.: adding new properties) requires modifying the method-signature of showInfo If Student had many properties, the method-signature of showInfo would be very long. So, using custom objects as parameter or accept each property in objects as parameter, which one is more maintainable? | Using a custom object to group related parameters is actually a recommended pattern. As a refactoring, it is called Introduce Parameter Object . Your problem lies elsewhere. First, generic Window should know nothing about Student. Instead, you should have some kind of StudentWindow that knows only about displaying Students . Second, there is absolutely no problem about creating a Student instance to test StudentWindow as long as Student doesn't contain any complex logic that would drastically complicate testing of StudentWindow . If it does have that logic then making Student an interface and mocking it should be preferred.
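For illustration only (a sketch assuming JUnit and the StudentWindow class suggested above; what exactly you assert against depends on what showInfo produces):
import org.junit.Test;
public class StudentWindowTest {
    @Test
    public void showsStudentInfo() {
        Student student = new Student(); // a plain data object is cheap to build in a test
        student._id = 123;
        student.name = "abc";
        student.age = 45;
        student.score = 6.7f;
        StudentWindow window = new StudentWindow();
        window.showInfo(student);
        // assert against whatever output StudentWindow exposes (label text, rendered string, ...)
    }
}
| {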
"source": [
"https://softwareengineering.stackexchange.com/questions/318226",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/196142/"
]
} |
318,245 | It seems Java has had the power to declare classes not-derivable for ages, and now C++ has it too. However, in the light of the Open/Closed principle in SOLID, why would that be useful? To me, the final keyword sounds just like friend - it is legal, but if you are using it, most probably the design is wrong. Please provide some examples where a non-derivable class would be a part of a great architecture or design pattern. | final expresses intent . It tells the user of a class, method or variable "This element is not supposed to change, and if you want to change it, you haven't understood the existing design." This is important because program architecture would be really, really hard if you had to anticipate that every class and every method you ever write might be changed to do something completely different by a subclass. It is much better to decide up-front which elements are supposed to be changeable and which aren't, and to enforce the unchangeability via final . You could also do this via comments and architecture documents, but it is always better to let the compiler enforce things that it can than to hope that future users will read and obey the documentation. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/318245",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/54268/"
]
} |
318,395 | My colleagues like to say "logging/caching/etc. is a cross-cutting concern" and then proceed using the corresponding singleton everywhere. Yet they love IoC and DI. Is it really a valid excuse to break the SOLI D principle? | No. SOLID exists as guidelines to account for inevitable change. Are you really never going to change your logging library, or target, or filtering, or formatting, or...? Are you really not going to change your caching library, or target, or strategy, or scoping, or...? Of course you are. At the very least , you're going to want to mock these things in a sane way to isolate them for testing. And if you're going to want to isolate them for testing, you're likely going to run into a business reason where you want to isolate them for real life reasons. And you'll then get the argument that the logger itself will handle the change. "Oh, if the target/filtering/formatting/strategy changes, then we'll just change the config!" That is garbage. Not only do you now have a God Object that handles all of these things, you're writing your code in XML (or similar) where you don't get static analysis, you don't get compile time errors, and you don't really get effective unit tests. Are there cases to violate SOLID guidelines? Absolutely. Sometimes things won't change (without requiring a full rewrite anyways). Sometimes a slight violation of LSP is the cleanest solution. Sometimes making an isolated interface provides no value. But logging and caching (and other ubiquitous cross-cutting concerns) aren't those cases. They're usually great examples of the coupling and design problems you get when you ignore the guidelines. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/318395",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/13154/"
]
} |
318,404 | This programming style document has a general rule, that says : The rules can be violated if there are strong personal objections
against them. This collides with the way I am thinking, and there are many articles saying that coding style is actually important. For example this says: A coding standards document tells developers how they must write their
code. Instead of each developer coding in their own preferred style,
they will write all code to the standards outlined in the document.
This makes sure that a large project is coded in a consistent style —
parts are not written differently by different programmers. Not only
does this solution make the code easier to understand, it also ensures
that any developer who looks at the code will know what to expect
throughout the entire application. So, am I misunderstanding something from this document and the quote at the top of this question? Can people really just ignore coding style? Maybe I wasn't clear enough, so with this edit, I am going to clarify a bit. I am writing the coding style document for our team, and I want to check the style using some static analyzers. If it fails, Jenkins will send emails. And I want to fail the code review, if the style doesn't match. This clearly collides with the first quote. But then, if the quote is right, what is the use of the coding style document, if anyone can do whatever they want? | Allowing people to ignore coding styles because of personal preference is a bad idea. The quote in your question seems to allow any developer to simply say "I'm not going to use this style because I don't like it." This goes against the whole point, which is getting everyone on the team to do things the same way for consistency and readability. I agree that a style document with such a statement is probably pointless. Nevertheless, some flexibility in adhering to style guidelines is advisable: Slavishly following a guideline might prevent the best way of writing a particular piece of code . Developers should be able to ignore the guideline, and make a case that what they have done is the best, most readable way to accomplish something in this case. Working with legacy code may require flexibility. It is probably not a good use of your time to restyle a large existing code base. If you rewrite a particular section significantly, you may reformat it into the preferred style. However, if you just make a small change, it may be better to use the code's existing style. Nitpicking about every small violation of the style guide is not a good use of time. The code review is a good time to highlight any code that is significantly out of line with the team's style. However, catching and fixing small style "mistakes" is likely to become busy work that focuses on the wrong thing. Sure, fail the code review if the style was blatantly ignored. But I don't like the idea of running an analyzer and pointing out every misplaced parenthesis or indentation, never mind failing someone on this basis. In conclusion: allow flexibility in adhering to style guidelines, in order to best meet the needs of your team--but not for arbitrary reasons like personal preference. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/318404",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/20065/"
]
} |
318,634 | I've read some answers to questions along a similar line such as "How do you keep your unit tests working when refactoring?". In my case the scenario is slightly different in that
I've been given a project to review and bring in line with some standards we have, currently there are no tests at all for the project! I've identified a number of things I think could have been done better such as NOT mixing DAO type code in a service layer. Before refactoring it seemed like a good idea to write tests for the existing code. The problem it appears to me is that when I do refactor then those tests will break as I'm changing
where certain logic is done and the tests will be written with the previous structure in mind (mocked dependencies etc.) In my case, what would be the best way to procede? I'm tempted to write the tests around the refactored code but I'm aware there is a risk I may refactor things incorrectly that could
change the desired behaviour. Whether this is a refactor or a redesign I'm happy for my understanding of those terms to be corrected, currently I'm working on the following definition for refactoring "With refactoring, by definition, you don't change what your software does, you change how it does it.". So I'm not changing what the software does I'd be changing how/where it does it. Equally I can see the argument that if I'm changing the signature of methods that could be considered a redesign. Here's a brief example MyDocumentService.java (current) public class MyDocumentService {
...
public List<Document> findAllDocuments() {
DataResultSet rs = documentDAO.findAllDocuments();
List<Document> documents = new ArrayList<>();
for(DataObject do: rs.getRows()) {
//get row data create new document add it to
//documents list
}
return documents;
}
} MyDocumentService.java (refactored/redesigned whatever) public class MyDocumentService {
...
public List<Document> findAllDocuments() {
//Code dealing with DataResultSet moved back up to DAO
//DAO now returns a List<Document> instead of a DataResultSet
return documentDAO.findAllDocuments();
}
} | You're looking for tests that check for regressions , i.e. breaking some existing behaviour. I would start by identifying at what level that behaviour will remain the same, and that the interface driving that behaviour will remain the same, and start putting in tests at that point. You now have some tests that will assert that whatever you do below this level, your behaviour remains the same. You're quite right to question how the tests and code can remain in sync. If your interface to a component remains the same, then you can write a test around this and assert the same conditions for both implementations (as you create the new implementation). If it doesn't, then you have to accept that a test for a redundant component is a redundant test.
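As a sketch of such a pinning test (assuming JUnit; the document count, the id value and the getId() accessor are placeholders for whatever the current implementation actually returns against a known test DAO or data set):
import static org.junit.Assert.assertEquals;
import java.util.List;
import org.junit.Test;
public class MyDocumentServiceRegressionTest {
    @Test
    public void findAllDocumentsKeepsReturningTheSameDocuments() {
        MyDocumentService service = new MyDocumentService(); // wired up with a known, fixed data set
        List<Document> documents = service.findAllDocuments();
        // record the behaviour observed before the refactoring and fail if it ever changes
        assertEquals(3, documents.size());
        assertEquals("DOC-001", documents.get(0).getId());
    }
}
| {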
"source": [
"https://softwareengineering.stackexchange.com/questions/318634",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/37655/"
]
} |
319,304 | While I was reading this question , the top voted answer quoted Uncle Bob on coding standards , but I was confused by this tip: Don't write them down if you can avoid it. Rather, let the code be the way the standards are captured. This bounced in my brain, but I couldn't find a place to stick. If a new person joins the team, or coding standards change, couldn't there be confusion of information? Why shouldn't I write down a coding standard? | There are a few reasons. Nobody reads documentation. Nobody follows the documentation even if they do read it. Nobody updates the documentation even if they do read it and follow it. Writing a list of practices is much less effective than creating a culture. Coding standards are not about what people should do, but are about what they actually do. When people deviate from the standards, this should be picked up and changed through a code review process and/or automated tools. Remember, the whole point of coding standards is to make our lives easier. They're a shortcut for our brain so that we can filter out the necessary stuff from the important stuff. It's much better to create a culture of review to enforce this than it is to formalise it in a document. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/319304",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/229531/"
]
} |
319,376 | Which is generally accepted practice between these two cases: function insertIntoDatabase(Account account, Otherthing thing) {
database.insertMethod(account.getId(), thing.getId(), thing.getSomeValue());
} or function insertIntoDatabase(long accountId, long thingId, double someValue) {
database.insertMethod(accountId, thingId, someValue);
} In other words is it generally better to pass entire objects around or just the fields you need? | Neither is generally better than the other. It's a judgment call you have to make on a case-by-case basis. But in practice, when you're in a position that you can actually make this decision, it's because you get to decide which layer in the overall program architecture should be breaking the object up into primitives, so you should be thinking about the whole call stack , not just this one method you're currently in. Presumably the breaking up has to be done somewhere, and it wouldn't make sense (or it'd be needlessly error-prone) to do it more than once. The question is where that one place should be. The easiest way to make this decision is to think about what code should or should not have to be altered if the object gets changed . Let's expand your example slightly: function addWidgetButtonClicked(clickEvent) {
// get form data
// get user's account
insertIntoDatabase(account, data);
}
function insertIntoDatabase(Account account, Otherthing data) {
// open database connection
// check data doesn't already exist
database.insertMethod(account.getId(), data.getId(), data.getSomeValue());
} vs function addWidgetButtonClicked(clickEvent) {
// get form data
// get user's account
insertIntoDatabase(account.getId(), data.getId(), data.getSomeValue());
}
function insertIntoDatabase(long accountId, long dataId, double someValue) {
// open database connection
// check data doesn't already exist
database.insertMethod(accountId, dataId, someValue);
} In the first version, the UI code is blindly passing the data object and it's up to the database code to extract the useful fields from it. In the second version, the UI code is breaking up the data object into its useful fields, and the database code receives them directly without knowing where they came from. The key implication is that, if the structure of the data object were to change in some way, the first version would require only the database code to change, while the second version would require only the UI code to change . Which of those two is correct depends largely on what kind of data the data object contains, but it's usually very obvious. For example, if data is a user-provided string like "20/05/1999", it should be up to the UI code to convert that to a proper Date type before passing it on. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/319376",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/222021/"
]
} |
319,383 | I just ran across this old question asking what's so evil about global state, and the top-voted, accepted answer asserts that you can't trust any code that works with global variables, because some other code somewhere else might come along and modify its value and then you don't know what the behavior of your code will be because the data is different! But when I look at that, I can't help but think that that's a really weak explanation, because how is that any different from working with data stored in a database? When your program is working with data from a database, you don't care if other code in your system is changing it, or even if an entirely different program is changing it, for that matter. You don't care what the data is; that's the entire point. All that matters is that your code deals correctly with the data that it encounters. (Obviously I'm glossing over the often-thorny issue of caching here, but let's ignore that for the moment.) But if the data you're working with is coming from an external source that your code has no control over, such as a database (or user input, or a network socket, or a file, etc...) and there's nothing wrong with that, then how is global data within the code itself--which your program has a much greater degree of control over--somehow a bad thing when it's obviously far less bad than perfectly normal stuff that no one sees as a problem? | First, I'd say that the answer you link to overstates that particular issue and that the primary evil of global state is that it introduces coupling in unpredictable ways that can make it difficult to change the behaviour of your system in future. But delving into this issue further, there are differences between global state in a typical object-oriented application and the state that is held in a database. Briefly, the most important of these are: Object-oriented systems allow replacing an object with a different class of object, as long as it is a subtype of the original type. This allows behaviour to be changed, not just data . Global state in an application does not typically provide the strong consistency guarantees that a database does -- there are no transactions during which you see a consistent state for it, no atomic updates, etc. Additionally, we can see database state as a necessary evil; it is impossible to eliminate it from our systems. Global state, however, is unnecessary. We can entirely eliminate it. So even were the issues with a database just as bad, we can still eliminate some of the potential problems and a partial solution is better than no solution. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/319383",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/935/"
]
} |
319,407 | I use C and struct s where a struct can have members but not functions. Assume for simplicity that I want to create a struct for strings that I name str and I want to be able to do str.replace(int i, char c) where i is the index of the string and c is the character to replace the character at position i . Would this never be possible since structs can't have functions or is there still some way we can implement this behavior and mimic that a struct could have a (simple) function that actually only is the struct copying itself to a new struct and updating its fields, which it could do? So replace could be a third member of the struct that points to a new struct that is updated when it is accessed or similar. Could it be done? Or is there something builtin or some theory or paradigm that prevents my intention? The background is that I'm writing C code and I find myself reinventing functions that I know are library builtins in OOP languages and that OOP would be a good way to manipulate strings and commands. | Your function should look like this. void
replace(struct string * s, int i, char c); This accepts a pointer to the object to operate on as the first parameter. In C++, this is known as the this -pointer and need not be declared explicitly. (Contrast this to Python where it has to.) In order to call your function, you would also pass that pointer explicitly. Basically, you trade the o.f(…) syntax for the f(&o, …) syntax. Not a big deal. The story becomes more involved if you want to support polymorphism (aka virtual functions). It can also be emulated in C (I've shown it for this answer .) but it ain't pretty to do by hand. As Jan Hudec has commented, you should also make it a habit to prefix the function name with the type name (ie string_replace ) because C has no name-spaces so there can only be a single function named replace . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/319407",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12893/"
]
} |
319,621 | In our company we have a small program (.exe 500Kb size) that does mathematical calculation and in the end it spits out the result on a Excel spreadsheet that we use to continue our workflow. I want to modify the columns, spacing format and add VBA logic etc. on the Excel spreadsheet but since this parameters are not configurable in that program, it seems to me the only way to modify it is to break down/reverse engineer the .exe Nobody knows in what language it was programmed in, the only thing we know is: Developed 20+ years ago Developer retired 10 years ago GUI Application Runs standalone Size 500Kb Any suggestions what options I have to deal with such kind of problems? Is reverse engineering the only option, or is there a better approach? | Reverse engineering can become very hard, even more if you do not just want to understand the program's logic, but change and recompile it. So first thing I would try is to look for a different solution. I want to modify the columns, spacing format and add VBA logic etc. on the Excel spreadsheet If that is the only thing you want, and the calculation done by the program is fine, why not write a program in the language of your choice (maybe an Excel macro) which calls your legacy "exe", takes the output and processes it further. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/319621",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/230056/"
]
} |
319,846 | Does it make sense to talk about "agile development" or claiming that you are applying an "agile methodology" if the code base you are working on has 0% unit test coverage? (And you, as a team, are not doing anything about it). To make it clear: to me, it doesn't make sense. In my personal experience I found that unit tests are the only tool that allows you to really be "agile" (i.e. respond to changes, improve your design, share knowledge, etc...) and TDD is the only practice that takes you there. Maybe there are some other ways, but I still cannot see how they can possibly work. | To be pedantic, nothing in the Agile Manifesto or the Scrum Guide make any reference to technical practices, like unit testing or TDD, at all. So, yes, in theory you could deliver early and often with a focus on collaboration and value without them and call yourself Agile, you might even actually have agility . In practice however, it's nearly impossible to consistently deliver value ( into production ) every few weeks without a good test suite. This includes integration tests as well as unit tests. Unit tests only go so far. There's a reason it's s pyramid and not a rectangle after all. Without the tests as a safety net, you'll either introduce lots of regression bugs in each release, or be terrified of refactoring. Both will greatly impact your ability to continue on at a sustainable pace. If you can't sustain your pace or change course (redesign) when required, then you don't have agility. Agility, after all, is the goal we're striving for. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/319846",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/191940/"
]
} |
320,247 | I hate referencing paywalled content, but this video shows exactly what I'm talking about. Precisely 12 minutes in Robert Martin looks at this: And says "One of my favorite things to do is getting rid of useless braces" as he turns it into this: A long time ago, in an education far far away, I was taught not to do this because it makes it easy to introduce a bug by adding another indented line thinking it's controlled by the if when it's not. To be fair to Uncle Bob, he's refactoring a long Java method down to tiny little functions that, I agree, are far more readable. When he's done changing it, (22.18) it looks like this: I'm wondering if that is supposed to validate removing the braces. I'm already familiar with the best practice . Can Uncle Bob be challenged on this point? Has Uncle Bob defended the idea? | Readability is no small thing. I'm of a mixed mind when it comes to braces that enclose a single method. I personally remove them for things like single-line return statements, but leaving such braces out did in fact bite us very hard at the last place where I worked. Someone added a line of code to an if statement without also adding the necessary braces, and because it was C, it crashed the system without warning. I never challenge anyone who is religious about always using braces after that little fiasco. So I see the benefit of readability, but I am keenly aware of the problems that can arise when you leave those braces out. I wouldn't bother trying to find a study or someone's published opinion. Everybody has one (an opinion, that is), and because it's a stylistic issue, one opinion is just about as good as any other. Think about the issue yourself, evaluate the pros and cons, and make up your own damned mind. If the shop you work for has a coding standard that covers this issue, just follow that. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/320247",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/131624/"
]
} |
321,391 | Here's a skeleton of a class I built that loops through and deduplicates data - it's in C# but the principles of the question aren't language specific. public static void DedupeFile(FileContents fc)
{
BuildNameKeys(fc);
SetExactDuplicates(fc);
FuzzyMatching(fc);
}
// algorithm to calculate fuzzy similarity between surname strings
public static bool SurnameMatch(string surname1, string surname2)
// algorithm to calculate fuzzy similarity between forename strings
public static bool ForenameMatch(string forename1, string forename2)
// algorithm to calculate fuzzy similarity between title strings
public static bool TitleMatch(string title1, string title2)
// used by above fn to recognise that "Mr" isn't the same as "Ms" etc
public static bool MrAndMrs(string title1, string title2)
// gives each row a unique key based on name
public static void BuildNameKeys(FileContents fc)
// loops round data to find exact duplicates
public static void SetExactDuplicates(FileContents fc)
// threads looping round file to find fuzzy duplicates
public static void FuzzyMatching(FileContents fc, int maxParallels = 32) Now, in actual usage only the first function actually needs to be public. All the rest are only used inside this class and nowhere else. Strictly that means they should of course be private. However, I've left them public for ease of unit testing. Some people will no doubt tell me I should be testing them via the public interface but that's partly why I picked this class: it's an excellent example of where that approach gets awkward. The fuzzy matching functions are great candidates for unit tests, and yet a test on that single "public" function would be near-useless. This class won't ever get used outside a small team at this office, and I don't believe that the structural understanding imparted by making the other methods private is worth the extra faff of packing my tests with code to access private methods directly. Is this "all public" approach reasonable for classes in internal software? Or is there a better approach? I am aware there is already a question on How do you unit test private methods? , but this question is about whether there are scenarios where it's worthwhile bypassing those techniques in favour of simply leaving methods public. EDIT: For those interested, I added the full code on CodeReviewSE as restructuring this class seemed too good a learning opportunity to miss. | I've left them public for ease of unit testing Ease of writing those tests, maybe. But you are then tightly coupling that class to a bunch of tests that interact with its inner workings. This results in brittle tests: they will likely break as soon as you make changes to the code. This creates a real maintenance headache, that often results in people simply deleting the tests as they become more trouble than they are worth. Remember, unit testing doesn't mean "testing the smallest possible piece of code". It's a test of a functional unit, ie for a set of inputs into part of the system, I expect these results. That unit might be a static method, a class, or a bunch of classes within an assembly. By targeting public APIs only, you mimic the behaviour of the system and so your tests become both less coupled and more robust. So make the methods private, mock the "whole 'FileContents' DTO" and only test the one true public method. It'll be more work initially, but over time, you'll reap the benefits of creating useful tests like this. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/321391",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/22742/"
]
} |
321,398 | Recently I have come across with another way of writing a function expression, like this: setTimeout(() =>{/*code*/},1000); as opposed to: setTimeout(function(){/*code*/},1000); What is the benefit of using () = > rather than function(){} Can I use it all the time or only in the certain situations? Thanks. | I've left them public for ease of unit testing Ease of writing those tests, maybe. But you are then tightly coupling that class to a bunch of tests that interact with its inner workings. This results in brittle tests: they will likely break as soon as you make changes to the code. This creates a real maintenance headache, that often results in people simply deleting the tests as they become more trouble than they are worth. Remember, unit testing doesn't mean "testing the smallest possible piece of code". It's a test of a functional unit, ie for a set of inputs into part of the system, I expect these results. That unit might be a static method, a class, or a bunch of classes within an assembly. By targeting public APIs only, you mimic the behaviour of the system and so your tests become both less coupled and more robust. So make the methods private, mock the "whole 'FileContents' DTO" and only test the one true public method. It'll be more work initially, but over time, you'll reap the benefits of creating useful tests like this. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/321398",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/232219/"
]
} |
321,399 | Let's say I have a function (written in Ruby, but should be understandable by everyone): def am_I_old_enough?(name = 'filip')
person = Person::API.new(name)
if person.male?
return person.age > 21
else
return person.age > 18
end
end In unit testing I would create four tests to cover all scenarios. Each will use mocked Person::API object with stubbed methods male? and age . Now it comes to writing integration tests. I assume that Person::API should not be mocked any more. So I would create exactly the same four test cases, but without mocking Person::API object. Is that correct? If yes, then what's the point of writing unit tests at all, if I could just write integration tests which give me more confidence (as I work on real objects, not stubs or mocks)? | No, integration tests should not just duplicate the coverage of unit tests. They may duplicate some coverage, but that's not the point. The point of a unit test is to ensure that a specific small bit of functionality works exactly and completely as intended. A unit test for am_i_old_enough would test data with different ages, certainly the ones near the threshold, possibly all occurring human ages. After you've written this test, the integrity of am_i_old_enough should never be in question again. The point of an integration test is to verify that the entire system, or a combination of a substantial number of components does the right thing when used together . The customer doesn't care about a particular utility function you wrote, they care that their web app is properly secured against access by minors, because otherwise the regulators will have their asses. Checking the user's age is one small part of that functionality, but the integration test doesn't check whether your utility function uses the correct threshold value. It tests whether the caller makes the right decision based on that threshold, whether the utility function is called at all, whether other conditions for access are satisfied, etc. The reason we need both types of tests is basically that there is a combinatorial explosion of possible scenarios for the path through a code base that execution may take. If the utility function has about 100 possible inputs, and there are hundreds of utility functions, then checking that the right thing happens in all cases would require many, many millions of test cases. By simply checking all cases in very small scopes and then checking common, relevant or probable combinations f these scopes, while assuming that these small scopes are already correct, as demonstrated by unit testing , we can get a pretty confident assessment that the system is doing what it should, without drowning in alternative scenarios to test. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/321399",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/153782/"
]
} |
321,463 | I am a high school student working on a C# project with a friend of mine with about the same skill level as me. So far, we have written roughly 3,000 lines of code and 250 lines of test code in a span of 100 commits. Due to school, I put off the project for a few months and recently I was able to pick it back up again. By the time I had picked it back up, I understood that the code I had written was poorly designed, such as inclusion of excessive threads in the renderer, poor prevention of race conditions in the interaction between the emulated CPU, GPU, and game cartridge, as well as code that is simply redundant and confusing. The problem is that I have not finished even the main functionality of my program, so I cannot truly refactor, and I feel discouraged to go on perfectly aware that my code's design is flawed. At the same time, I do not want to abandon the project; a substantial amount of progress has been made and the work must not go to waste. I feel as if I have a couple of choices: to simply finish the functionality by working around the poor design and then refactoring once everything is working, halting everything and working toward untangling everything before the rest of the functionality can be finished, starting the project all over so that it is fresh in my mind again, or downright abandoning the project due to its excessive size (essentially "back to the drawing board"). Based on others' experience in such projects, what is the recourse in getting myself back on the right track? From answers on this site, the general consensus is that rewriting is generally unnecessary but an available option if the code cannot be maintained without excessive cost. I genuinely would like to pursue this project, but as it stands, my code is not designed well enough for me to continue, and a feeling of despondency detracts me from continuing. | If I were in your shoes, I would probably try it this way: first, finish the current project - at least partially - as soon as possible, but in a working state . Probably you need to reduce your original goals, think about the minimum functionality you really need to see in "version 1.0". then, and only then think about a rewrite from scratch (lets call this "version 2.0"). Maybe you can reuse some of the code from V1.0. Maybe after sleeping again over the situation you come to the decision you could refactor V1.0 and save most of it. But don't make this decision before you do not have a "proof of concept V1.0" at hand. A working program "1.0" is something you can show to others, even when the code is bad (which noone else will bother about, except yourself). If in the middle of creating V2.0 you realise you are running out of time, you still have V1.0 as a partially success, which will much better for your morale. However, if you do not finish V1.0 first, there is a big chance you will never complete V2.0 because when you halfway through, there will be a point where you are unsatisfied with the design again, and then? Will you abandon V2.0 again and work on V3.0? There is a high risk of running into this never ending circle, never coming to an end. Better take this as an opportunity to learn how to achieve intermediate goals, instead of an opportunity to learn how to leave projects in an unfinished state behind. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/321463",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/151056/"
]
} |
321,547 | I am considering learning C. But why do people use C (or C++) if it can be used 'dangerously'? By dangerous, I mean with pointers and other similar stuff. Like the Stack Overflow question Why is the gets function so dangerous that it should not be used? . Why do programmers not just use Java or Python or another compiled language like Visual Basic? | C predates many of the other languages you're thinking of. A lot of what we now know about how to make programming "safer" comes from experience with languages like C. Many of the safer languages that have come out since C rely on a larger runtime, a more complicated feature set and/or a virtual machine to achieve their goals. As a result, C has remained something of a "lowest common denominator" among all the popular/mainstream languages. C is a much easier language to implement because it's relatively small, and more likely to perform adequately in even the weakest environment, so many embedded systems that need to develop their own compilers and other tools are more likely to be able to provide a functional compiler for C. Because C is so small and so simple, other programming languages tend to communicate with each other using a C-like API. This is likely the main reason why C will never truly die, even if most of us only ever interact with it through wrappers. Many of the "safer" languages that try to improve on C and C++ are not trying to be "systems languages" that give you almost total control over the memory usage and runtime behavior of your program. While it's true that more and more applications these days simply do not need that level of control, there will always be a small handful of cases where it is necessary (particularly inside the virtual machines and browsers that implement all these nice, safe languages for the rest of us). Today, there are a few systems programming languages (Rust, Nim, D, ...) which are safer than C or C++. They have the benefits of hindsight, and realize that most of the times, such fine control is not needed, so offer a generally safe interface with a few unsafe hooks/modes one can switch to when really necessary. Even within C, we've learned a lot of rules and guidelines that tend to drastically reduce the number of insidious bugs that show up in practice. It's generally impossible to get the standard to enforce these rules retroactively because that would break too much existing code, but it is common to use compiler warnings, linters and other static analysis tools to detect these sorts of easily preventable issues. The subset of C programs that pass these tools with flying colors is already far safer than "just C", and any competent C programmer these days will be using some of them. Also, you'll never make an obfuscated Java contest as entertaining as the obfuscated C contest . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/321547",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/232425/"
]
} |
321,679 | I get confused over min and max functions, in certain contexts. In one context, when you're using the functions to take the greater or lesser of two values, there is no issue. For example, //how many autographed CD's can I give out?
int howManyAutographs(int CDs, int Cases, int Pens)
{
//if no pens, then I cannot sign any autographs
if (Pens == 0)
return 0;
//I cannot give away a CD without a case or a case without a CD
return min(CDs, Cases);
} Easy. But in another context, I get confused. If I'm trying to set a maximum or minimum, I get it backwards. //return the sum, with a maximum of 255
int cappedSumWRONG(int x, int y)
{
return max(x + y, 255); //nope, this is wrong
}
//return the sum, with a maximum of 255
int cappedSumCORRECT(int x, int y)
{
return min(x + y, 255); //much better, but counter-intuitive to my mind
} Is it inadvisable to make my own functions as follows? //return x, with a maximum of max
int maximize(int x, int max)
{
return min(x, max);
}
//return x, with a minimum of min
int minimize(int x, int min)
{
return max(x, min);
} Obviously, using the builtins will be faster but this seems like a needless microoptimization to me. Is there any other reason this would be inadvisable? What about in a group project? | As others have already mentioned: don't create a function with a name that is similar to that of a builtin, standard-library or generally widely used function but change its behavior. It is possible to get used to a naming convention even if it doesn't make much sense to you at first sight but it will be impossible to reason about the functioning of your code once you introduce those other functions that do the same thing but have their names swapped. Instead of “overloading” the names used by the standard library, use new names that convey precisely what you mean. In your case, you're not really interested in a “minimum”. Rather, you want to cap a value. Mathematically, this is the same operation but semantically, it is not quite. So why not just a function int cap(int value, int limit) { return (value > limit) ? limit : value; } that does what is needed and tells so from its name. (You could also implement cap in terms of min as shown in timster 's answer ). Another frequently used function name is clamp . It takes three arguments and “clamps” a provided value into the interval defined by the other two values. int clamp(int value, int lower, int upper) {
assert(lower <= upper); // precondition check
if (value < lower) return lower;
else if (value > upper) return upper;
else return value;
} If you're using such a generally known function name, any new person joining your team (including the future you coming back to the code after a while) will quickly understand what is going on instead of cursing you for having confused them by breaking their expectations about function names they thought they knew. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/321679",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/221352/"
]
} |
321,805 | For example, I have a game, which has some tools to increase the ability of the Player: Tool.h
class Tool{
public:
std::string name;
}; And some tools: Sword.h class Sword : public Tool{
public:
Sword(){
this->name="Sword";
}
int attack;
}; Shield.h class Shield : public Tool{
public:
Shield(){
this->name="Shield";
}
int defense;
}; MagicCloth.h class MagicCloth : public Tool{
public:
MagicCloth(){
this->name="MagicCloth";
}
int attack;
int defense;
}; And then a player may hold some tools for attack: class Player{
public:
int attack;
int defense;
vector<Tool*> tools;
void performAttack(){
//original attack and defense
int currentAttack=this->attack;
int currentDefense=this->defense;
//calculate attack and defense affected by tools
for(Tool* tool : tools){
if(tool->name=="Sword"){
Sword* sword=(Sword*)tool;
currentAttack+=sword->attack;
}else if(tool->name=="Shield"){
Shield* shield=(Shield*)tool;
currentDefense+=shield->defense;
}else if(tool->name=="MagicCloth"){
MagicCloth* magicCloth=(MagicCloth*)tool;
currentAttack+=magicCloth->attack;
currentDefense+=magicCloth->defense;
}
}
//some other functions to start attack
}
}; I think it is difficult to replace if-else with virtual methods in the tools, because each tool has different properties, and each tool affects the player's attack and defense, for which the update of player attack and defense needs to be done inside the Player object. But I was not satisfied with this design, since it contains downcasting, with a long if-else statement. Does this design need to be "corrected"? If so, what can I do to correct it? | Yes, it is a code smell (in lots of cases). I think it is difficult to replace if-else with virtual methods in tools In your example, it is quite simple to replace the if/else by virtual methods: class Tool{
public:
virtual int GetAttack() const=0;
virtual int GetDefense() const=0;
};
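// (illustrative sketch, not part of the original answer: the other tools
// override the same two virtuals, e.g. a Shield that only adds defense)
class Shield : public Tool{
public:
virtual int GetAttack() const {return 0;}
virtual int GetDefense() const {return defense;}
};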
class Sword : public Tool{
// ...
public:
virtual int GetAttack() const {return attack;}
virtual int GetDefense() const{return 0;}
}; Now there is no need any more for your if block, the caller can just use it like currentAttack+=tool->GetAttack();
currentDefense+=tool->GetDefense(); Of course, for more complicated situations, such a solution is not always so obvious (but nevertheless almost always possible). But if you come to a situation where you do not know how to resolve the case with virtual methods, you can ask a new question again here on "Programmers" (or, if it becomes language or implementation specific, on Stackoverflow). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/321805",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/196142/"
]
} |
321,822 | Here are two links which briefly describe the difference between the two: stateless , stateful In short in the "Stateless" scenario we bind views directly to models, view models just expose the whole objects, not their properties, so we don't need any synchronization between models and view models. In the "Stateful" scenario we make a copy of a model object and bind it to a view. Are there any downsides in the "Stateless" scenario? Because it seems like it's a way to go by default. What stops us from implementing INotifyDataErrorInfo, INPC and all the stuff at the level of models? | Yes, it is a code smell (in lots of cases). I think it is difficult to replace if-else with virtual methods in tools In your example, it is quite simple to replace the if/else by virtual methods: class Tool{
public:
virtual int GetAttack() const=0;
virtual int GetDefense() const=0;
};
class Sword : public Tool{
// ...
public:
virtual int GetAttack() const {return attack;}
virtual int GetDefense() const{return 0;}
}; Now there is no need any more for your if block, the caller can just use it like currentAttack+=tool->GetAttack();
currentDefense+=tool->GetDefense(); Of course, for more complicated situations, such a solution is not always so obvious (but nevertheless almost always possible). But if you come to a situation where you do not know how to resolve the case with virtual methods, you can ask a new question again here on "Programmers" (or, if it becomes language or implementation specific, on Stackoverflow). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/321822",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/84754/"
]
} |
321,833 | A question that gets asked a lot is "Why use low level languages if you can code in high level languages more easily (and often tersely)?". I think the answers are fairly straight forward here, being mainly efficiency concerns. However, I pose "Why do we use high level languages in the first place?". Besides the fact that a higher level language is easier to code in and therefore less error prone, I would love to hear some opinions on why we use high level languages. Consider especially an example of someone who is being paid to both learn a language and then develop something in it. Here they would become equally proficient in whichever language chosen (say C vs. Python). As such, why would I not favor the efficiency and power of C in said example? | "Besides the fact that a higher level language is easier to code in and therefore less error prone" I really think this is a good enough reason all by itself. If you have no compelling reason to work in a low level of abstraction (such as performance, knowledge in the team, etc), then there is no reason to do it. If all you want is a coffee, then you want to tell the barista "I want a coffee", not "I want you to take three steps to the left, stretch out your arms, pick up the beans, put the in the grinder, push the button to grind them [...]" and so on. It wouldn't make the final product any better (in fact, in some cases it'd make it worse since the barista is probably way better than you at making coffee). High-level languages encourage you to think more about the problem domain and less about the execution platform. There is less ceremony, so you can spend more time on stuff that actually brings you value. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/321833",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/232853/"
]
} |
321,861 | Why do people prefer var is None over var == None when is can be used on few objects only? | The authoritative reason is "Because PEP-8 says so": Comparisons to singletons like None should always be done with is or is not , never the equality operators. Note: It's not merely "better practice", as equality vs. identity are distinct semantical constructs. In python , testing implicit 'truthiness' is generally preferred over comparison to an explicit value. Choose if somevar:
... over if somevar is True:
... over if somevar == True:
... Advantages: emphasizing the explicit desire to compare identity to a builtin; inability to break the comparison by defining __eq__ on arbitrary objects | {
"source": [
"https://softwareengineering.stackexchange.com/questions/321861",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/232883/"
]
} |
321,873 | I am planning on converting an old project originally from Objective-C to Swift. The project is currently under the MIT License, and any distributions of it must maintain the same MIT License. However, as far as I know, this only applies if the current code is used, but after I convert it to Swift, the whole syntax would change...Does that mean that I'm free on implementing any other license? | The authoritative reason is "Because PEP-8 says so": Comparisons to singletons like None should always be done with is or is not , never the equality operators. Note: It's not merely "better practice", as equality vs. identity are distinct semantical constructs. In python , testing implicit 'truthiness' is generally preferred over comparison to an explicit value. Choose if somevar:
... over if somevar is True:
... over if somevar == True:
... Advantages: emphasizing the explicit desire to compare identity to a builtin; inability to break the comparison by defining __eq__ on arbitrary objects | {
"source": [
"https://softwareengineering.stackexchange.com/questions/321873",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/232896/"
]
} |
322,170 | I've been working on a project for a couple years now, and I'm starting to gather a decent user base. I've created a project page with some basic documentation, but it's really not much more than a FAQ at this point. I know that I need to improve it so that it's more informative for both new and power users, and that's next on my to-do list for the next release. However, the next release has features that the user base is anxious to get. I'm prepared to release it right now, it's packaged and ready to go. I just need to deploy it to the appropriate distribution services. To the point. The features are important to my users, but the documentation is important to me. Should I wait to release until after I rewrite the documentation? My current user base is savvy enough to understand how to use the new features, so that's not what I'm worried about. It may take a couple weeks to finish the docs, as I have limited free time to work on this project, but the community would roast me on a spit if I made them wait any longer. Is the customer right in this scenario? Should a fantastic, straight-forward feature for existing users take priority over robust documentation for new users? Update: Wow, so many great, high-quality responses! You've really helped me get a better understanding of how I should be interacting and supporting both the project and its users. Thanks a million! | Simple: Release a beta version! Then when documentation is done, do final release of the new version. If you have users willing to try out the new stuff, then by all means take advantage of that. You will get bug reports, you probably get community questions about the difficult points so you know where to concentrate on the documentation, etc. You may also want to tweak some things based on user feedback, which may affect documentation. Basically, everybody wins. One reason to not do early release is, if you think your users will not be receptive of a "beta version", then you should think twice about doing it, but going by what you write, sounds like they would be happy about it. Another reason would be, if there are technical difficulties about doing a beta release using whatever release channels you use. Then it might be more hassle than it's worth to do separate beta and final releases. If you think your software is complete, then in this case I'd lean on early release, update documentation when it's done. Otherwise there's the risk that the documentation gets delayed, and then the whole release gets delayed or you end up releasing without final documentation anyway, so just do it now. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/322170",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/185519/"
]
} |
322,256 | I'm a solo developer with a pretty time-constrained work environment where development time ranges usually from 1-4 weeks per project, depending on either requirements, urgency, or both. At any given time I handle around 3-4 projects, some having timelines that overlap with each other. Expectedly, code quality suffers. I also do not have formal testing; it usually goes down to walking through the system until it somewhat breaks. As a result, a considerable amount of bugs escape to production, which I have to fix and in turn sets back my other projects. This is where unit testing comes in. When done right, it should keep bugs, let alone those that escape to production, to a minimum. On the other hand, writing tests can take a considerable amount of time, which doesn't sound good with time-constrained projects such as mine. Question is, how much of a time difference would writing unit-tested code over untested code, and how does that time difference scale as project scope widens? | The later you test, the more it costs to write tests. The longer a bug lives, the more expensive it is to fix. The law of diminishing returns ensures you can test yourself into oblivion trying to ensure there are no bugs. Buddha taught the wisdom of the middle path. Tests are good. There is such a thing as too much of a good thing. The key is being able to tell when you are out of balance. Every line of code you write without tests will have significantly greater costs to adding tests later than if you had written the tests before writing the code. Every line of code without tests will be significantly more difficult to debug or rewrite. Every test you write will take time. Every bug will take time to fix. The faithful will tell you not to write a single line of code without first writing a failing test. The test ensures you're getting the behavior you expect. It allows you to change the code quickly without worrying about affecting the rest of the system since the test proves the behavior is the same. You must weigh all that against the fact that tests don't add features. Production code adds features. And features are what pay the bills. Pragmatically speaking, I add all the tests I can get away with. I ignore comments in favor of watching tests. I don't even trust code to do what I think it does. I trust tests. But I've been known to throw the occasional hail mary and get lucky. However, many successful coders don't do TDD. That doesn't mean they don't test. They just don't obsessively insist that every line of code have an automated test against it. Even Uncle Bob admits he doesn't test his UI. He also insists you move all logic out of the UI . As a football metaphor (that's American football) TDD is a good ground game. Manual only testing where you write a pile of code and hope it works is a passing game. You can be good at either. Your career isn't going to make the playoffs unless you can do both. It won't make the superbowl until you learn when to pick each one. But if you need a nudge in a particular direction: the officials calls go against me more often when I'm passing. If you want to give TDD a try I highly recommend you practice before trying to do it at work. TDD done half way, half hearted, and half assed is a big reason some don't respect it. It's like pouring one glass of water into another. If you don't commit and do it quickly and completely you end up dribbling water all over the table. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/322256",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/208954/"
]
} |
322,271 | Not so long ago I talked to my colleague and he was definitely against using bit masks because it is hard to understand all the values that are stored in the database. In my opinion it is not always a bad idea to use them, for example to determine the roles of the current user. Otherwise you need to store it in a separate table, which will cause one more JOIN.
Can you please tell me if I am wrong? Any other side-effects, advantages/disadvantages of using bit masks? | I work with an application that uses bitmasks to store user role assignments. It's a pain in the butt. If this makes me biased, guilty as charged. If you're already using a relational database, it is an anti-pattern that violates most relational theory and all the normalization rules. When you build your own data storage, it may not be such a bad idea. There is such a thing as too many tables being joined, but relational databases are built to handle this. Many have additional features if performance becomes an issue: indexes, indexed views, etc. Even if the values you're looking up don't change very often, which is an advantage for Bitmask, the over-head of having to manage indexing is pretty easy on the database. Although database do a good job of aggregating data, they can get sluggish when you start introducing things like complex formulas or Scalar Functions into datasets. You can do the bitwise in your app, but if all you're doing is getting related data (looking up a user's role(s)), you're not taking advantage of what your data storage does best. My last argument against it would be simplicity for other developers. You have users, roles and assignments. It's a many-to-many relation set (because there's more than one relationship) that is so common, it should be easy to manage. It's just CRUD stuff. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/322271",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/233478/"
]
} |
322,345 | I'm working with a small team that creates a proprietary web application and UX isn't much of a priority since our own people will be the ones operating it, but we do try to make their jobs easier. Should I, as a developer, create a UI mockup before I start creating a new screen? Nothing too fancy, mostly the general layout in order to talk it over with colleagues and to have a reference model. I was comparing it to creating some UML diagrams before delving into writing code blindly. One of my coworkers says this is preposterous and isn't my job to do that. | I very often work in such projects, and the answer is a resounding YES, and as early as possible. People find it much easier to criticize or improve some draft than to come up with a solution from scratch. So I start drafting early for two reasons: Give the matter experts an impression on how the information could be presented. Show my current understanding of the problem and informational structures. In rare cases it was also nice to have some proof that I've actually delivered what we agreed on... | {
"source": [
"https://softwareengineering.stackexchange.com/questions/322345",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/224584/"
]
} |
322,363 | I wouldn't call myself a superstar dev, but a relatively experienced one. I try to keep code quality to a high level, and am always looking to make improvements to my coding style, try to make code efficient, readable and consistent as well as encouraging the team to follow a patterns & methodologies to ensure consistency. I also understand the necessity of balance between both quality and speed. In order to achieve this, I have introduced to my team the concept of peer review. Two thumbs up in github pull-request for a merge. Great - but not in my opinion without hiccoughs. I often see peer review comments from the same colleagues like - Would be good to add a space after <INSERT SOMETHING HERE> Unwanted extra line between methods Full stop should be used at the end of comments in docblocks. Now from my perspective - the reviewer is superficially looking at the code aesthetics - and is not really performing a code review. The cosmetic code review comes across to me as arrogant/elitist mentality. It lacks substance, but you can't really argue too much with it because the reviewer is technically correct . I would much rather see less of the above kinds of reviews, and more reviews as follows: You can reduce cyclomatic complexity by... Exit early and avoid if/else Abstract your DB query to a repository This logic doesn't really belong here Dont repeat yourself - abstract and reuse What would happen if X was passed as an argument to method Y ? Where is the unit test for this? I find that it is always the same kinds of people who give the cosmetic types of reviews, and the same types of people who in my opinion give the "Quality & Logic based" peer reviews. What (if any) is the correct approach to peer review. And am I correct in being frustrated with the same people basically skimming over code looking for spelling errors & aesthetic defects rather than actual code defects? If I am correct - how would I go about encouraging colleagues to actually look for faults in the code in balance with suggesting cosmetic touch-ups? If I am incorrect - please enlighten me. Are there any rules of thumb for what actually constitutes a good code review? Have I missed the point of what code reviews are? From my perspective - code review is about shared responsibility for the code. I wouldn't feel comfortable giving the thumbs-up to code without addressing/checking logic, readability and functionality. I also wouldn't bother blocking a merge for a solid piece of code if I noticed somebody had omitted a full stop in a doc-block. When I review code, I spend maybe between 15-45minutes per 500 Loc. I can't imagine these shallow reviews taking longer than 10 minutes ever if that's the depth of review they are performing. Further, how much value is the thumb-up from the shallow reviewer? Surely this means that all thumbs are not of equal weight and there needs to be maybe a 2-pass review process. One thumb for deep reviews and a 2nd thumb for the "polishing"? | Types of reviews There is no one true way to do peer reviews. There are many ways in which to judge whether code is of a sufficiently high quality. Clearly there is the question of whether it's buggy, or whether it has solutions that don't scale or which are brittle. Issues of conformance to local standards and guidelines, while perhaps not as critical as some of the others, is also part of what contributes to high quality code. 
Types of reviewers Just as we have different criteria for judging software, the people doing the judging are also different. We all have our own skills and predilections. Some may think that adhering to local standards is highly important, just as others might be more concerned with memory usage, or code coverage of your tests, and so on. You want all of these types of reviews, because as a whole they will help you write better code. A peer review is collaboration, not a game of tag I'm not sure you have the right to tell them how to do their job. Unless you know otherwise with certainty, assume that this person is trying to contribute the way he or she sees fit. However, if you see room for improvement, or suspect maybe they don't understand what is expected in a peer review, talk to them . The point of a peer review is to involve your peers . Involvement isn't throwing code over a wall and waiting for a response to be thrown back. Involvement is working together to make better code. Engage in a conversation with them. Advice Towards the end of your question you wrote: how would I go about encouraging colleagues to actually look for
faults in the code in balance with glaring aesthetic errors? Again, the answer is communication. Perhaps you can ask them "hey, I appreciate you catching these mistakes. It would help me tremendously if you could also focus on some deeper issues such as whether I'm structuring my code properly. I know it takes time, but it would really help." On a more pragmatic note, I personally divide code review comments into two camps and phrase them appropriately: things that must be fixed, and things that are more cosmetic. I would never prevent solid, working code from being checked in if there were too many blank lines at the end of a file. I will point it out, however, but I'll do so with something like "our guidelines say to have a single blank line at the end, and you have 20. It's not a show-stopper, but if you get a chance you might want to fix it". Here's something else to consider: it may be a pet peeve of yours that they do such a shallow review of your code. It may very well be that a pet peeve of theirs is that you (or some other teammate who gets a similar review) are sloppy with respect to your own organization's coding standards, and this is how they have chosen to communicate that with you. What to do after the review And lastly, a bit of advice after the review: When committing code after a review, you might want to consider taking care of all the cosmetic things in one commit, and the functional changes in another. Mixing the two can make it hard to differentiate significant changes from insignificant ones. Make all of the cosmetic changes and then commit with a message like "cosmetic; no functional changes". | {
"source": [
"https://softwareengineering.stackexchange.com/questions/322363",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/111862/"
]
} |
322,444 | Java has an automatic GC that once in a while Stops The World, but takes care of garbage on a heap. Now C/C++ applications don't have these STW freezes, their memory usage doesn't grow infinitely either. How is this behavior achieved? How are the dead objects taken care of? | The programmer is responsible for ensuring that objects they created via new are deleted via delete . If an object is created, but not destroyed before the last pointer or reference to it goes out of scope, it falls through the cracks and becomes a Memory Leak . Unfortunately for C, C++ and other languages which do not include a GC, this simply piles up over time. It can cause an application or the system to run out of memory and be unable to allocate new blocks of memory. At this point, the user must resort to ending the application so that the Operating System can reclaim that used memory. As far as mitigating this problem, there are several things that make a programmer's life much easier. These are primarily supported by the nature of scope . int main()
{
int* variableThatIsAPointer = new int;
int variableInt = 0;
delete variableThatIsAPointer;
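// variableInt is an automatic (stack) variable: it is reclaimed when this scope ends, no delete needed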
} Here, we created two variables. They exist in Block Scope , as defined by the {} curly braces. When execution moves out of this scope, these objects will be automatically deleted. In this case, variableThatIsAPointer , as its name implies, is a pointer to an object in memory. When it goes out of scope, the pointer is deleted, but the object it points to remains. Here, we delete this object before it goes out of scope to ensure that there is no memory leak. However we could have also passed this pointer elsewhere and expected it to be deleted later on. This nature of scope extends to classes: class Foo
{
public:
int bar; // Will be deleted when Foo is deleted
int* otherBar; // Still need to call delete
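// a minimal sketch (not in the original answer), assuming otherBar was set to
// an owned heap object (or nullptr): release it in the destructor, as RAII suggests
~Foo() { delete otherBar; }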
} Here, the same principle applies. We don't have to worry about bar when Foo is deleted. However for otherBar , only the pointer is deleted. If otherBar is the only valid pointer to whatever object it points to, we should probably delete it in Foo 's destructor. This is the driving concept behind RAII resource allocation (acquisition) is done during object creation (specifically initialization), by the constructor, while resource deallocation (release) is done during object destruction (specifically finalization), by the destructor. Thus the resource is guaranteed to be held between when initialization finishes and finalization starts (holding the resources is a class invariant), and to be held only when the object is alive. Thus if there are no object leaks, there are no resource leaks. RAII is also the typical driving force behind Smart Pointers . In the C++ Standard Library, these are std::shared_ptr , std::unique_ptr , and std::weak_ptr ; although I have seen and used other shared_ptr / weak_ptr implementations that follow the same concepts. For these, a reference counter tracks how many pointers there are to a given object, and automatically delete s the object once there are no more references to it. Beyond that, it all comes down to proper practices and discipline for a programmer to ensure that their code handles objects properly. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/322444",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/233699/"
]
} |
322,450 | The AngularJS documentation mentions that it considers one-way data binding to be 'bad' and two-way data bind to be 'good*'. However, on looking over the definitions of two-way data binding used, all it seems to be is an auto-sync put in place between the view and the model. Diagram Surely this has a performance hit and uses additional resources. Or am I missing something? Is two-way data binding something more than an MVC auto-sync, and are there any aspects of data binding theory which would cause two-way data binding to always be preferable, as seems to be implied in the documentation and diagrams? *it does this with sad and smiley faces instead of words, as can be seen above... | The programmer is responsible for ensuring that objects they created via new are deleted via delete . If an object is created, but not destroyed before the last pointer or reference to it goes out of scope, it falls through the cracks and becomes a Memory Leak . Unfortunately for C, C++ and other languages which do not include a GC, this simply piles up over time. It can cause an application or the system to run out of memory and be unable to allocate new blocks of memory. At this point, the user must resort to ending the application so that the Operating System can reclaim that used memory. As far as mitigating this problem, there are several things that make a programmer's life much easier. These are primarily supported by the nature of scope . int main()
{
int* variableThatIsAPointer = new int;
int variableInt = 0;
delete variableThatIsAPointer;
} Here, we created two variables. They exist in Block Scope , as defined by the {} curly braces. When execution moves out of this scope, these objects will be automatically deleted. In this case, variableThatIsAPointer , as its name implies, is a pointer to an object in memory. When it goes out of scope, the pointer is deleted, but the object it points to remains. Here, we delete this object before it goes out of scope to ensure that there is no memory leak. However we could have also passed this pointer elsewhere and expected it to be deleted later on. This nature of scope extends to classes: class Foo
{
public:
int bar; // Will be deleted when Foo is deleted
int* otherBar; // Still need to call delete
} Here, the same principle applies. We don't have to worry about bar when Foo is deleted. However for otherBar , only the pointer is deleted. If otherBar is the only valid pointer to whatever object it points to, we should probably delete it in Foo 's destructor. This is the driving concept behind RAII resource allocation (acquisition) is done during object creation (specifically initialization), by the constructor, while resource deallocation (release) is done during object destruction (specifically finalization), by the destructor. Thus the resource is guaranteed to be held between when initialization finishes and finalization starts (holding the resources is a class invariant), and to be held only when the object is alive. Thus if there are no object leaks, there are no resource leaks. RAII is also the typical driving force behind Smart Pointers . In the C++ Standard Library, these are std::shared_ptr , std::unique_ptr , and std::weak_ptr ; although I have seen and used other shared_ptr / weak_ptr implementations that follow the same concepts. For these, a reference counter tracks how many pointers there are to a given object, and automatically delete s the object once there are no more references to it. Beyond that, it all comes down to proper practices and discipline for a programmer to ensure that their code handles objects properly. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/322450",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/233703/"
]
} |
322,494 | If I remember my compilers course correctly, the typical compiler has the following simplified outline: A lexical analyzer scans (or calls some scanning function on) the source code character-by-character. The string of input characters is checked against the dictionary of lexemes for validity. If the lexeme is valid, it is then classified as the token that it corresponds to. The parser validates the syntax of the combination of tokens, token-by-token. Is it theoretically feasible to split the source code into quarters (or whatever denominator) and multithread the scanning and parsing process? Do compilers exist that utilize multithreading? | Large software projects are usually composed of many compilation units that can be compiled relatively independently, and so compilation is often parallelized at a very rough granularity by invoking the compiler several times in parallel. This happens at the level of OS processes and is coordinated by the build system rather than the compiler proper. I realize this isn't what you asked but that's the closest thing to parallelization in most compilers. Why is that? Well, much of the work that compilers do doesn't lend itself to parallelization easily: You can't just split the input into several chunks and lex them independently. For simplicity you'd want to split on lexeme boundaries (so that no thread starts in the middle of a lexeme), but determining lexeme boundaries potentially requires a lot of context. For example, when you jump in the middle of the file, you have to make sure you didn't jump into a string literal. But to check this, you have to look at basically every character that came before, which is almost as much work as simply lexing it to begin with. Besides, lexing is rarely the bottleneck in compilers for modern languages. Parsing is even harder to parallelize. All the problems of splitting the input text for lexing apply even more to splitting the tokens for parsing --- e.g., determining where a function starts is basically as hard as parsing the function contents to begin with. While there might also be ways around this, they will probably be disproportionally complex for the little benefit. Parsing, too, is not the largest bottleneck. After you've parsed, you usually need to perform name resolution, but this leads to a huge interwoven net of relationships. To resolve a method call here you might have to first resolve the imports in this module, but those require resolving the names in another compilation unit, etc. Same for type inference if your language has that. After this, it gets slightly easier. Type checking and optimization and code generation might, in principle, be parallelized at function granularity. I still know of few if any compilers doing this, perhaps because doing any task this large concurrently is quite challenging. You also have to consider that most large software projects contain so many compilation units that the "run a bunch of compilers in parallel" approach is entirely sufficient to keep all your cores occupied (and in some cases, even an entire server farm). Plus, in large compilation tasks the disk I/O can be as much of a bottleneck as the actual work of compiling. All that said, I do know of a compiler that parallelizes the work of code generation and optimization. The Rust compiler can split the back end work (LLVM, which actually includes code optimizations that are traditionally considered "middle-end") among several threads. This is called "code-gen units".
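To make this concrete, here is an illustrative sketch (not from the original answer; exact spellings and defaults depend on the toolchain version) of how that parallelism is exposed as an ordinary build setting:

# Cargo.toml: allow the LLVM back end to work on up to 16 chunks of the crate in parallel
[profile.release]
codegen-units = 16

# roughly equivalent flag when invoking the compiler directly:
#   rustc -C codegen-units=16 main.rs

More codegen units means more back-end parallelism but fewer cross-unit optimizations, which is exactly the trade-off described below.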
In contrast with the other parallelization possibilities discussed above, this is economical because: The language has rather large compilation units (compared to, say, C or Java), so there might be fewer compilation units in flight than you have cores. The part that is being parallelized usually takes the vast majority of compile time. The backend work is, for the most part, embarrassingly parallel — just optimize and translate to machine code each function independently. There are inter-procedural optimizations of course, and codegen units do hinder those and thus impact performance, but there aren't any semantic problems. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/322494",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/218080/"
]
} |
322,682 | I was assigned to a new project recently. Well, an old project actually, written in classic ASP. Now a new version of the application is being written in the latest ASP.NET, but it's not expected to be RTM in a while (estimated release date is January 2017) so I have to perform some maintenance on the old application until it can be discarded. Also, I've got a feeling that not all customers will be switching over to the new program immediately, so this version will probably be around for a while. And the problem is, it's full of errors. Parts of it date back to the previous century, when there were no web standards, and I don't really mind about Quirks mode, and width and height attributes instead of CSS, tables used for layout, framesets etc, but oh, all those errors! width="20px" all over the place, onchange="javascript:...", and in those places where they do use css, style="width:20" and style="width=20px" are commonplace. Not to mention plenty of lines where there are contradictory width and style attributes. Etc etc. As a result, the web application only runs under IE, and only in compatibility mode. It's clear that the developers never looked at code validity, only at whether what came out looked like what they had in mind. And I don't know how to handle that. I find it impossible to close my eyes to those errors while looking in the code for other errors. I can of course do a global find and replace to get most of the issues out of the way, but that would mean my first commit would consist of thousands of changed .asp files. Can I do that? | It sounds like you are confusing several things into the term "errors": legacy html attributes, coding style, coding errors which don't cause bugs, unreported bugs, mistakes which are now features, reported bugs, and reported bugs you have been assigned to fix. On a legacy app which is going to be replaced, only one of these types of error should concern you. The last one. I would go as far as to say you shouldn't even refactor other stuff on a feature you are bug fixing, mainly due to: mistakes which are now features. You can see from the code how it was maybe meant to work but never did, but all the users have been getting along with the indeterminately widthed element for the past 10 years and they won't thank you for fixing it. On the plus side, if you put your cynical JFDI head on, you will be able to burn through the bugs super quick and the new version team won't be able to keep up with the old version's features. This will give you a wry gloating smile of ironic glee as you recommend a Chrome plugin IE6 emulator to clients so they can keep using the marquee 'feature' they love | {
"source": [
"https://softwareengineering.stackexchange.com/questions/322682",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/44898/"
]
} |
322,714 | Most architectures I've seen rely on a call stack to save/restore context before function calls. It's such a common paradigm that push and pop operations are built into most processors. Are there systems that work without a stack? If so, how do they work, and what are they used for? | A (somewhat) popular alternative to a call stack is continuations. The Parrot VM is continuation-based, for example. It is completely stackless: data is kept in registers (like Dalvik or the LuaVM, Parrot is register-based), and control-flow is represented with continuations (unlike Dalvik or the LuaVM, which have a call stack). Another popular data structure, used typically by Smalltalk and Lisp VMs, is the spaghetti stack, which is kind of like a network of stacks. As @rwong pointed out, continuation-passing style is an alternative to a call stack. Programs written in (or transformed to) continuation-passing style never return, so there is no need for a stack. Answering your question from a different perspective: it is possible to have a call stack without a dedicated stack memory region, by allocating the stack frames on the heap. Some Lisp and Scheme implementations do this.
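A tiny sketch of continuation-passing style may help (illustrative only, written in JavaScript, which of course still runs on a call stack; the point is just the shape of the code): every function takes "what to do next" as an explicit argument and never hands a result back to its caller.

function addCps(a, b, next)    { next(a + b); }
function squareCps(x, next)    { next(x * x); }

// compute (2 + 3)^2: control only ever flows forward through continuations
addCps(2, 3, function (sum) {
    squareCps(sum, function (result) {
        console.log(result);   // 25
    });
});

In a language or VM that combines this style with proper tail calls, there is nothing to return to, so no return-address stack needs to grow. | {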
"source": [
"https://softwareengineering.stackexchange.com/questions/322714",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/66745/"
]
} |
322,715 | Are events only used for GUI programming? How do you handle in normal backend programming when something happens to this other thing? | Nope. They're really handy for implementing Observers and making sure that classes are closed to modification. Let's say we have a method that registers new users. public void Register(user) {
db.Save(user);
} Then someone decides that an email should be sent. We could do this: public void Register(user) {
db.Save(user);
emailClient.Send(new RegistrationEmail(user));
} But we've just modified a class that's supposed to be closed to modification. Probably fine for this simple pseudo-code, but likely the way to madness in production code. How long until this method is 30 lines of code that's barely related to the original purpose of creating a new user?? It's much nicer to let the class perform its core functionality and to raise an event telling whoever's listening that a user was registered, and they can take whatever action they need to take (such as send an email). public void Register(user) {
db.Save(user);
RaiseUserRegisteredEvent(user);
} This keeps our code clean and flexible. One of the often overlooked pieces of OOP is that classes send messages to each other. Events are these messages.
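A rough sketch of the listening side, in the same pseudo-C# spirit as the code above (the event, type, and variable names here are assumptions for illustration, not part of the original answer):

public class RegistrationService {
    public event Action<User> UserRegistered;

    public void Register(User user) {
        db.Save(user);
        UserRegistered?.Invoke(user);   // the RaiseUserRegisteredEvent(user) from above
    }
}

// elsewhere, the email concern subscribes without Register() ever knowing about it:
registrationService.UserRegistered += user =>
    emailClient.Send(new RegistrationEmail(user));

New reactions (auditing, analytics, a welcome SMS) become new subscribers instead of new lines inside Register(). | {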
"source": [
"https://softwareengineering.stackexchange.com/questions/322715",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/234089/"
]
} |
322,716 | In my unit tests, I often throw arbitrary values at my code to see what it does. For example, if I know that foo(1, 2, 3) is supposed to return 17, I might write this: assertEqual(foo(1, 2, 3), 17) These numbers are purely arbitrary and have no broader meaning (they are not, for example, boundary conditions, though I do test on those as well). I would struggle to come up with good names for these numbers, and writing something like const int TWO = 2; is obviously unhelpful. Is it OK to write the tests like this, or should I factor the numbers out into constants? In Are all magic numbers created the same? , we learned that magic numbers are OK if the meaning is obvious from context, but in this case the numbers actually have no meaning at all. | When do you really have numbers which have no meaning at all? Usually, when the numbers have any meaning, you should assign them to local variables of the test method to make the code more readable and self-explaining. The names of the variables should at least reflect what the variable means, not necessarily its value. Example: const int startBalance = 10000;
const float interestRate = 0.05f;
const int years = 5;
const int expectedEndBalance = 12840;
assertEqual(calculateCompoundInterest(startBalance, interestRate, years),
expectedEndBalance); Note that the first variable is not named HUNDRED_DOLLARS_ZERO_CENT, but startBalance, to denote the meaning of the variable rather than the fact that its value is in any way special. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/322716",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/153279/"
]
} |
322,909 | Consider a function like this: function savePeople(dataStore, people) {
people.forEach(person => dataStore.savePerson(person));
} It might be used like this: myDataStore = new Store('some connection string', 'password');
myPeople = ['Joe', 'Maggie', 'John'];
savePeople(myDataStore, myPeople); Let us assume that Store has its own unit tests, or is vendor-provided. In any case, we trust Store. And let us further assume that error handling -- e.g., of database disconnection errors -- is not the responsibility of savePeople. Indeed, let us assume that the store itself is a magical database that cannot possibly error in any way. Given these assumptions, the question is: Should savePeople() be unit tested, or would such tests amount to testing the built-in forEach language construct? We could, of course, pass in a mock dataStore and assert that dataStore.savePerson() is called once for each person. You could certainly make the argument that such a test provides security against implementation changes: e.g., if we decided to replace forEach with a traditional for loop, or some other method of iteration. So the test is not entirely trivial. And yet it seems awfully close... Here's another example that may be more fruitful. Consider a function that does nothing but coordinate other objects or functions. For example: function bakeCookies(dough, pan, oven) {
panWithRawCookies = pan.add(dough);
oven.addPan(panWithRawCookies);
oven.bakeCookies();
oven.removePan();
} How should a function like this be unit tested, assuming you think it should? It's hard for me to imagine any kind of unit test that doesn't simply mock dough , pan , and oven , and then assert that methods are called on them. But such a test is doing nothing more than duplicating the exact implementation of the function. Does this inability to test the function in a meaningful black box way indicate a design flaw with the function itself? If so, how could it be improved? To give even more clarity to the question motivating the bakeCookies example, I'll add a more realistic scenario, which is one I've encountered when attempting to add tests to and refactor legacy code. When a user creates a new account, a number of things need to happen: 1) a new user record needs to be created in the database 2) a welcome email needs to be sent 3) the user's IP address needs to be recorded for fraud purposes. So we want to create a method that ties together all the "new user" steps: function createNewUser(validatedUserData, emailService, dataStore) {
userId = dataStore.insertUserRecord(validatedUserData);
emailService.sendWelcomeEmail(validatedUserData);
dataStore.recordIpAddress(userId, validatedUserData.ip);
} Note that if any of these methods throws an error, we want the error to bubble up to the calling code, so that it can handle the error as it sees fit. If it's being called by the API code, it may translate the error into an appropriate HTTP response code. If it's being called by a web interface, it may translate the error into an appropriate message to be displayed to the user, and so on. The point is that this function doesn't know how to handle the errors that may be thrown. The essence of my confusion is that to unit test such a function it seems necessary to repeat the exact implementation in the test itself (by specifying that methods are called on mocks in a certain order) and that seems wrong. | Should savePeople() be unit tested? Yes. You aren't testing that dataStore.savePerson works, or that the db connection works, or even that the forEach works. You are testing that savePeople fulfills the promise it makes through its contract. Imagine this scenario: someone does a big refactor of the code base, and accidentally removes the forEach part of the implementation so that it always only saves the first item. Wouldn't you want a unit test to catch that?
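Such a test can stay at the level of that contract instead of echoing the implementation. A minimal sketch (the fake store shape and the assertEqual helper are assumptions for illustration, matching the pseudo-code style of the question):

savedPeople = [];
fakeDataStore = { savePerson: person => savedPeople.push(person) };

savePeople(fakeDataStore, ['Joe', 'Maggie', 'John']);

assertEqual(savedPeople.length, 3);   // fails if a refactor accidentally drops the loop
assertEqual(savedPeople[0], 'Joe');

It asserts what the function promises (every person ends up in the store), not which iteration construct it happens to use. | {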
"source": [
"https://softwareengineering.stackexchange.com/questions/322909",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/21669/"
]
} |
322,924 | The idea is inspired by the fact that operators such as +, -, %, etc. can be seen as functions with either one or two arguments passed, and no side-effects. Assuming I, or someone else, writes a language which stops more than two arguments from being passed, and also only works via return value: a) would such a language lead to easier to understand code? b) would the flow of the code be clearer? (forced into more steps, with potentially fewer interactions 'hidden') c) would the restrictions make the language inordinately bulky for more complex programs? d) (bonus) any other comments on pros/cons? Note: Two decisions would still have to be made - the first is whether to allow user-input outside main() or its equivalent, and also what the rule will be regarding what happens when passing arrays/structures. For an example, if someone wants a single function to add multiple values, he could get around the limitation by bundling it into an array. This could be stopped by not allowing an array or struct from interacting with itself, which would still allow you to, for example, divide each number by a different amount, depending on its position. | Robert C. Martin in his book "Clean Code" recommends heavily the use of functions with 0, 1 or 2 parameters at maximum, so at least there is one experienced book author who thinks code becomes cleaner by using this style (however, he is surely not the ultimate authority here, and his opinions are debatable). Where Bob Martin is IMHO correct is: functions with 3 or more parameters are often indicators for a code smell. In lots of cases, the parameters might be grouped together to form a combined datatype; in other cases, it can be an indicator for the function simply doing too much. However, I do not think it would be a good idea to invent a new language for this: if you really want to enforce such a rule throughout your code, you just need a code analysis tool for an existing language, no need to invent a completely new language for this (for example, for C# something like 'fxcop' could probably be utilized). Sometimes, combining parameters to a new type just does not seem worth the hassle, or it would become a pure artificial combination. See, for example, this File.Open method from the .Net framework. It takes four parameters, and I am pretty sure the designers of that API did this intentionally, because they thought that would be the most practical way to provide the different parameters to the function. There are sometimes real world scenarios where more than 2 parameters make things simpler for technical reasons (for example, when you need a 1:1 mapping to an existing API where you are bound to the usage of simple datatypes, and can't combine different parameters into one custom object)
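As a small hypothetical illustration of that "combined datatype" point (C# used here only as an example; the names are invented, not from the original answer):

// before: four loosely related parameters
public void ScheduleMeeting(string title, DateTime start, DateTime end, string room) { /* ... */ }

// after: one parameter object that can be validated, documented and extended in one place
public class MeetingRequest
{
    public string Title;
    public DateTime Start;
    public DateTime End;
    public string Room;
}

public void ScheduleMeeting(MeetingRequest request) { /* ... */ }

Whether such a grouping pulls its weight is exactly the judgment call described above; for something like File.Open, the designers evidently judged that it did not. | {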
"source": [
"https://softwareengineering.stackexchange.com/questions/322924",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
322,925 | So my problem is, I have users entering their data (names, address) in a mixed form - some with the first letter in caps and some without, some inserting more spaces than necessary, and so forth. Also, since there are some with titles in their names (this happens in some parts of Europe), the title words have to be in lower case.
Now the obvious thing to do is remove spaces and capitalize all the data in the model and then insert/update to the db. My question: Is this the right thing to do? Or should I store the data as it was entered and only make it look right with view side functionality?
What do large systems like ERP and CRM systems do? | Robert C. Martin in his book "Clean Code" recommends heavily the use of functions with 0, 1 or 2 parameters at maximum, so at least there is one experienced book author who thinks code becomes cleaner by using this style (however, he is surely not the ultimate authority here, and his opinions are debatable). Where Bob Martin is IMHO correct is: functions with 3 or more parameters are often indicators for a code smell. In lots of cases, the parameters might be grouped together to form a combined datatype; in other cases, it can be an indicator for the function simply doing too much. However, I do not think it would be a good idea to invent a new language for this: if you really want to enforce such a rule throughout your code, you just need a code analysis tool for an existing language, no need to invent a completely new language for this (for example, for C# something like 'fxcop' could probably be utilized). Sometimes, combining parameters to a new type just does not seem worth the hassle, or it would become a pure artificial combination. See, for example, this File.Open method from the .Net framework. It takes four parameters, and I am pretty sure the designers of that API did this intentionally, because they thought that would be the most practical way to provide the different parameters to the function. There are sometimes real world scenarios where more than 2 parameters make things simpler for technical reasons (for example, when you need a 1:1 mapping to an existing API where you are bound to the usage of simple datatypes, and can't combine different parameters into one custom object) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/322925",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/234408/"
]
} |