Dataset columns: source_id (int64, 1–4.64M), question (string, 0–28.4k chars), response (string, 0–28.8k chars), metadata (dict).
322,951
I am developing a simple RESTful service for tournaments and schedules. When a tournament is created through a POST request containing a JSON body, the tournament is inserted in a BiMap, declared as follows in a DAO implementation: private BiMap<String, Tournament> tournaments = Maps.synchronizedBiMap(HashBiMap.create()); When a tournament is created, its associated string id is returned so the user has a future reference to that tournament. He/she can get information back about the new tournament by performing the following request: GET http://localhost:8080/eventscheduler/c15268ce-474a-49bd-a623-b0b865386f39 But what if no tournament with such an id is found? So far, I am returning a 204 response. Well, Jersey is doing it for me when I return null from one of its methods. This is the method that corresponds to the route above: @Path("/{id}") @GET @Produces(MediaType.APPLICATION_JSON) public Tournament getTournament(@PathParam("id") String id) { Optional<Tournament> optTournament = tournamentDao.getTournament(id); if (optTournament.isPresent()) return optTournament.get(); return null; } My question is: is it OK to return a 204 No Content response, or should it be a 404 response instead, since the resource was not found? If I should change it to a 404, the obvious follow-up question: I should change the method signature, right? Since a tournament (of type Tournament) might now not be returned, the method should look different. Should I use the Response type as the return type instead?
HTTP 204 means that something was found, but it's empty. For instance, imagine that you're serving log files through HTTP, with requests such as http://example.com/logs/[date-goes-here]. On May 18th, 2015: http://example.com/logs/2015-05-19 would return HTTP 404, which means that there are no logs, because, well, it's difficult to log the future. http://example.com/logs/2015-05-18, however, would return either HTTP 200 with the log entries in the content of the response, or HTTP 204 if the log file was created but no logs have been recorded yet for that date. If you provide null to the framework as a response to a request, it assumes that you found an entry and that this entry is empty, thus HTTP 204. Instead, you should throw new NotFoundException(); to indicate to the framework that the entry doesn't exist, so that it will generate an HTTP 404. If I should change it to a 404, obvious question: I should change the method signature, right? No, you don't. That's the nice thing about throw new NotFoundException(); it will work no matter what the actual return type of your method is.
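A minimal sketch of the suggested change, keeping the Tournament return type and letting Jersey map the exception to a 404. The resource class name, base path, and DAO wiring below are assumptions for illustration; only the getTournament signature and the Optional-based DAO come from the question.

```java
import java.util.Optional;
import javax.ws.rs.GET;
import javax.ws.rs.NotFoundException;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/eventscheduler")               // assumed base path, matching the example URL
public class TournamentResource {

    private final TournamentDao tournamentDao = new TournamentDaoImpl(); // hypothetical wiring

    @Path("/{id}")
    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public Tournament getTournament(@PathParam("id") String id) {
        Optional<Tournament> optTournament = tournamentDao.getTournament(id);
        // Throwing instead of returning null makes Jersey answer with 404 Not Found.
        return optTournament.orElseThrow(NotFoundException::new);
    }
}
```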
{ "source": [ "https://softwareengineering.stackexchange.com/questions/322951", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/223143/" ] }
323,054
I use @SuppressWarnings("unchecked") and @SuppressWarnings("null") mostly above methods to let the code compile without any warnings, but I have my doubts. I found this Stack Overflow question. Jon Skeet wrote an answer to it which I find intriguing. According to him, Sometimes Java generics just doesn't let you do what you want to, and you need to effectively tell the compiler that what you're doing really will be legal at execution time. But what if there is a chance that an exception will be thrown? Isn't suppressing warnings a bad idea then? Shouldn't I be aware of the places where problems could surface? Also, what if someone else modifies my code later and adds some questionable functionality without removing SuppressWarnings? How can that be avoided, and/or is there any other alternative to this? Should I be using @SuppressWarnings("unchecked") and @SuppressWarnings("null") ? Update #1 As far as unchecked type casts go, according to this answer (pointed out by @gnat in the comments below), suppressing these warnings is necessary. Many indispensable Java libraries have never been updated to eliminate the need for unsafe typecasts. Suppressing those warnings is necessary so that other more important warnings will be noticed and corrected. Suppressing other warnings is still a bit of a grey area. Update #2 As per the Oracle docs (also mentioned by some answers below): As a matter of style, programmers should always use this annotation on the most deeply nested element where it is effective. If you want to suppress a warning in a particular method, you should annotate that method rather than its class.
To me, the entire point of suppressing warnings is to maintain a "clean bill of health" for your project. If you know that your entire code base compiles cleanly, it's immediately obvious when someone does something wrong that causes the first warning to appear in the issues list. You can then fix the error or suppress it if you can prove that it's spurious. But if you have 21 warnings in there to begin with, it's much more likely that you'll overlook the 22nd one when someone causes it, and you don't check that it's harmless. That means that problems can creep into your code base and you'll never notice. Warnings are useful items of information. Make sure you heed the ones that speak the truth, and filter away the ones that don't. Don't let people commingle the two kinds so that you lose your early warning system. Edit I should probably clarify that suppressing a warning that does have merit is a silly thing to do. A clean bill of health that you obtained by cheating is obviously worth nothing. Given the choice, you should always fix the problem the compiler noticed rather than just close your eyes to it. However, there are areas in which the compiler cannot be sure whether something will be a problem or not (Java's generics are one such area), and there the better choice is to review each such instance and then suppress the warning in this specific place rather than to switch off this class of warning altogether and potentially miss a genuine one.
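Concretely, "suppress in this specific place" usually means annotating the smallest element where the warning occurs, as the Oracle docs quoted in the question recommend, for example a single local variable. The legacy API and names below are made up for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class LegacyAdapter {

    public static List<String> namesFromLegacyApi(Object legacyResult) {
        // Reviewed: the legacy API is documented to always return a List of String,
        // so the unchecked cast is known to be safe; suppress only on this variable.
        @SuppressWarnings("unchecked")
        List<String> names = (List<String>) legacyResult;
        return new ArrayList<>(names);
    }
}
```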
{ "source": [ "https://softwareengineering.stackexchange.com/questions/323054", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/226181/" ] }
323,130
Suppose I have classes Boss and Worker; Boss has a Worker and Worker holds a reference to Boss: Boss.h #include "Worker.h" class Boss{ public: Worker worker; }; Worker.h class Boss; class Worker{ Boss* boss; }; In C++, Worker doesn't need Boss to compile, but I am not comfortable seeing the word "Boss" appear in Worker and the word "Worker" appear in Boss at the same time. Also, if this design moves to another language that doesn't have a system like .h and .cpp files (e.g. Java), the two classes become dependent on each other. So, my problem is: if the C++ code has such a pattern, even though Worker doesn't require Boss to compile, is it still a flawed design that needs to be fixed? If so, how can I fix it?
There is nothing which is fundamentally flawed about this idea. What you have is two relationships. Boss owns one or more Workers. And Worker has a non-owning reference to a Boss. The use of a raw pointer suggests that Worker does not own the pointer it stores; it's merely using it. This means that it does not control that object's lifetime. There is nothing wrong with such a relationship per se. It all depends on how it gets used. For example, if the Boss reference that Worker stores is the actual Boss object instance that owns the Worker, then everything is fine. Why? Because presumably, a Worker instance cannot exist without a Boss who owns it. And because the Boss owns it, this guarantees that the Worker will not outlive the Boss it references. Well... kinda. And this is where you start getting into potential problems. First, by the rules of C++ construction, Boss::worker is constructed before the Boss instance itself. Now, Boss's constructor can pass a this pointer to Worker's constructor. But Worker cannot use it, since the Boss object has not yet been constructed. Worker can store it, but it can't access anything in it. Similarly, by the rules of C++ destruction, Boss::worker will be destroyed after the owning Boss instance. So Worker's destructor cannot safely use the Boss pointer, since it points to an object whose lifetime has ended. These limitations can sometimes lead to having to use two-stage construction. That is, calling Worker back after Boss has been fully constructed, so that it can communicate with it during construction. But even this may be OK, particularly if Worker doesn't need to talk to Boss in its constructor. Copying also becomes a problem. Or more to the point, the assignment operators become problematic. If Worker::boss is intended to point back to the specific Boss instance that owns it, then you must never copy it. Indeed, if that's the case, you should declare Worker::boss as a constant pointer, so that the pointer gets set in the constructor and nowhere else. The point of this example is that the idea you've defined is not, by itself, unreasonable. It has a well-defined meaning and you can use it for many things. You just have to think about what you're doing.
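A short sketch of the pattern this answer describes, with the back-pointer declared as a constant pointer and set only during construction (this is an illustrative reconstruction, not code from the original post):

```cpp
class Boss;  // forward declaration, as in the question's Worker.h

class Worker {
public:
    // Store the owner's pointer, but do not dereference it here:
    // the owning Boss is not fully constructed yet at this point.
    explicit Worker(Boss* owner) : boss(owner) {}

private:
    Boss* const boss;  // non-owning back-reference, set once and never reassigned
};

class Boss {
public:
    Boss() : worker(this) {}  // passing 'this' is fine; using it inside Worker's ctor is not

private:
    Worker worker;  // Boss owns the Worker, so the Worker cannot outlive its Boss
};
```

With the pointer declared const, accidental reassignment (and the problematic copy assignment the answer mentions) is ruled out at compile time.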
{ "source": [ "https://softwareengineering.stackexchange.com/questions/323130", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/196142/" ] }
323,146
This is a general question on a subject I've found interesting as a gamer: CPU/GPU bottlenecks and programming. If I'm not mistaken, I've come to understand that both CPU and GPU calculate stuff, but that one is better in some calculations than the other due to the difference in architecture. For example, cracking hashes or cryptocurrency mining seems way more efficient on GPUs than on CPUs. So I've wondered: is having a GPU at 100% load while the CPU is at 50% (for example) inevitable? Or, more precisely: Can some calculations that are normally done by the GPU be done by the CPU if the first one is at 100% load, so that both reach a 100% load? I've searched a bit about the subject, but have come back quite empty-handed. I think and hope this has its place in this subsection and am open to any documentation or lecture you might give me!
Theoretically yes, but practically it's rarely worth it. Both CPUs and GPUs are Turing-complete, so any algorithm which can be calculated by one can also be calculated by the other. The question is how fast and how convenient. While the GPU excels at doing the same simple calculations on many data points of a large dataset, the CPU is better at more complex algorithms with lots of branching. With most problems, the performance difference between CPU and GPU implementations is huge. That means using one to take work from the other when it is stalling would not really lead to a notable increase in performance. However, the price you have to pay for this is that you need to program everything twice, once for the CPU and once for the GPU. That's more than twice as much work, because you will also have to implement the switching and synchronization logic. That logic is extremely difficult to test, because its behavior depends on the current load. Expect very obscure and impossible-to-reproduce bugs from this stunt.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/323146", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/234688/" ] }
323,336
I'm looking to implement dependency injection in a relatively large application but have no experience with it. I studied the concept and a few available IoC containers and dependency injectors, like Unity and Ninject. However, there is one thing which is eluding me. How should I organize instance creation in my application? What I'm thinking is that I could create a few specific factories which contain the logic for creating objects of a few specific class types. Basically a static class with a method invoking the Ninject Get() method on a static kernel instance held in this class. Would that be a correct approach to implementing dependency injection in my application, or should I implement it according to some other principle?
Don't think yet about the tool that you are going to use. You can do DI without an IoC container. First point: Mark Seemann has a very good book about DI in .NET. Second: composition root. Make sure that the whole setup is done at the entry point of the project. The rest of your code should know about injections, not about any tool that is being used. Third: constructor injection is the most likely way to go (there are cases in which you wouldn't want it, but not that many). Fourth: look into using lambda factories and other similar features to avoid creating unneeded interfaces/classes for the sole purpose of injection.
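A minimal sketch of constructor injection with a single composition root, written in plain C# with no container. All type names are invented for illustration; a container such as Ninject would simply replace the manual wiring in Main.

```csharp
public interface IReportSender
{
    void Send(string report);
}

public class EmailReportSender : IReportSender
{
    public void Send(string report)
    {
        // send via SMTP, write to a queue, etc.
    }
}

public class ReportService
{
    private readonly IReportSender sender;

    // The dependency is declared in the constructor; the class never creates it itself.
    public ReportService(IReportSender sender)
    {
        this.sender = sender;
    }

    public void Run() => sender.Send("daily report");
}

public static class Program
{
    // Composition root: the only place that knows about concrete types
    // (or where an IoC container would be configured).
    public static void Main()
    {
        var service = new ReportService(new EmailReportSender());
        service.Run();
    }
}
```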
{ "source": [ "https://softwareengineering.stackexchange.com/questions/323336", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/142717/" ] }
323,477
OK so a lot of code review is fairly routine. But occasionally there are changes that broadly impact existing complex, fragile code. In this situation, the amount of time it would take to verify the safety of the changes, absence of regression, etc. is excessive. Perhaps even exceeding the time it took to do the development itself. What to do in this situation? Merge and hope nothing slips through? (Not advocating that!) Do the best one can and try only to spot any obvious flaws (perhaps this is the most code review should aim for anyway?) Merge and test extra-thoroughly as a better alternative than code review at all? This is not specifically a question whether testing should be done as part of a code review. This is a question asking what the best options are in the situation as described, especially with a pressing deadline, no comprehensive suite of unit tests available or unit tests not viable for the fragmented code that's changed. EDIT: I get the impression that a few of the answers/comments so far have picked up on my phrase "broadly impact", and possibly taken that to mean that the change involved a large number of lines of code. I can understand this being the interpretation, but that wasn't really my intention. By "broadly impact", I mean for example the potential for regression is high because of the interconnectedness of the codebase, or the scope of knock-on effects, not necessarily that the change itself is a large one. For example, a developer might find a way to fix a bug with a single line by calling an existing high level routine that cascades calls to many lower level routines. Testing and verifying that the bug fix worked is easy. Manually validating (via code review) the impact of all the knock-on effects is much more difficult.
The premise of the question is, frankly, astounding. We suppose that there is a large change to fragile, complex code, and that there is simply not enough time to review it properly . This is the very last code you should be spending less time on reviewing! This question indicates that you have structural problems not only in your code itself, but in your methodology of managing change. So how to deal with this situation? Start by not getting into it in the first place: Identify sources of complexity, and apply careful, heavily reviewed, correct refactorings to increase the level of abstraction. The code should be understandable by a fresh-out-of-college new employee who knows something about your business domain. Identify sources of fragility; this could be by review of the code itself, by examining the history of bug fixes to the code, and so on. Determine which subsystems are fragile and make them more robust . Add debugging logic. Add assertions. Create a slow but obviously correct implementation of the same algorithm and in your debug build, run both and verify that they agree. In your debug build, cause rare situations to occur more frequently. (For example, make a memory allocator that always moves a block on reallocation, or always allocates a block at the end of a page, or whatever.) Make the code robust in the face of changes to its context. Now you don't have fragile code anymore; now you have code that finds the bugs, rather than causes the bugs. Write a suite of automated tests. Obviously. Don't make broad changes. Make a series of small, targeted changes, each of which can be seen to be correct. But fundamentally, your scenario is "we have dug ourselves into a hole of technical debt and each complex, unreviewed change digs us deeper; what should we do?". What do you do when you find yourself in that hole? Stop digging . If you have so much debt that you are unable to do basic tasks like reviewing each other's code then you need to stop making more debt and spend time paying it off.
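One of the suggestions above, running a slow but obviously correct implementation alongside the optimized one in debug builds, can look roughly like this sketch (the method names are invented; the cross-check only runs when assertions are enabled with -ea):

```java
import java.util.Arrays;

public final class PricingEngine {

    public static long quote(int[] lineItems) {
        long fast = quoteOptimized(lineItems);
        // Debug/test builds (run with -ea) cross-check against the reference version;
        // production builds skip the assert entirely.
        assert fast == quoteReference(lineItems.clone()) : "optimized and reference quotes disagree";
        return fast;
    }

    private static long quoteOptimized(int[] lineItems) {
        return Arrays.stream(lineItems).asLongStream().sum(); // stand-in for the clever, fragile version
    }

    private static long quoteReference(int[] lineItems) {
        long total = 0;
        for (int item : lineItems) {
            total += item; // slow but obviously correct
        }
        return total;
    }
}
```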
{ "source": [ "https://softwareengineering.stackexchange.com/questions/323477", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/191975/" ] }
323,514
Say you've just started working in a very small team on a {currently relatively small, though hopefully bigger later} project. Note that this is an actual project intended to be used by other developers in the real world, not some academic project that is meant to be scrapped at the end of a semester. However, the code is not yet released to others, so no decision is yet set in stone. The Methodologies One of you likes to begin coding and make the pieces fit together as you go before you necessarily have a clear idea of how exactly all the components will interact (bottom-up design). Another one of you likes to do the entire design first and nail down the details of all the components and communication before coding a solution. Assume that you are working on a new system rather than mimicking existing ones, and thus it is not always obvious what the right end-design should look like. So, on your team, different team members sometimes have different ideas of what requirements are even necessary for the final product, let alone how to go about designing it. When the bottom-up developer writes some code, the top-down developer rejects it because of potential future problems envisioned in the design despite the fact that the code may solve the problem at hand, believing that it is more important to get the design correct before attempting to code the solution to the problem. When the top-down developer tries to work out the full design and the envisioned problems before starting to write the code, the bottom-up developer rejects it because the bottom-up developer doesn't think some of the problems will actually arise in practice, and thinks that the design may need to be changed in the future when the requirements and constraints become clearer. The Problem The problem that this has resulted in is that bottom-up developer ends up wasting time because the top-down developer frequently decides the solution that the bottom-up developer has written should be scrapped due to a design flaw, resulting in the need to re-write the code. The top-down developer ends up wasting time because instead of parallelizing the work, the top-down developer now frequently sits down to work out the correct design with the bottom-up developer, serializing the two to the point where it may even be faster for 1 person to do the work than 2. Both of the developers want to keep working together, but it doesn't seem that the combination is actually helping either of them in practice. The Goals The common goals are obviously to maximize coding effectiveness (i.e. minimize time wastage) and to write useful software. The Question Put simply, how do you solve this problem and cope with this situation? The only efficient solution I can think of that doesn't waste time is to let each developer follow his/her own style for the design. But this is harder than it sounds when you code-review and actually need to approve of each others' changes, and when you're trying to design a coherent framework for others to use. Is there a better way?
Obviously they are both wrong. The bottom-up guy is hacking away at code and will never produce something that does what it is supposed to do - it'll be a continual churn as the unknown requirements are determined. The top-down guy can spend just as long on architectural vision and get nothing productive done either. However, a middle ground is ideal - if you know the goals you're working towards (which you get from broad design work) and get on with coding it (without any detailed planning), then you reap the rewards of a system that is both organized and efficiently developed. It's called Agile, by the way - not the BS version of agile that some people practice, where procedures are more important than working software, but true agile that gets on with working towards a commonly described and understood end goal. To fix the problem here, try an Agile approach (Kanban is probably the best), which will both force the top-down guy to do some work and force the bottom-up guy to plan what he's trying to achieve.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/323514", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/11833/" ] }
323,541
I have a Windows Form for creating configs. It has around 50 fields of data, representing a group of entities, that I need to capture when the user presses the save button. Currently, on the press of the save button I build the group of entities by creating each entity, extracting the information from the controls, and then saving them to the database. I am now working on functionality to import partly built configs from XML. With the way the save button works, I would be replicating almost all of the save code to achieve this. The only alternative I can think of is to pass the values of the controls as parameters to a new class that contains methods that build and return an entity. However, passing ~15 parameters to each entity-creator method does not seem like a clean solution. Is there any clean way of disconnecting the save logic from the WinForm that will enable me to reuse the save code?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/323541", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/228146/" ] }
323,994
I have a method that updates employee data in the database. The Employee class is immutable, so "updating" the object actually means instantiating a new object. I want the Update method to return a new instance of Employee with the updated data, but since the responsibility of the method is now both to update the employee data and to fetch a new Employee object from the database, does it violate the Single Responsibility Principle? The DB record itself is updated. Then, a new object is instantiated to represent this record.
As always, this is a question of degree. The SRP should stop you from writing a method that retrieves a record from an external database, performs a fast Fourier transform on it and updates a global statistics registry with the result. I think almost everyone would agree these things should be done by different methods. Postulating a single responsibility for each method is simply the most economical and memorable way to make that point. At the other end of the spectrum are methods that yield information about the state of an object. The typical isActive has giving this information as its single responsibility. Probably everyone agrees that this is okay. Now, some extend the principle so far that they consider returning a success flag a different responsibility from performing the action whose success is being reported. Under an extremely strict interpretation, this is true, but since the alternative would be having to call a second method to obtain the success status, which complicates the caller, many programmers are perfectly fine with returning success codes from a method with side effects. Returning the new object is one step further on the road to multiple responsibilities. Requiring the caller to make a second call for an entire object is slightly more reasonable than requiring a second call just to see whether the first one succeeded or not. Still, many programmers would consider returning the result of an update perfectly fine. While this can be construed as two slightly different responsibilities, it is certainly not one of the egregious abuses that inspired the principle to begin with.
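As a concrete, entirely illustrative sketch of the "returning the result of an update" shape being discussed, assuming a repository abstraction and an immutable Employee with a "wither" method, neither of which comes from the original question:

```java
public final class EmployeeService {

    private final EmployeeRepository repository; // hypothetical persistence abstraction

    public EmployeeService(EmployeeRepository repository) {
        this.repository = repository;
    }

    // The single job is "apply this update"; handing back the resulting
    // immutable instance saves the caller a second round trip.
    public Employee updateSalary(String employeeId, long newSalary) {
        Employee current = repository.findById(employeeId);
        Employee updated = current.withSalary(newSalary); // assumed immutable copy-with-change method
        repository.save(updated);
        return updated;
    }
}
```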
{ "source": [ "https://softwareengineering.stackexchange.com/questions/323994", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/200887/" ] }
324,053
I don't know Python very well. I'm trying to understand more precisely what exact features of dynamic languages (à la Python, Lua, Scheme, Perl, Ruby, ...) force their implementations to be slow. As a case in point, Lua 5.3's metatable machinery would intuitively make Lua quite slow, but in practice Lua is rumored to be quite fast (and faster than Python). Also, I have the intuition (perhaps a wrong one) that since on current processors memory is much slower than raw computation (a memory access with a cache miss needs the same time as hundreds of arithmetic operations), dynamic type checking (à la if (value->type != INTEGER_TAG) return; in C parlance) could run quite fast. Of course, whole-program analysis (like the Stalin Scheme implementation does) can make a dynamic language implementation that works as a translator run fast, but let's pretend I don't have time to design a whole-program analyzer at first. (I'm sort of designing a dynamic language in my MELT monitor, and some of it would be translated to C.)
What semantic features of Python (and other dynamic languages) contribute to its slowness? None. Performance of language implementations is a function of money, resources, and PhD theses, not language features. Self is much more dynamic than Smalltalk and slightly more dynamic than Python, Ruby, ECMAScript, or Lua, and it had a VM that outperformed all existing Lisp and Smalltalk VMs (in fact, the Self distribution shipped with a small Smalltalk interpreter written in Self, and even that was faster than most existing Smalltalk VMs), and was competitive with, and sometimes even faster than, C++ implementations of the time. Then, Sun stopped funding Self, and IBM, Microsoft, Intel, and Co. started funding C++, and the trend reversed. The Self developers left Sun to start their own company, where they used the technology developed for the Self VM to build one of the fastest Smalltalk VMs ever (the Animorphic VM), and then Sun bought back that company, and a slightly modified version of that Smalltalk VM is now better known under the name of "HotSpot JVM". Ironically, Java programmers look down on dynamic languages for being "slow", when in fact, Java was slow until it adopted dynamic language technology. (Yes, that's right: the HotSpot JVM is essentially a Smalltalk VM. The bytecode verifier does a lot of type checking, but once the bytecode is accepted by the verifier, the VM, and especially the optimizer and the JIT, don't actually do much of interest with the static types!) CPython simply doesn't do a lot of the stuff that makes dynamic languages (or rather dynamic dispatch) fast: dynamic compilation (JIT), dynamic optimization, speculative inlining, adaptive optimization, dynamic de-optimization, dynamic type feedback / inference. There's also the problem that almost the entire core and standard library is written in C, which means that even if you make Python 100x faster all of a sudden, it won't help you much, because something like 95% of code executed by a Python program is C, not Python. If everything were written in Python, even moderate speedups would create an avalanche effect, where the algorithms get faster, and the core data structures get faster, but of course the core data structures are also used within the algorithms, and the core algorithms and core data structures are used everywhere else, and so on … There are a couple of things that are notoriously bad for memory-managed OO languages (dynamic or not) in today's systems. Virtual Memory and Memory Protection can be a killer for garbage collection performance in particular, and system performance in general. And it is completely unnecessary in a memory-safe language: why protect against illegal memory accesses when there aren't any memory accesses in the language to begin with? Azul have figured out how to use modern powerful MMUs (Intel Nehalem and newer, and AMD's equivalent) to help garbage collection instead of hindering it, but even though it is supported by the CPU, the current memory subsystems of mainstream OS's aren't powerful enough to allow this (which is why Azul's JVM actually runs virtualized on the bare metal beside the OS, not within it). In the Singularity OS project, Microsoft have measured an impact of ~30% on system performance when using MMU protection instead of the type system for process separation.
Another thing Azul noticed when building their specialized Java CPUs was that modern mainstream CPUs focus on the completely wrong thing when trying to reduce the cost of cache misses: they try to reduce the number of cache misses through such things as branch prediction, memory prefetching, and so on. But, in a heavily polymorphic OO program, the access patterns are basically pseudo-random, there simply is nothing to predict. So, all of those transistors are just wasted, and what one should do instead is reducing the cost of every individual cache miss. (The total cost is #misses * cost, mainstream tries to bring the first down, Azul the second.) Azul's Java Compute Accelerators could have 20000 concurrent cache misses in flight and still make progress. When Azul started, they thought they would take some off-the-shelf I/O components and design their own specialized CPU core, but what they actually ended up needing to do was the exact opposite: they took a rather standard off-the-shelf 3-address RISC core and designed their own memory controller, MMU, and cache subsystem. tl;dr : The "slowness" of Python is not a property of the language but a) its naive (primary) implementation, and b) the fact that modern CPUs and OSs are specifically designed to make C run fast, and the features they have for C are either not helping (cache) or even actively hurting (virtual memory) Python performance. And you can insert pretty much any memory-managed language with dynamic ad-hoc polymorphism here … when it comes to the challenges of an efficient implementation, even Python and Java are pretty much "the same language".
{ "source": [ "https://softwareengineering.stackexchange.com/questions/324053", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/40065/" ] }
324,082
I am a programmer with 1 year of experience. Recently I realized I seldom start a project correctly (most of my side projects); normally the project cycle goes like: Start with a few use cases. Start coding. Realize a few things I did not handle well and that do not fit well in the current codebase. Rewrite most of the code, and this might happen a few times. So my questions are: Is such a practice common, or does it imply I am not competent? How can I improve on this?
The cycle you describe is normal. The way to improve things is not to avoid this cycle, but to streamline it. The first step is to accept that: It's near impossible to know everything on day one of a project. Even if you do somehow know everything, by the time you've finished the project then something (the client's requirements, the market they're in, the tech you're working with, their customers' wishes) will have changed and made at least part of what you knew invalid or incorrect. Therefore, it's impossible to plan everything up front, and even if you could, following that plan would lead you to build something imperfect or obsolete. Knowing this, we integrate change into our planning. Let's look at your steps: Start with a few use-cases Start coding Realize a few things I did not handle well, and does not fit well in current codebase. Rewrite most part of code That's actually a great starting point. Here's how I'd approach it: 1. Start with a few use-cases Good. By saying "use cases", you're focusing on what the software is for . By saying "a few", you're not trying to discover everything; you're sticking to a manageable amount of work. All I'd add here is to prioritise them. With your client or end user, work out the answer to this question: What is the smallest, simplest piece of software I could give you that would improve your situation? This is your minimum viable product - anything smaller than this isn't helpful to your user, but anything bigger risks planning too much too soon. Get enough information to build this, then move on. Be mindful that you won't know everything at this point. 2. Start coding. Great. You get working as soon as possible. Until you've written code, your clients have received zero benefit. The more time you spend planning, the longer the client has spent waiting with no payback. Here, I'd add a reminder to write good code. Remember and follow the SOLID Principles , write decent unit tests around anything fragile or complex, make notes on anything you're likely to forget or that might cause problems later. You want to be structuring your code so that change won't cause problems. To do this, every time you make a decision to build something this way instead of that way, you structure your code so that as little code as possible is affected by that decision. In general, a good way to do this is to separate your code: use simple, discrete components (depending on your language and situation, this component might be a function, a class, an assembly, a module, a service, etc. You might also have a large component that is built out of smaller ones, like a class with lots of functions, or an assembly with lots of classes.) each component does one job, or jobs relating to one thing changes to the way one component does its internal workings should not cause other components to have to change components should be given things they use or depend on, rather than fetching or creating them components should give information to other components and ask them to do work, rather than fetching information and doing the work themselves components should not access, use, or depend upon the inner workings of other components - only use their publicly-accessible functions By doing this, you're isolating the effects of a change so that in most cases, you can fix a problem in one place, and the rest of your code doesn't notice. 3. Encounter issues or shortcomings in the design. This will happen. It is unavoidable. Accept this. 
When you hit one of these problems, decide what sort of problem it is. Some problems are issues in your code or design that make it hard to do what the software should do. For these problems, you need to go back and alter your design to fix the problem. Some problems are caused by not having enough information, or by having something that you didn't think of before. For these problems, you need to go back to your user or client, and ask them how they'd like to address the issue. When you have the answer, you then go and update your design to handle it. In both cases, you should be paying attention to what parts of your code had to change, and as you write more code, you should be thinking about which parts may have to change in the future. This makes it easier to work out what parts might be too interlinked, and what parts might need to be more isolated. 4. Rewrite part of the code Once you've identified how you need to change the code, you can go and make the change. If you've structured your code well, then this will usually involve changing only one component, but in some cases it might involve adding some components as well. If you find that you're having to change a lot of things in a lot of places, then think about why that is. Could you add a component that keeps all of this code inside itself, and then have all these places just use that component? If you can, do so, and next time you have to change this feature you'll be able to do it in one place. 5. Test A common cause of issues in software is not knowing the requirements well enough. This is often not the developers' fault - often, the user isn't sure what they need either. The easiest way to solve this is to reverse the question. Instead of asking "what do you need the software to do?", each time you go through these steps, give the user what you've built so far and ask them "I built this - does it do what you need?". If they say yes, then you've built something that solves their problem, and you can stop working! If they say no, then they'll be able to tell you in more specific terms what's wrong with your software, and you can go improve that specific thing and come back for more feedback. 6. Learn As you go through this cycle, pay attention to the problems you're finding and the changes you're making. Are there patterns? Can you improve? Some examples: If you keep finding you've overlooked a certain user's viewpoint, could you get that user to be more involved in the design phase? If you keep having to change things to be compatible with a technology, could you build something to interface between your code and that technology so you only have to change the interface? If the user keeps changing their mind about words, colours, pictures or other things in the UI, could you build a component that provides to the rest of the application those so that they're all in one place? If you find that a lot of your changes are in the same component, are you sure that component is sticking to just one job? Could you divide it into a few smaller pieces? Can you change this component without having to touch any others? Be Agile What you're moving towards here is a style of working known as Agile. Agile isn't a methodology, it's a family of methodologies incorporating a whole load of things (Scrum, XP, Kanban, to name a few) but the thing they all have in common is the idea that things change, and as software developers we should plan to adapt to changes rather than avoiding or ignoring them. 
Some of its core principles - in particular, the ones that are relevant to your situation - are the following: Don't plan further ahead than you can predict with confidence Make allowances for things to change as you go Rather than building something big in one go, build something small and then incrementally improve it Keep the end user involved in the process, and get prompt, regular feedback Examine your own work and progress, and learn from your mistakes
{ "source": [ "https://softwareengineering.stackexchange.com/questions/324082", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/134250/" ] }
324,151
I recently encountered a class which provides pretty much every single-character as a constant; everything from COMMA to BRACKET_OPEN . Wondering whether this was necessary; I read an "article" which suggests that it may be helpful to pull single-character literals into constants. So, I'm skeptical. The main appeal of using constants is they minimize maintenance when a change is needed. But when are we going to start using a different symbol than ',' to represent a comma? The only reason I see for using constants instead of literals is to make the code more readable. But is city + CharacterClass.COMMA + state (for example) really more readable than city + ',' + state ? For me the cons outweigh the pros, mainly that you introduce another class and another import. And I believe in less code where possible. So, I'm wondering what the general consensus is here.
Tautology: It is very clear, if you read the very first sentence of the question, that this question is not about appropriate uses like eliminating magic numbers; it is about terrible, mindless, foolish consistency at best, which is what this answer addresses. Common sense tells you that const char UPPER_CASE_A = 'A'; or const char A = 'A' does not add anything but maintenance and complexity to your system. const char STATUS_CODE.ARRIVED = 'A' is a different case. Constants are supposed to represent things that are immutable at runtime, but may need to be modified in the future at compile time. When would const char A = correctly equal anything other than A? If you see public static final char COLON = ':' in Java code, find whoever wrote that and break their keyboards. If the representation for COLON ever changes from :, you will have a maintenance nightmare. Obfuscation: What happens when someone changes it to COLON = '-' because the place where they are using it needs a - instead, which changes it everywhere? Are you going to write unit tests that basically say assertThat(':' == COLON) for every single const reference to make sure they do not get changed? Only to have someone fix the test when they change them? If someone actually argues that public static final String EMPTY_STRING = ""; is useful and beneficial, you have just qualified their knowledge and can safely ignore them on everything else. Having every printable character available in a named version just demonstrates that whoever did it is not qualified to be writing code unsupervised. Cohesion: It also artificially lowers cohesion, because it moves things away from the things that use them and are related to them. In computer programming, cohesion refers to the degree to which the elements of a module belong together. Thus, cohesion measures the strength of relationship between pieces of functionality within a given module. For example, in highly cohesive systems functionality is strongly related. Coupling: It also couples lots of unrelated classes together, because they all end up referencing files that are not really related to what they do. Tight coupling is when a group of classes are highly dependent on one another. This scenario arises when a class assumes too many responsibilities, or when one concern is spread over many classes rather than having its own class. If you used a better name like DELIMITER = ',' you would still have the same problem, because the name is generic and carries no semantics. Reassigning the value does no more to help with an impact analysis than searching and replacing the literal ','. Because what if some code uses it and needs the , while some other code uses it but now needs ; ? You still have to look at every use manually and change them. In the Wild: I recently refactored a 1,000,000+ LOC application that was 18 years old. It had things like public static final COMMA = SPACE + "," + SPACE;. That is in no way better than just inlining " , " where it is needed. If you want to argue readability, you need to learn how to configure your IDE to display whitespace characters where you can see them, or whatever; that is just an extremely lazy reason to introduce entropy into a system. It also had , defined multiple times, with multiple misspellings of the word COMMA, in multiple packages and classes, with references to all the variations intermixed together in the code. It was nothing short of a nightmare to try and fix something without breaking something completely unrelated.
The same went for the alphabet: there were multiple UPPER_CASE_A, A, UPPER_A, A_UPPER constants that most of the time were equal to A, but in some cases were not - for almost every character, but not all of them. And from the edit histories it did not appear that a single one of these was ever edited or changed over the 18 years, for what should now be an obvious reason: it would break way too many things in untraceable ways. Thus you get new variable names pointing to the same thing, which can never be changed for the same reason. In no sane reality can you argue that this practice does anything but start the codebase out at maximum entropy. I refactored all this mess out and inlined all the tautologies, and the new college hires were much more productive because they did not have to hunt down through multiple levels of indirection what these const references actually pointed to, since the constants were not reliable in what they were named versus what they contained.
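A small illustrative contrast between the tautological constants being criticized and a constant whose name actually carries domain meaning (the class and method names here are invented):

```java
public class StatusCodes {

    // Tautology: the name just restates the literal and can never meaningfully change.
    public static final char COMMA = ',';

    // Domain constant: the name documents intent, and the underlying character
    // could legitimately change without the call sites caring.
    public static final char STATUS_ARRIVED = 'A';

    public static String cityAndState(String city, String state) {
        return city + ',' + state;           // the literal is already perfectly clear
    }

    public static boolean hasArrived(char statusCode) {
        return statusCode == STATUS_ARRIVED; // the name, not the character, matters here
    }
}
```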
{ "source": [ "https://softwareengineering.stackexchange.com/questions/324151", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/236187/" ] }
324,649
I very often work on features of my project that I need to take a break from before they are good enough for a commit. However, I use two different computers daily to code (my laptop and my research lab desktop). E.g.: I am working on a feature at home, then I stop and go to my lab. I don't want to mix cloud syncing (e.g. Dropbox) with GitHub remote tracking. I've simply committed unfinished (and messy) states of my code before (and pushed them) only for the purpose of pulling them on the other computer to continue the work. I am pretty sure this is a bad practice. Today, though, I came across git stash after Googling a little bit. It seems like the perfect solution for what I need. However, the documentation doesn't say if it goes to GitHub once I push my changes. Besides that, I want to know if there's a more efficient way to accomplish the mobility that I need. Thanks in advance!
I've simply committed unfinished (and messy) states of my code before (and pushed it) only for the purpose of pulling that in the other computer to continue the work. I am pretty sure this is a bad practice. It's OK to commit messy unfinished work. Do your work in a topic branch. Commit early, and commit often. Read up on When to commit code? for some guidelines on when to make a commit. Specifically for Git, commit to a topic branch and push it as often as you want. If this topic branch is just meant for you, commit and push broken code. You should only refrain from pushing broken code to a branch that is used by other people. Feel free to break your own code.
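A typical round trip for this workflow might look like the following (the branch name is just an example):

```sh
# On the laptop: park unfinished work on a personal topic branch
git checkout -b wip/feature-x
git add -A
git commit -m "WIP: half-done feature, not ready for review"
git push -u origin wip/feature-x

# On the lab desktop: pick the work up again
git fetch origin
git checkout wip/feature-x
```

Once the feature is finished, the branch can be tidied up (rebased or squashed) before it is merged or opened for review.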
{ "source": [ "https://softwareengineering.stackexchange.com/questions/324649", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/226010/" ] }
324,730
MVC is pretty straightforward. There is a Model, a Controller and a View. When we create a website, it all comes together as 'client sends REST keyword request to server -> the server matches the requested URL to the controller action -> which then calls the model(s) for data gathering/processing, gets the result -> and returns the result back to the client as an HTML page (view)'. What if we are talking about a pure RESTful API web service? Then the flow would be something like 'client sends REST keyword request to server -> the server matches the requested URL to the controller action -> which then calls the model(s) for data gathering/processing, gets the result -> and returns the result back to the client as JSON'. Same as before, but there is no 'view'... or rather, the generated JSON can be thought of as a 'view'. In a sense, we are only utilizing the MC part of MVC. Is that how it should be done? Or are there any other, better-suited patterns for an API-only service instead of MVC?
MVC is a paradigm from the Smalltalk world concerned with how object-oriented systems could have UIs. Early web frameworks took the general idea (separate out business logic, controlling logic and view logic) and applied the principle to how they structured the web application. Before this it wasn't uncommon to have a mess of HTML generation code inside domain objects, or business logic inside HTML templates (think very early PHP). The thing is that the original MVC from the Smalltalk world isn't really what MVC is in most web frameworks. An HTML output isn't really a "view" in the sense that Smalltalk understood a UI screen to be. So that is the first reason not to get too hung up on whether you are following MVC properly. Hardly anything is. Take it less as a strict division and more as a guideline of "Hey, wouldn't it be nice if our HTML templates weren't full of business logic?" Secondly, MVC is just a way of structuring server-side code. It really has nothing to do with REST/HTTP. REST is concerned with how clients and servers communicate. It doesn't care whether the representation the server sends to the client is an HTML template that took a lot of work to construct with a templating engine, or a JSON object that was one call in the controller. If you don't think your server needs a "view" layer, that is fine. You will still gain benefit from separating your business logic (i.e. the model) from the controllers that handle a specific HTTP request, even if all the controller does is call a JSON serialization method on some object and return that data.
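For instance, a thin controller in a microframework like Flask might look like the following sketch, where the hypothetical tournaments module plays the role of the model and the "view" is nothing more than JSON serialization:

```python
from flask import Flask, abort, jsonify

import tournaments  # hypothetical model module holding all the business logic

app = Flask(__name__)

@app.route("/tournaments/<tournament_id>")
def get_tournament(tournament_id):
    data = tournaments.find(tournament_id)  # controller delegates to the model
    if data is None:
        abort(404)
    return jsonify(data)                    # the "view" is just serialization to JSON

if __name__ == "__main__":
    app.run()
```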
{ "source": [ "https://softwareengineering.stackexchange.com/questions/324730", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/236770/" ] }
324,797
I read about this snafu: Programming bug costs Citigroup $7m after legit transactions mistaken for test data for 15 years . When the system was introduced in the mid-1990s, the program code filtered out any transactions that were given three-digit branch codes from 089 to 100 and used those prefixes for testing purposes. But in 1998, the company started using alphanumeric branch codes as it expanded its business. Among them were the codes 10B, 10C and so on, which the system treated as being within the excluded range, and so their transactions were removed from any reports sent to the SEC. (I think this illustrates that using a non-explicit data indicator is ... sub-optimal. It would have been much better to populate and use a semantically explicit Branch.IsLive property.) That aside, my first reaction was "Unit tests would have helped here"... but would they? I recently read Why most unit testing is waste with interest, and so my question is: what would the unit tests that would have failed on the introduction of alphanumeric branch codes look like?
Unit tests could have caught that the branch codes 10B and 10C were incorrectly classified as "testing branches", but I find it unlikely that the tests for that branch classification would have been extensive enough to catch that error. On the other hand, spot checks of the generated reports could have revealed that branches 10B and 10C were consistently missing from the reports much sooner than the 15 years that the bug was allowed to remain present. Finally, this is a good illustration of why it is a bad idea to mix testing data with real production data in one database. If they had used a separate database containing the testing data, there would not have been a need to filter it out of the official reports, and it would have been impossible to filter out too much.
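Such a classification test might have looked roughly like this (the branch_filter module and is_test_branch function are hypothetical stand-ins, since the real system's filtering logic isn't public):

```python
import unittest

from branch_filter import is_test_branch  # hypothetical classifier under test

class TestBranchClassification(unittest.TestCase):

    def test_numeric_test_range_is_filtered(self):
        self.assertTrue(is_test_branch("089"))
        self.assertTrue(is_test_branch("100"))

    def test_alphanumeric_branches_are_live(self):
        # These are the cases that went unnoticed for 15 years.
        self.assertFalse(is_test_branch("10B"))
        self.assertFalse(is_test_branch("10C"))

if __name__ == "__main__":
    unittest.main()
```

Of course, this test would only exist if someone had anticipated alphanumeric codes in the first place, which is exactly the point above about how extensive the tests would have needed to be.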
{ "source": [ "https://softwareengineering.stackexchange.com/questions/324797", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/34982/" ] }
325,104
We keep being told we are going to be working in an agile way on a new project by senior management. They have set up stand-ups, sprint planning, retrospectives etc. etc. However, they have now come up with a plan detailing all work they want us to deliver with dates against each element and showcase dates again with what will be demoed in each one. This plan goes out to Q2 2017. To me this seems like Waterfall in the worst sense, a plan with no input from the technical team has been drawn up where certain stories on the plan are very unclear and none have been estimated by the dev team. However, I know their argument will be "senior stakeholders have to have dates and there has to be a plan, we can't just work from a backlog." To me this seems senior stakeholders have not bought into Agile and therefore we are doomed to fail implementing it at a lower level. Is this a fair judgement or am I overreacting to this plan!?
There's a difference between meeting the deadline and fulfilling all requirements. It's like the old adage "fast, good or cheap, pick two". So here you have fixed dates for delivery - that's good; it means you are time-boxed, in that what you deliver at the end of your last sprint will be the final product. You remember that you always have to release working software at the end of each and every sprint, don't you? What may happen is that the final software will be missing some features. Well, this happens with all development methodologies, waterfall included. All that will happen is that you'll be tasked with producing a patch release afterwards, or a version 2. That assumes your final delivery is good enough, of course! So fixed dates are not a non-agile way of working. Agile doesn't mean there's an unlimited budget for you to play with your new planning tools. It does mean you'll have to focus on delivery, and that's never a bad thing.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/325104", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/174177/" ] }
325,601
I wondered whether a while loop is intrinsically a recursion? I think it is because a while loop can be seen as a function that calls itself at the end. If it is not recursion, then what is the difference?
Loops are very much not recursion. In fact, they are the prime example of the opposite mechanism: iteration. The point of recursion is that one element of processing calls another instance of itself. The loop control machinery merely jumps back to the point where it started. Jumping around in code and calling another block of code are different operations. For instance, when you jump to the start of the loop, the loop control variable still has the same value it had before the jump. But if you call another instance of the routine you're in, then the new instance has new, unrelated copies of all of its variables. Effectively, one variable can have one value on the first level of processing and another value on a lower level. This capability is crucial for many recursive algorithms to work, and this is why you can't emulate recursion via iteration without also managing a stack of call frames which keeps track of all those values.
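A tiny Python illustration of the difference: the loop reuses one binding of n, while each recursive call gets its own independent copy on the call stack (the example itself is illustrative, not from the original answer):

```python
def countdown_loop(n):
    # Iteration: one frame, one variable that is overwritten on every pass.
    while n > 0:
        n -= 1
    return n

def countdown_recursive(n):
    # Recursion: each call has its own n; the caller's n is untouched.
    if n <= 0:
        return n
    return countdown_recursive(n - 1)
```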
{ "source": [ "https://softwareengineering.stackexchange.com/questions/325601", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/238424/" ] }
325,806
I want to expose a resource on the web. I want to protect this resource: to make sure it is only accessible to certain individuals. I could set up some kind of password-based authentication . For example, I could only allow access to the resource through a web server that checks incoming requests for correct credentials (perhaps against some backing database of users) before serving up the file. Alternately I could just use a private URL . That is, I could simply host the resource at some impossible-to-guess path, for example https://example.com/23idcojj20ijf... , which restricts access to those who know the exact string. From the perspective of an evildoer who wants to access this resource, are these approaches equivalent? If not, what makes them different? And as far as maintainability, are there pros and cons to either approach that I should be aware of before implementing one or the other?
A private URL is somewhat weaker than authentication with credentials, even if the bit size of the URL is the same as that of the credentials. The reason is the URL may more easily "leak". It is cached in the browser, logged on the server and so on. If you have outbound links, the private URL may show up in the referrer header on other sites. (It can also be seen by people looking over your shoulder.) If it leaks (by accident or due to carelessness by the user), it may end up being public and even indexed by Google, which would allow an attacker to easily search for all leaked URLs to your site! For this reason, private URLs are typically used only for one-shot operations like password resets, and typically they are only active for a limited time. There is a related thread over at Information security: Are random URLs a safe way to protect profile photos ? - one answer shares this story: Dropbox disables old shared links after tax returns end up on Google . So it is not just a theoretical risk.
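As a rough sketch of the "one-shot, time-limited" usage described above, an unguessable link might be generated and redeemed like this (the in-memory storage and all names are illustrative only):

```python
import secrets
import time

TOKEN_TTL_SECONDS = 3600
_active_tokens = {}  # token -> expiry timestamp (a real system would persist this)

def issue_private_link():
    token = secrets.token_urlsafe(32)  # unguessable path segment
    _active_tokens[token] = time.time() + TOKEN_TTL_SECONDS
    return f"https://example.com/reset/{token}"

def redeem(token):
    expires = _active_tokens.pop(token, 0)  # one-shot: the token is consumed on use
    return time.time() < expires
```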
{ "source": [ "https://softwareengineering.stackexchange.com/questions/325806", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/92609/" ] }
325,807
I am writing a program in Python, which basically manipulates strings, and I was wondering whether I should do it using OOP principles or not. The client did tell me he doesn't care about the code, he just wants the thing done. I know that object-oriented code is not by definition cleaner, and conversely non-OO code is not by definition crappy. The question I'm asking might be more or less opinion-based, but there might be some rules that I'm not aware of. Some more info about what's to be done: (1) parse a .csv file and process the data based on a config file (the columns may differ, both in number and in the data they hold); (2) use the processed data to create new custom-formatted data (or multiple files, based on some of the values); (3) use that formatted data to create an XML file; (4) split the XML file into multiple XMLs based on their content; (5) the application should be CLI-based. There are of course other things, like logging some events, parsing CLI arguments, and so on. Now, this isn't at all a big/hard application, and it's also almost finished, but during the whole development process I kept asking myself whether this should be done using OOP or not. So, my question would be: how do you guys know/decide when to use OOP in an application?
Python is a multi-paradigm language which means you can choose the paradigm most appropriate for the task. Some languages like Java are single-paradigm OO which means you will get headaches if you try to use any other paradigm. Posters saying "always use OO" are probably coming from a background in such a language. But fortunately you have a choice! I note your program is a CLI app which reads some input (csv and config files) and produces some output (xml files), but is not interactive and hence does not have a stateful GUI or API. Such a program is naturally expressed as a function from input to output, which delegates to other functions for subtasks. OO on the other hand is about encapsulating mutable state and is therefore more appropriate for interactive applications, GUIs, and APIs exposing mutable state. It is no coincidence that OO was developed in parallel with the first GUIs. OO has another advantage in that polymorphism allows a more loosely coupled architecture, where different implementations of the same interface can be easily substituted. Combined with dependency injection this can allow configuration-based loading of dependencies and other cool stuff. This is mostly appropriate for very large applications though. For a program the size of what you describe, it would be far too much overhead with no apparent benefit. Apart from the functions actually reading and writing the files, the bulk of your logic can be written as side-effect-free functions which take some input and return some other output. This is eminently easy to test, much simpler than testing OO units where you need to mock dependencies and whatnot. Bottom line: I suggest a bunch of functions split into modules for organization, but no objects.
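A rough sketch of that "functions split into modules" shape, in Python; the module layout, config format, and tag names here are assumptions made up for the example, not part of the question.

# parser.py
import csv

def read_rows(csv_path, config):
    with open(csv_path, newline="") as f:
        return [normalize(row, config) for row in csv.DictReader(f)]

def normalize(row, config):
    # pure function: row + config in, cleaned dict out, no shared state
    wanted = config["columns"]  # e.g. {"Name": "name", "Qty": "quantity"}
    return {wanted[k]: v.strip() for k, v in row.items() if k in wanted}

# xml_out.py
import xml.etree.ElementTree as ET

def to_xml(rows, root_tag="records"):
    root = ET.Element(root_tag)
    for row in rows:
        record = ET.SubElement(root, "record")
        for key, value in row.items():
            ET.SubElement(record, key).text = value
    return ET.tostring(root, encoding="unicode")

A thin main.py (argument parsing, logging, file writing) can then glue these pure functions together, which keeps the testable logic free of I/O.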
{ "source": [ "https://softwareengineering.stackexchange.com/questions/325807", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/217868/" ] }
325,845
if   i>0 :  return sqrt(i)
elif i==0:  return 0
else     :  return 1j * sqrt(-i)

VS

if i>0:
    return sqrt(i)
elif i==0:
    return 0
else:
    return 1j * sqrt(-i)

Given the above examples, I don't understand why I virtually never see the first style in code bases. To me, you turn the code into a tabular format that shows clearly what you want. The first column can virtually be ignored. The second column identifies the condition and the third column gives you the output you want. It seems, at least to me, straightforward and easy to read. Yet I always see this simple kind of case/switch situation come out in the extended, tab-indented format. Why is that? Do people find the second format more readable? The only case where this could be problematic is if the code changes and gets longer. In that case, I think it's perfectly reasonable to refactor the code into the long, indented format. Does everyone do it the second way simply because that's the way it has always been done? Playing devil's advocate, I guess another reason might be that people find using two different formats, depending on the complexity of the if/else statements, confusing? Any insight would be appreciated.
One reason may be that you're not using languages where it's popular. A few counter-examples: Haskell with guards and with patterns: sign x | x > 0 = 1 | x == 0 = 0 | x < 0 = -1 take 0 _ = [] take _ [] = [] take n (x:xs) = x : take (n-1) xs Erlang with patterns: insert(X,Set) -> case lists:member(X,Set) of true -> Set; false -> [X|Set] end. Emacs lisp: (pcase (get-return-code x) (`success (message "Done!")) (`would-block (message "Sorry, can't do it now")) (`read-only (message "The shmliblick is read-only")) (`access-denied (message "You do not have the needed rights")) (code (message "Unknown return code %S" code))) Generally I see the table format is pretty popular with functional languages (and in general expression based ones), while breaking the lines is most popular in others (mostly statement based).
{ "source": [ "https://softwareengineering.stackexchange.com/questions/325845", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/181720/" ] }
325,854
I'm currently at the design stage of a component-based architecture in C++. My current design includes the usage of features such as std::vectors of std::shared_ptrs to hold the components, std::dynamic_pointer_cast, and std::unordered_map<std::string,[yada]>. Components will represent data and logic of various items that are needed in game-like software, such as Graphics, Physics, AI, Audio, etc. I've read all over the place that cache misses are hard on performance, so I ran some tests, which led me to believe that, indeed, they can slow down an application. I haven't been able to test the aforementioned language features, but it is said in many places that these tend to cost a lot and should be avoided if possible. Since I'm at the design stage of the architecture, and these will be included in the core of the design, should I try to find ways to avoid them now, since it's going to be very hard to change them later if there are performance issues? Or am I just caught up in doing premature optimization?
Without reading anything but the title: Yes. After reading the text: Yes. Though it is true that maps and shared pointers etc. do not perform well cache-wise, you will most certainly find that what you want to use them for — as far as I understand — is not the bottleneck and will not be held in or use cache efficiently regardless of the data structure. Write the software avoiding the most stupid mistakes, then test, then find the bottlenecks, then optimize! Fwiw: https://xkcd.com/1691/
{ "source": [ "https://softwareengineering.stackexchange.com/questions/325854", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/173906/" ] }
325,889
I have a C++ class whose constructor does some operations. Some of these operations may fail. I know that constructors do not return anything. My questions are: (1) Is it allowed to do some operations other than initializing members in a constructor? (2) Is it possible to tell the calling function that some operations in the constructor have failed? (3) Can I make new ClassName() return NULL if some errors occur in the constructor?
(1) Yes, though some coding standards may prohibit it. (2) Yes. The recommended way is to throw an exception. Alternatively, you can store the error information inside the object and provide methods to access this information. (3) No.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/325889", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/238676/" ] }
326,255
I'm about to start creating a music project website for a friend. It should be pretty simple for now: no dynamic content (tour dates, etc.), and nothing more than a few embedded sample songs or SoundCloud links. I'm not expecting to use anything more than vanilla JavaScript and Bootstrap or Foundation for a responsive grid. Is this enough however? Can I simply upload HTML, CSS, and JS files to a host and be done with it, or should I take the time to program a backend server in Node or PHP?
If you don't know whether you need server-side code, you probably don't.* (*Caveat: Server-side code is essential for security, when you want to internally control access to content, data, or functionality. It does not necessarily need to be your server; see the last paragraph.) Ask yourself what problem using server-side technologies would solve. If you can't think of any (and in your case, I can't either) then you don't need them. Be aware that a lot more than you might think is possible using just client-side code. JavaScript frameworks like AngularJS or ReactJS can let you integrate with third-party, dynamic content through APIs using Ajax. (This includes hooking into an API that could handle its own security.)
{ "source": [ "https://softwareengineering.stackexchange.com/questions/326255", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/239375/" ] }
326,485
So I am new to agile, but not test-driven development. My professors in college were all about the idea of tests, then code, then tests. I am not sure I understand why. From my perspective it is a lot of upfront cost that will most likely be changed as your code evolves. This is how I imagine TDD and why it confuses me. If I were to build a house as a TDD contractor: (1) Give me all of your specs (stories). (2) Get approval on the specs. (3) Break down all the specs into the inspections I think I will need (see into the future). (4) Call an inspector to look at those points and tell me right now that I am failing the inspection (gee, thanks). (5) Start building the house. (6) Call the inspector back out daily (passing 2/100). (7) Oh shoot, there was an issue with my understanding and now I need to add 9 more inspections and change 27 of them. (8) Call the inspector: passing 1/109. Damn it. Why doesn't the inspector like this one... oh, I updated that method name... (9) Build some more. (10) UGGGGHHHH, MORE CHANGES, let me update the damn inspector. Oh, I am failing, no s**t. Am I done yet? Okay, that may be outlandish, but I just do not see how I should know all my methods and how things will work until my code is there. 99% of the time I have to go back and update a unit test anyway and add more as I go. It just seems backwards. What seems more appropriate is DDT, or development-driven testing, which, it seems, the community has all but forgotten about. From my understanding, DDT for a house would look like: (1) Give me all of your specs (stories). (2) Get approval on the specs and break them out. (3) Start a unit (the foundation). (4) Take notes (comments) on some tricky logic. (5) At the end, before beginning the next unit, have the inspection (create a test). (6) Fix any issues found and inspect again. (7) Once this unit is approved, move on to the next. If we are all being honest, doesn't that sound more human and centered on the developer and the business? It seems like changes can be made faster and without the overhead TDD seems to create.
One of the benefits of a TDD approach is only realised when you also do emergent design. So in your first analogy, you wouldn't write 100 tests, as there's no possible way that you'll know what your software will look like. You write one test. You run it. It fails. You write the smallest unit of code to make your test pass. Then you run your test again. It passes. Now write the next test, repeating the process above. This might seem like a wasteful approach at the very beginning, when it's obvious what your code is meant to do, but the great thing with this approach is your test coverage is always high, and the code design is cleaner this way. As a method, it goes hand-in-hand with pair programming; one partner writes the test, the other writes the code to make it pass, then writes the next test, and so on. I even use this approach when writing a new class; the first test is a call to invoke the class constructor. But I haven't yet written the class, so it fails. Next I write the simple, empty class, and my first test passes. Once you get into the mindset, it is very difficult to not be in it and code the "old fashioned" way. I'd recommend working in a good Agile environment to learn this, or reading a few good Agile books (Clean Code and Clean Coder are both very good) to better understand.
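A small sketch of the first red-green cycle described above, using Python's unittest; the Parser class and the behaviour being tested are invented purely for illustration.

import unittest

class TestParser(unittest.TestCase):
    # Step 1: this test was written first; the initial run failed (red)
    # because the Parser class below did not exist yet.
    def test_can_construct(self):
        self.assertIsNotNone(Parser())

    # Step 3: the next test, written only after the first one passed.
    def test_empty_input_gives_empty_result(self):
        self.assertEqual(Parser().parse(""), [])

# Step 2: the smallest code that makes the current tests pass (green).
# It is deliberately minimal; the next test is what forces more behaviour.
class Parser:
    def parse(self, text):
        return []

if __name__ == "__main__":
    unittest.main()

Each new test drives a small, verified increment of the design, which is the "emergent design" the answer refers to.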
{ "source": [ "https://softwareengineering.stackexchange.com/questions/326485", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/238259/" ] }
327,806
Title says it all. Can't seem to find what is most common out in the world. I'm brand new to programming. If the answer is "it depends", I would love to know what it depends on. Here to learn.
Stick to the conventions of the language and the framework - in your case, React.js seems to follow the JavaScript conventions, so there is no conflict: camelCase for variables and functions, PascalCase for types (classes), and UPPERCASE_SNAKE_CASE for constants.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/327806", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/241275/" ] }
327,994
I am writing user documentation (an SOP) that involves third-party programs that I am trying to describe well. One such program is a server that offers little indication of its startup besides a graphic that is shown during its initialization/startup routine. As a developer, I have used this window as a quick status indicator and I would like to convey this to my audience (operators/engineers), but I have no idea what it is called. My first question is whether there is a formal or widely accepted name for a graphic shown at startup (examples below). Second, what is a preferable way to refer to this that will convey the idea quickly (and without graphics) to my audience? [Example startup graphics omitted.]
Those are usually called Splash Screens . A splash screen is a graphical control element consisting of window containing an image, a logo and the current version of the software. A splash screen usually appears while a game or program is launching. Splash screens are typically used by particularly large applications to notify the user that the program is in the process of loading. They provide feedback that a lengthy process is underway. Occasionally, a progress bar within the splash screen indicates the loading progress. A splash screen disappears when the application's main window appears. Source https://en.wikipedia.org/wiki/Splash_screen It seems that another term, loading screen , is only used in the case of video games. In the case of a whole operating system, they're called bootsplash or bootscreen . EDIT: Whence the term " splash screen "? Here's a question about the etymology in our sister site english.stackexchange.com.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/327994", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/235286/" ] }
328,055
Why is this OK and mostly expected: abstract type Shape { abstract number Area(); } concrete type Triangle : Shape { concrete number Area() { //... } } ...while this is not OK and nobody complains: concrete type Name : string { } concrete type Index : int { } concrete type Quantity : int { } My motivation is maximising the use of type system for compile-time correctness verification. PS: yes, I have read this and wrapping is a hacky work-around.
I assume you are thinking of languages like Java and C#? In those languages primitives (like int) are basically a compromise for performance. They don't support all the features of objects, but they are faster and have less overhead. In order for objects to support inheritance, each instance needs to "know" at runtime which class it is an instance of. Otherwise overridden methods cannot be resolved at runtime. For objects this means instance data is stored in memory along with a pointer to the class object. If such info also had to be stored along with primitive values, the memory requirements would balloon. A 16-bit integer value would require its 16 bits for the value and an additional 32 or 64 bits of memory for a pointer to its class. Apart from the memory overhead, you would also expect to be able to override common operations on primitives, like arithmetic operators. Without subtyping, operators like + can be compiled down to a simple machine code instruction. If they could be overridden, you would need to resolve methods at runtime, a much more costly operation. (You may know that C# supports operator overloading - but this is not the same. Operator overloading is resolved at compile time, so there is no runtime penalty by default.) Strings are not primitives, but they are still "special" in how they are represented in memory. For example, they are "interned", which means two string literals which are equal can be optimized to the same reference. This would not be possible (or at least a lot less effective) if string instances also had to keep track of their class. What you describe would certainly be useful, but supporting it would require a performance overhead for every use of primitives and strings, even when they don't take advantage of inheritance. The language Smalltalk does (I believe) allow subclassing of integers. But when Java was designed, Smalltalk was considered too slow, and the overhead of having everything be an object was considered one of the main reasons. Java sacrificed some elegance and conceptual purity to get better performance.
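For contrast, a quick Python illustration (hedged: Python is not one of the languages the question asks about): because Python takes the everything-is-an-object route, the kind of subtyping the question wants simply works, at the cost the answer describes - every value carries a class pointer and method resolution happens at runtime.

class Quantity(int):
    pass

class Name(str):
    pass

q = Quantity(3)
print(q + 4)               # 7 - arithmetic still works
print(type(q).__name__)    # Quantity
print(isinstance(q, int))  # True

This is essentially the Smalltalk-style trade-off mentioned at the end of the answer: more expressive typing of "primitive" values in exchange for per-value overhead.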
{ "source": [ "https://softwareengineering.stackexchange.com/questions/328055", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/13154/" ] }
328,203
In this MSDN article , the following example code is provided (slightly edited for brevity): public async Task<ActionResult> Details(int? id) { if (id == null) { return new HttpStatusCodeResult(HttpStatusCode.BadRequest); } Department department = await db.Departments.FindAsync(id); if (department == null) { return HttpNotFound(); } return View(department); } The FindAsync method retrieves a Department object by its ID, and returns a Task<Department> . Then the department is immediately checked to see if it is null. As I understand it, asking for the Task's value in this manner will block code execution until the value from the awaited method is returned, effectively making this a synchronous call. Why would you ever do this? Wouldn't it be simpler to just call the synchronous method Find(id) , if you're going to immediately block anyway?
As I understand it, asking for the Task's value in this manner will block code execution until the value from the awaited method is returned, effectively making this a synchronous call. Not quite. When you call await db.Departments.FindAsync(id) the task is sent off and the current thread is returned to the pool for use by other operations. The flow of execution is blocked (as it would be regardless of using department right after, if I understand things correctly), but the thread itself is free to be used by other things while you wait for the operation to be completed off-machine (and signalled by an event or completion port). If you called d.Departments.Find(id) then the thread does sit there and wait for the response, even though most of the processing is being done on the DB. You're effectively freeing up CPU resources when disk bound.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/328203", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/1204/" ] }
328,458
In my databases, I tend to get into the habit of having an auto-incrementing integer primary key with the name id for every table I make so that I have a unique lookup for any particular row. Is this considered a bad idea? Are there any drawbacks to doing it this way? Sometimes I'll have multiple indices like id, profile_id, subscriptions where id is the unique identifier, profile_id links to the foreign id of a Profile table, etc. Or are there scenarios where you don't want to add such a field?
It's never a bad idea to have a guaranteed unique row identifier. I guess I shouldn't say never – but let's go with the overwhelming majority of the time it's a good idea. Theoretical potential downsides include an extra index to maintain and extra storage space used. That's never been enough of a reason to me to not use one.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/328458", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/222021/" ] }
328,512
One of the most basic and widely accepted principles of software development is DRY (don't repeat yourself). It is also clear that most software projects require some kind of management. Now what are the tasks that are easy to manage (estimate, schedule, control)? Right, repetitive tasks, exactly the tasks that should be avoided according to DRY. So from a project management perspective, it is great to solve a task by copying some existing code 100 times and make some minor adaptations to each copy, as required. At all times, you know exactly how much work you have done and how much is left. All managers will love you. If instead, you apply the DRY principle and try to find an abstraction that more or less eliminates the duplicate code, things are different. Usually there are many possibilities, you have to make decisions, do research, be creative. You might come up with a better solution in shorter time, but you might also fail. Most of the time, you cannot really say how much work is left. You are a project manager's worst nightmare. Of course I am exaggerating, but there is obviously a dilemma. My questions are: What are criteria to decide if a developer is overdoing DRY? How can we find a good compromise? Or is there a way to completely overcome this dilemma, not just by finding a compromise? Note: This question is based on the same idea as my previous one, Amount of routine work in software development and its effect on estimation , but I think it makes my point clearer, so sorry for repeating myself :).
You seem to assume the primary objective of project management is to produce exact estimates. This is not the case. The primary objective of project management is the same as for developers: To deliver value for the product owner. A product using a lot of slow manual processes rather than automation might in theory be easier to estimate (although I doubt it), but it does not provide value-for-money to the customer, so it is simply bad project management. There is no dilemma. It is well known that estimation of software projects is hard, and numerous books have been written and various processes have been developed to manage it. If the only objective of the PM was to produce exact estimates, then it would be easy. Just pad the estimates to 10X, and let the developers play games for the rest if they finish early. This would actually be better than your suggestion to use copy-paste busywork to pad the time, since playing games will not reduce the maintainability of the product. But in reality, the product owner want useful estimates and a quality product delivered as quickly and cheaply as possible. These are the actual constraints a PM will have to navigate. In any case, I dispute your assumption that repetitive manual work is more predictable than automated. All experience shows that repetitive manual work is more prone to errors. And what if a bug is discovered in the copy-pasted code? Suddenly the cost of fixing a bug is multiplied with the amount of repetition, which makes the uncertainty explode.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/328512", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/217956/" ] }
328,571
When following Domain-driven design (DDD), is it correct for a root aggregate to hold a reference to an internal entity that happens to be the root entity on a separate aggregate? I believe this is not correct, mainly because of this rule on the blue book : Nothing outside the AGGREGATE boundary can hold a reference to anything inside, except to the root ENTITY. The root ENTITY can hand references to the internal ENTITIES to other objects, but those objects can use them only transiently, and they may not hold on to the reference. The root may hand a copy of a VALUE OBJECT to another object, and it doesn't matter what happens to it, because it's just a VALUE and no longer will have any association with the AGGREGATE. If a root aggregate holds a reference to another root aggregate the boundary of the former is violated and the whole concept of an aggregate is corrupted, so I believe if a root aggregate looks like needing to hold a reference to another root aggregate, then I need to create a different entity, that will probably share some of the same members as the other root entity, but will not have a global identity, as this other rule in the book states: Root ENTITIES have global identity. ENTITIES inside the boundary have local identity, unique only within the AGGREGATE. I believe this would be the correct way to go, but since it feels repetitive and redundant (when taken off the context of DDD, with pure OOP) I am asking for some feedback.
You might be overinterpreting the book. It basically says : anything outside an Aggregate cannot hold a reference to anything inside it except the root. Therefore, holding a reference to a root is legit. Holding a reference to a root doesn't mean it's part of your own aggregate and that you can control its invariants. It keeps its own invariants and autonomy. However, A commonly accepted good practice is to refer to an AR by storing its ID, not a full reference. More modern approaches to aggregate design (see the Red Book ) advocate a cleaner separation between Aggregates. A business transaction should only change the state of a single Aggregate. Under this assumption, the need to store a reference to another Aggregate tends to disappear because you're not going to modify 2 aggregates at the same time. is it correct for a root aggregate to hold a reference to an internal entity that happens to be the root entity on a separate aggregate? This never happens. A Value Object can be part of multiple Aggregates, but not an Entity. The reason is, nothing would then prevent you from sharing the same entity instance between Aggregates. Let's say that entity instance E belongs to both aggregate instances A and B. Since the premise of DDD is that the Aggregate is the entry point, you would be able to load A, modify entity E through it, all the while silently violating invariants from B (that you didn't load). See the answer from Greg Young here : http://domain-driven-design.3010926.n2.nabble.com/Can-an-Entity-be-Shared-across-many-Aggregates-td7579277.html
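A minimal sketch of "refer to another aggregate root by identity, not by object reference", in Python; the Order/Customer names and shapes are invented for the example and are not from the question.

from dataclasses import dataclass, field
from typing import List, Optional
from uuid import UUID, uuid4

@dataclass
class Customer:                      # aggregate root #1
    id: UUID = field(default_factory=uuid4)
    name: str = ""

@dataclass
class OrderLine:                     # internal entity, local identity only
    sku: str = ""
    quantity: int = 1

@dataclass
class Order:                         # aggregate root #2
    id: UUID = field(default_factory=uuid4)
    customer_id: Optional[UUID] = None           # identity only - no Customer object held
    lines: List[OrderLine] = field(default_factory=list)

A use case that needs the customer loads it through its own repository, in its own transaction, so each aggregate keeps enforcing its own invariants - the "one aggregate per business transaction" guideline mentioned above.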
{ "source": [ "https://softwareengineering.stackexchange.com/questions/328571", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/160885/" ] }
328,812
I've been working with javascript for the past 4 years. I'm very confident about my problem solving skills and I can see that my code quality is improving. I try to stay up to date with the community and I'm currently working with ES2015 and React.js. However, I feel like I can't grasp programming design patterns at all. I know where to find resources about this and I have already read books about it. I rely on my senior coworkers to make decisions about the project structure but I have no problem working on it. Whenever I need to start something on my own I look for these two paths: If I'm using a big library/framework like React.js, I tend to copy what the community is doing; If I'm on something smaller I will be using the module pattern. I know that once I get a better understanding on this subject I will be able to make better decisions, but for now I'm completely lost. Should I look for superior education on this? Do I need a mentor on this subject? Am I just stupid? Is this really that hard to understand?
Software Design patterns are well-known solutions to well-known problems. The way you understand them is by learning the patterns, understanding how they work, and knowing when it is appropriate to apply each one to your software design. The way you learn software design patterns is by studying them, one at a time. It's a continuous education process. If you want to reduce the learning footprint, study those patterns that relate directly to the technologies that you are currently using. Some important things to know about design patterns: Some design patterns are architectural in nature. MVC and MVVM are examples of such patterns. You use such patterns when you need the organizational and structural benefits that they provide. Some design patterns are workarounds for deficiencies in programming languages. You won't need these patterns if you use a more expressive programming language, but often you don't get to make this choice. A majority of the GoF patterns are in this category . Use a software pattern only when you're trying to solve the problem that the pattern is specifically designed to solve. If you're writing an application by stitching together software patterns, you're doing it wrong. There isn't an existing software pattern for every computing problem in existence. Were that the case, programming would merely be a pattern-matching exercise. Some patterns are actually anti-patterns. The additional complexity that these patterns introduce outweighs the benefits that they provide. You'll have to decide for yourself, on a pattern-by-pattern basis, which of these patterns you'll avoid.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/328812", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/242855/" ] }
328,820
If we have Table A that has a one to one relationship with Table B, does it ever make sense to keep them apart? Or does it never hurt to combine them into a single table? Do either of these scenarios (two tables vs one combined table) impact anything with respect to its normal form (1NF, 2NF, 3NF, etc)?
Yes, there are tons of reasons why this may be the better design. You may have an inheritance/extension relationship, e.g. you might have a User table and then an Administrator table which has more fields. Both tables may have a primary key of User ID (and therefore have a 1:1 relationship) but not all users will have a record in the Administrator table. You would need something similar if you are supporting a workflow, e.g. a ScheduledTask table and a CompletedTask table. You may want to have a lightweight table (User) for commonly-used data and then a larger table (UserDetails) for details you don't need very often. This can improve performance because you'll be able to fit more records into a single data page. You may want different permissions on the tables, e.g. User and UserCredentials. You may want different backup strategies and therefore put the two tables on different partitions, e.g. Transaction and TransactionArchive. You may need more columns than can be supported in a single table, e.g. if there are a lot of large text columns that you need to be able to index and your DB platform is limited to 4K data pages or what have you.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/328820", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/206468/" ] }
329,098
Uncle Bob's chapter on names in Clean Code recommends that you avoid encodings in names, mainly regarding Hungarian notation. He also specifically mentions removing the I prefix from interfaces, but doesn't show examples of this. Let's assume the following: Interface usage is mainly to achieve testability through dependency injection In many cases, this leads to having a single interface with a single implementer So, for example, what should these two be named? Parser and ConcreteParser ? Parser and ParserImplementation ? public interface IParser { string Parse(string content); string Parse(FileInfo path); } public class Parser : IParser { // Implementations } Or should I ignore this suggestion in single-implementation cases like this?
Whilst many, including "Uncle Bob", advise not to use I as a prefix for interfaces, doing so is a well-established tradition with C#. In general terms, it should be avoided. But if you are writing C#, you really should follow that language's conventions and use it. Not doing so will cause huge confusion with anyone else familiar with C# who tries to read your code.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/329098", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2018/" ] }
329,229
Say that I have a REST endpoint that takes an integer as a parameter: /makeWaffles?numberOfWaffles=3 In this case, I want the number to be positive because I can't make a negative number of waffles (and requesting 0 waffles is a waste of time). So I want to reject any request that does not contain a positive integer. I also want to reject a request that exceeds some maximum integer (let's say for now that it's MAX_INTEGER). In the event that someone requests a non-positive number of waffles, should I return an HTTP 400 (Bad Request) status? My initial thought is yes: it is not a valid number for me to complete the request. However, the RFC doesn't mention business rules as a reason to throw it: The 400 (Bad Request) status code indicates that the server cannot or will not process the request due to something that is perceived to be a client error (e.g., malformed request syntax, invalid request message framing, or deceptive request routing). A business rule doesn't fall under any of those three examples. It's syntactically correct, it's properly framed, and it's not deceptive request routing. So should I return an HTTP 400 (Bad Request) status if a parameter is syntactically correct, but violates a business rule? Or is there a more appropriate status to return?
This is a great question, and still highly relevant given the historical context (and seemingly contradictory definitions) of the HTTP return codes. Even among the answers to this question there are conflicting definitions. This can be clarified by moving chronologically. RFC 2616 (June 1999) 10.4.1 400 Bad Request The request could not be understood by the server due to malformed syntax. The client SHOULD NOT repeat the request without modifications. As of this RFC, this status code specifically applied only to syntactically invalid requests. There was a gap in the status codes for semantic validation. Thus, when RFC 4918 came around, a new code was born. RFC 4918 (June 2007) 11.2. 422 Unprocessable Entity The 422 (Unprocessable Entity) status code means the server understands the content type of the request entity (hence a 415(Unsupported Media Type) status code is inappropriate), and the syntax of the request entity is correct (thus a 400 (Bad Request) status code is inappropriate) but was unable to process the contained instructions. For example, this error condition may occur if an XML request body contains well-formed (i.e., syntactically correct), but semantically erroneous, XML instructions. 422 Unprocessable Entity was created to fill the gap of semantic validation in the original specification of the 4xx status codes. However, another relevant RFC came about in 2014 which generalized 400 to no longer be specific to syntax . RFC 7231 (June 2014, explicitly obsoletes RFC 2616) 6.5.1. 400 Bad Request The 400 (Bad Request) status code indicates that the server cannot or will not process the request due to something that is perceived to be a client error (e.g., malformed request syntax, invalid request message framing, or deceptive request routing). Note that the 422 description says that the reason 400 is inappropriate is because 400 (as of RFC 2616) should be returned only for bad request syntax. However, as of RFC 7231, the strict syntax-error definition no longer applies to 400 . Back to the question at hand: While 422 is technically more specific, given this context, I could see either 400 or 422 being used for semantic validation of API parameters. I'm hesitant to use 422 in my own APIs because the definition of 422 is technically outdated at this point (although I don't know if that's officially recognized anywhere). The article referenced in Fran's answer that encourages the use of 422 was written in 2012, two years before RFC 7231 clarified HTTP 400. Just be sure to standardize on one or the other.
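As a hedged illustration (the route, limits, and framework choice are all invented - the point is only the split between syntactic and semantic rejection), a small Flask sketch:

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/makeWaffles")
def make_waffles():
    raw = request.args.get("numberOfWaffles", "")
    if not raw.lstrip("-").isdigit():
        # not even syntactically an integer -> uncontroversially a 400
        return jsonify(error="numberOfWaffles must be an integer"), 400
    count = int(raw)
    if count < 1 or count > 100:
        # syntactically fine but violates a business rule -> 400 or 422,
        # depending on which convention the API standardizes on
        return jsonify(error="numberOfWaffles must be between 1 and 100"), 422
    return jsonify(waffles=count), 200

Whichever of 400 or 422 is chosen for the second case, using it consistently across the API matters more than the choice itself.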
{ "source": [ "https://softwareengineering.stackexchange.com/questions/329229", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/81973/" ] }
329,278
I have always known that goto is something bad, locked in a basement somewhere never to be seen again, but I ran into a code example today where it makes perfect sense to use goto. I have an IP where I need to check if it is within a list of IPs and then proceed with the code, otherwise throw an exception. <?php $ip = '192.168.1.5'; $ips = [ '192.168.1.3', '192.168.1.4', '192.168.1.5', ]; foreach ($ips as $i) { if ($ip === $i) { goto allowed; } } throw new Exception('Not allowed'); allowed: ... If I don't use goto then I have to use some variable like $allowed = false; foreach ($ips as $i) { if ($ip === $i) { $allowed = true; break; } } if (!$allowed) { throw new Exception('Not allowed'); } My question is: what's so bad with goto when it's used for such obvious and, IMO, relevant cases?
GOTO itself is not an immediate problem, it's the implicit state machines that people tend to implement with it. In your case, you want code that checks whether the IP address is in the list of allowed addresses, hence if (!contains($ips, $ip)) throw new Exception('Not allowed'); so your code wants to check a condition. The algorithm to implement this check should be of no concern here, in the mental space of your main program the check is atomic. That's how it should be. But if you put the code that does the check into your main program, you lose that. You introduce mutable state, either explicitly: $list_contains_ip = undef; # STATE: we don't know yet foreach ($ips as $i) { if ($ip === $i) { $list_contains_ip = true; # STATE: positive break; } # STATE: we still don't know yet, huh? # Well, then... $list_contains_ip = false; # STATE: negative } if (!$list_contains_ip) { throw new Exception('Not allowed'); } where $list_contains_ip is your only state variable, or implicitly: # STATE: unknown foreach ($ips as $i) { # What are we checking here anyway? if ($ip === $i) { goto allowed; # STATE: positive } # STATE: unknown } # guess this means STATE: negative throw new Exception('Not allowed'); allowed: # Guess we jumped over the trap door As you see, there's an undeclared state variable in the GOTO construct. That's not a problem per se, but these state variables are like pebbles: carrying one is not hard, carrying a bag full of them will make you sweat. Your code will not stay the same: next month you'll be asked to differentiate between private and public addresses. The month after that, your code will need to support IP ranges. Next year, someone will ask you to support IPv6 addresses. In no time, your code will look like this: if ($ip =~ /:/) goto IP_V6; if ($ip =~ /\//) goto IP_RANGE; if ($ip =~ /^10\./) goto IP_IS_PRIVATE; foreach ($ips as $i) { ... } IP_IS_PRIVATE: foreach ($ip_priv as $i) { ... } IP_V6: foreach ($ipv6 as $i) { ... } IP_RANGE: # i don't even want to know how you'd implement that ALLOWED: # Wait, is this code even correct? # There seems to be a bug in here. And whoever has to debug that code will curse you and your children. Dijkstra puts it like this: The unbridled use of the go to statement has as an immediate consequence that it becomes terribly hard to find a meaningful set of coordinates in which to describe the process progress. And that's why GOTO is considered harmful.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/329278", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/143301/" ] }
329,463
Some people have the problem that they cannot think without words, and writing down their thoughts and decisions is the most effective way for them to proceed. So - is it normal and acceptable that I write down my thoughts and decisions in some Notepad++ file during coding? Sometimes it should be acceptable, e.g. when recreating technical documentation or reasoning about more complex algorithms, but sometimes it may be strange, e.g. when I am considering design options and trying to make a judgment. The impact of this practice on productivity is unclear. On the one hand, reasoning with inner words may be faster than with written words. On the other hand, more complex problems require writing. Besides, if one is stuck choosing between several design options, then it feels better when the decision is written down, so it raises morale.
Not only is it normal, it's a good idea. There's a famous quote "Give me six hours to chop down a tree and I will spend the first four sharpening the axe". Taking the time to organize your thoughts and plan your work before coding is time well spent. Putting those thoughts on paper will give you time to reflect on your plans, critique them, and organize them in ways that would be very difficult if done only "in your head".
{ "source": [ "https://softwareengineering.stackexchange.com/questions/329463", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/229289/" ] }
329,596
I am building an API where the user can ask the server to perform multiple actions in one HTTP request. The result is returned as a JSON array, with one entry per action. Each of these actions might fail or succeed independently of each other. For instance, the first action might succeed, the input to the second action might be poorly formatted and fail to validate and the third action might cause an unexpected error. If there was one request per action, I would return status codes 200, 422 and 500 respectively. But now when there is only one request, what status code should I return? Some options: Always return 200, and give more detailed information in the body. Maybe follow the above rule only when there is more than one action in the request? Maybe return 200 if all requests succeed, otherwise 500 (or some other code)? Just use one request per action, and accept the extra overhead. Something completely different?
My vote would be to split these tasks into separate requests. However, if too many round trips are a concern, I did come across HTTP response code 207 (Multi-Status). Copy/paste from this link: A Multi-Status response conveys information about multiple resources in situations where multiple status codes might be appropriate. The default Multi-Status response body is a text/xml or application/xml HTTP entity with a 'multistatus' root element. Further elements contain 200, 300, 400, and 500 series status codes generated during the method invocation. 100 series status codes SHOULD NOT be recorded in a 'response' XML element. Although '207' is used as the overall response status code, the recipient needs to consult the contents of the multistatus response body for further information about the success or failure of the method execution. The response MAY be used in success, partial success and also in failure situations.
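One possible shape for such a reply in a JSON API (the field names and the decision rule are invented here; WebDAV's official multistatus body is XML, as quoted above):

results = [
    {"action": "createUser",  "status": 200, "body": {"id": 17}},
    {"action": "updateQuota", "status": 422, "error": "quota must be positive"},
    {"action": "sendEmail",   "status": 500, "error": "unexpected failure"},
]

# Return 207 only when the per-action outcomes differ; otherwise pass the
# single shared status straight through.
statuses = {r["status"] for r in results}
overall = 207 if len(statuses) > 1 else statuses.pop()

The client then inspects each entry's status, just as the quoted text describes for the XML body.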
{ "source": [ "https://softwareengineering.stackexchange.com/questions/329596", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/224262/" ] }
329,728
I had a discussion with one of our senior developers who's been in the business for 20 years. He's pretty well known in Ontario for a blog he writes. The strange thing is what he told me: he said that there is a piece of code that is a nightmare to work with because it was written from a textbook, and doesn't account for the real world. Adding a new field to the UI/database/Data layer takes 2-3 hours to do, whereas in his code it takes 30 minutes. The other thing too is that he avoids design patterns because most programmers don't understand them and they are not good from a maintenance perspective. Then there's also the idea that most web developers in Canada prefer to have their data model inherit from the Data Layer classes, rather than keeping it isolated. I asked him, "Isn't it industry standard for the model to be separated from the data layer?" He said sometimes, but most people here prefer not to do that because it's too much work. It sounds like his reasoning for not coding using best practices is because it's a maintenance nightmare, few of our employees understand it (besides myself), and it's slow to work with if you need to push out new features or fields in a few days' time. It's so strange hearing an opinion like this, considering that Stack Overflow mostly encourages people to follow industry standards. Is the problem that we are constantly forced to churn out new fields and features in a matter of days, that it's not possible to deduce a solid pattern that is flexible enough? That seems to be the gist of what I understand from this. What do you make of these statements?
These are the words of someone who has found success and ignores people that try to tell him what to do in pattern jargon that he doesn't understand. Design patterns and best practices are not the same thing. Some people think they are and drive people who know what they're doing nuts. Even if they don't know the proper name for what they are doing. Design patterns existed before they had names. We gave them names to make talking about them easier. A pattern having a name doesn't make it a good thing. It makes it a recognizable thing. This guy is likely using patterns neither one of you ever heard of. That's fine, until you need to talk to him about how something is done. He's either going to have to learn how to talk to you or you're going to have to learn how to talk to him. Has nothing to do with who is "right."
{ "source": [ "https://softwareengineering.stackexchange.com/questions/329728", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/74674/" ] }
329,738
We currently have over 300 web projects that are legacy code - kind of spaghetti-like, based on a modified version of an old framework. The one developer who knows these systems best is leaving and I'm the replacement. What are the best questions to ask before they leave? I want to make sure I can support all these bespoke apps.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/329738", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/244225/" ] }
329,739
This might be a silly question... from somebody looking to make his programming more structured/less chaotic. I would like to develop an application (web and mobile) just for fun/for myself. Before starting to write any code, I would like to describe (in words) what the application is going to do and how it is going to look from a user perspective. What do you call this stage of development? Any other keywords that I could google? Are there any tools/recommended styles/... for this, or should I just write it down in a few paragraphs?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/329739", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/244224/" ] }
329,806
Accessors and modifiers (aka setters and getters) are useful for three main reasons: They restrict access to the variables. For example, a variable could be accessed, but not modified. They validate the parameters. They may cause some side effects. Universities, online courses, tutorials, blog articles, and code examples on the web are all stressing the importance of accessors and modifiers; they almost feel like a "must have" for the code nowadays. So one can find them even when they don't provide any additional value, like the code below. public class Cat { private int age; public int getAge() { return this.age; } public void setAge(int age) { this.age = age; } } That being said, it is very common to find more useful modifiers, those which actually validate the parameters and throw an exception or return a boolean if invalid input has been supplied, something like this: /** * Sets the age for the current cat * @param age an integer with the valid values between 0 and 25 * @return true if value has been assigned and false if the parameter is invalid */ public boolean setAge(int age) { //Validate your parameters, valid age for a cat is between 0 and 25 years if(age > 0 && age < 25) { this.age = age; return true; } return false; } But even then, I almost never see the modifiers being called from a constructor, so the most common example of a simple class I come across is this: public class Cat { private int age; public Cat(int age) { this.age = age; } public int getAge() { return this.age; } /** * Sets the age for the current cat * @param age an integer with the valid values between 0 and 25 * @return true if value has been assigned and false if the parameter is invalid */ public boolean setAge(int age) { //Validate your parameters, valid age for a cat is between 0 and 25 years if(age > 0 && age < 25) { this.age = age; return true; } return false; } } But one would think that this second approach is a lot safer: public class Cat { private int age; public Cat(int age) { //Use the modifier instead of assigning the value directly. setAge(age); } public int getAge() { return this.age; } /** * Sets the age for the current cat * @param age an integer with the valid values between 0 and 25 * @return true if value has been assigned and false if the parameter is invalid */ public boolean setAge(int age) { //Validate your parameters, valid age for a cat is between 0 and 25 years if(age > 0 && age < 25) { this.age = age; return true; } return false; } } Do you see a similar pattern in your experience or is it just me being unlucky? And if you do, then what do you think is causing that? Is there an obvious disadvantage to using modifiers from the constructors or are they just considered to be safer? Is it something else?
Very general philosophical reasoning Typically, we ask that a constructor provide (as post-conditions) some guarantees about the state of the constructed object. Typically, we also expect that instance methods can assume (as pre-conditions) that these guarantees already hold when they're called, and they only have to make sure not to break them. Calling an instance method from inside the constructor means some or all of those guarantees may not yet have been established, which makes it hard to reason about whether the instance method's pre-conditions are satisfied. Even if you get it right, it can be very fragile in the face of, eg. re-ordering instance method calls or other operations. Languages also vary in how they resolve calls to instance methods which are inherited from base classes/overridden by sub-classes, while the constructor is still running. This adds another layer of complexity. Specific examples Your own example of how you think this should look is itself wrong: public Cat(int age) { //Use the modifier instead of assigning the value directly. setAge(age); } this doesn't check the return value from setAge . Apparently calling the setter is no guarantee of correctness after all. Very easy mistakes like depending on initialization order, such as: class Cat { private Logger m_log; private int m_age; public void setAge(int age) { // FIXME temporary debug logging m_log.write("=== DEBUG: setting age ==="); m_age = age; } public Cat(int age, Logger log) { setAge(age); m_log = log; } }; where my temporary logging broke everything. Whoops! There are also languages like C++ where calling a setter from the constructor means a wasted default initialization (which for some member variables at least is worth avoiding) A simple proposal It's true that most code isn't written like this, but if you want to keep your constructor clean and predictable, and still reuse your pre- and post-condition logic, the better solution is: class Cat { private int m_age; private static void checkAge(int age) { if (age > 25) throw CatTooOldError(age); } public void setAge(int age) { checkAge(age); m_age = age; } public Cat(int age) { checkAge(age); m_age = age; } }; or even better, if possible: encode the constraint in the property type, and have it validate its own value on assignment: class Cat { private Constrained<int, 25> m_age; public void setAge(int age) { m_age = age; } public Cat(int age) { m_age = age; } }; And finally, for complete completeness, a self-validating wrapper in C++. Note that although it's still doing the tedious validation, because this class does nothing else , it's relatively easy to check template <typename T, T MAX, T MIN=T{}> class Constrained { T val_; static T validate(T v) { if (MIN <= v && v <= MAX) return v; throw std::runtime_error("oops"); } public: Constrained() : val_(MIN) {} explicit Constrained(T v) : val_(validate(v)) {} Constrained& operator= (T v) { val_ = validate(v); return *this; } operator T() { return val_; } }; OK, it isn't really complete, I've left out various copy and move constructors and assignments.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/329806", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/76933/" ] }
329,829
What is the advantage of using pull requests instead of simply merging a branch into master without one? Particularly on a team where all developers have full access to master.
Pull requests provide for checks and balances, even if anyone can push to master. The biggest advantage is that they provide an opportunity for code review. The person that is responsible for performing the pull can look at the code and tests and make sure that they meet any kind of guidelines that the organization or team has. There are also other reasons for code review - education, finding defects or enhancements, cross-training the team on the system, giving testers a white-box view of the system. If the person that performs the pull is familiar with the architecture of the system, then they can make sure that the changes fit with the architectural vision of the system, especially if the entire team may not have the long-term vision. Developing a habit of using pull requests may also help your team if you decide in the future that the whole team shouldn't have access to master. If your team grows larger, and especially if you have team members who are new to the product and/or new to Git, not giving them access to master can be safer for product integrity.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/329829", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/182925/" ] }
330,304
Today I was watching a " JUnit basics" video and the author said that when testing a given method in your program, you shouldn't use other of your own methods in the process. To be more specific, he was talking about testing some record-creation method that took a name and last name for arguments, and it used them to create records in a given table. But he claimed that in the process of testing this method, he shouldn't use his other DAO methods to query the database to check the final result (to check that record was indeed created with the right data). He claimed that for that, he should write additional JDBC code to query the database and check the result. I think I understand the spirit of his claim: you don't want one method's test case to depend on the correctness of the other methods (in this case, the DAO method), and this is accomplished by writing (again) your own validation/supporting code (which should be more specific and focussed, hence, simpler code). Nonetheless, voices inside my head started protesting with arguments like code duplication, unnecessary additional efforts, etc. I mean, if we run the whole test battery, and we test all our public methods thoroughly (including the DAO method in this case), shouldn't it be OK to just use some of those methods while testing other methods? If one of them is not doing what it's supposed to, then its own test case will fail, and we can fix it and run the test battery again. No need for code duplication (even if the duplicate code is somewhat simpler) or wasted efforts. I have an strong feeling about this because of several recent Excel - VBA applications I've written (properly unit-tested thanks to Rubberduck for VBA ), where applying this recommendation would mean a lot of additional extra work, with no perceived benefit. Can you please share your insights about this?
The spirit of his claim is indeed correct. The point of unit tests is to isolate code, test it free of dependencies, so that any erroneous behavior can be quickly recognized where it is happening. With that said, unit testing is a tool, and it is meant to serve your purposes, it is not an altar to be prayed to. Sometimes that means leaving dependencies in because they work reliably enough and you don't want to bother mocking them, sometimes that means some of your unit tests are actually pretty close if not actually integration tests. Ultimately you're not getting graded on it, what's important is the end product of the software being tested, but you'll just have to be mindful of when you're bending the rules and deciding when the trade-offs are worth it.
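To make the trade-off concrete, here is a minimal JUnit 4 sketch; the RecordDao class, its createRecord and findByName methods, and the H2 connection string are hypothetical stand-ins, not anything from the course or the video:

    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertNotNull;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import org.junit.Test;

    public class RecordDaoTest {

        // Strictly isolated check: verify the result with plain JDBC, so a bug
        // in the DAO's query method cannot mask (or cause) a failure here.
        @Test
        public void createRecord_insertsRow_verifiedWithRawJdbc() throws Exception {
            RecordDao dao = new RecordDao("jdbc:h2:mem:test;DB_CLOSE_DELAY=-1"); // hypothetical DAO
            dao.createRecord("Ada", "Lovelace");

            try (Connection c = DriverManager.getConnection("jdbc:h2:mem:test;DB_CLOSE_DELAY=-1");
                 PreparedStatement ps = c.prepareStatement(
                     "SELECT COUNT(*) FROM records WHERE first_name = ? AND last_name = ?")) {
                ps.setString(1, "Ada");
                ps.setString(2, "Lovelace");
                try (ResultSet rs = ps.executeQuery()) {
                    rs.next();
                    assertEquals(1, rs.getInt(1));
                }
            }
        }

        // Pragmatic variant: reuse the DAO's own query method. Less duplication,
        // but it is really an integration test of createRecord plus findByName.
        @Test
        public void createRecord_insertsRow_verifiedWithDao() {
            RecordDao dao = new RecordDao("jdbc:h2:mem:test;DB_CLOSE_DELAY=-1"); // hypothetical DAO
            dao.createRecord("Ada", "Lovelace");
            assertNotNull(dao.findByName("Ada", "Lovelace"));                    // hypothetical method
        }
    }

Either style can be the right call; the second is simply a test whose failure has two possible causes instead of one.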
{ "source": [ "https://softwareengineering.stackexchange.com/questions/330304", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/225483/" ] }
330,364
recently I came across this blog post from asp.net monsters which talks about issues with using HttpClient in following way: using(var client = new HttpClient()) { } As per the blog post, if we dispose the HttpClient after every request it can keep the TCP connections open. This can potentially lead to System.Net.Sockets.SocketException . The correct way as per the post is to create a single instance of HttpClient as it helps to reduce waste of sockets. From the post: If we share a single instance of HttpClient then we can reduce the waste of sockets by reusing them: namespace ConsoleApplication { public class Program { private static HttpClient Client = new HttpClient(); public static void Main(string[] args) { Console.WriteLine("Starting connections"); for(int i = 0; i<10; i++) { var result = Client.GetAsync("http://aspnetmonsters.com").Result; Console.WriteLine(result.StatusCode); } Console.WriteLine("Connections done"); Console.ReadLine(); } } } I have always disposed HttpClient object after using it as I felt this is the best way of using it. But this blog post now makes me feel I was doing it wrong all this long. Should we create a new single instance of HttpClient for all requests? Are there any pitfalls of using static instance?
It seems like a compelling blog post. However, before making a decision, I would first run the same tests that the blog writer ran, but on your own code. I would also try and find out a bit more about HttpClient and its behavior. This post states: An HttpClient instance is a collection of settings applied to all requests executed by that instance. In addition, every HttpClient instance uses its own connection pool, isolating its requests from requests executed by other HttpClient instances. So what is probably happening when an HttpClient is shared is that the connections are being reused, which is fine if you don't require persistent connections. The only way you're going to know for sure whether or not this matters for your situation is to run your own performance tests. If you dig, you'll find several other resources that address this issue (including a Microsoft Best Practices article), so it's probably a good idea to implement anyway (with some precautions). References You're Using Httpclient Wrong and It Is Destabilizing Your Software Singleton HttpClient? Beware of this serious behaviour and how to fix it Microsoft Patterns and Practices - Performance Optimization: Improper Instantiation Single instance of reusable HttpClient on Code Review Singleton HttpClient doesn't respect DNS changes (CoreFX) General advice for using HttpClient
{ "source": [ "https://softwareengineering.stackexchange.com/questions/330364", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/245170/" ] }
330,365
I have a repository that contains login credentials interface LoginCredentialRepository { LoginCredential fetchCredentials(String username); } For this framework, I'm designating one thread to handle all IO-based tasks. This is only by convention, and if someone really wants to submit an IO-based task to a different Executor I suppose they can. Because I intend on having this designated IO thread, should I change the interface to Callable<LoginCredential> fetchCredentials(String username); So that the caller only has to do ioThread.submit(repository.fetchCredentials("foo")); instead of something like ioThread.submit(() => return repository.fetchCredentials("foo")); On one hand, since I know that the recommended way of processing IO tasks is to submit them to the IO Thread, I feel like I should make the caller's job easier and just have it return what will be used. On the other hand, the repository then knows that when fetchCredentials is called, it won't actually return the credentials, but rather a way of fetching the credentials, requiring the caller to do Future<LoginCredential> credentialFuture = ioThread.submit(repository.fetchCredentials("foo")); LoginCredentials credentials = credentialFuture.get();
{ "source": [ "https://softwareengineering.stackexchange.com/questions/330365", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/161992/" ] }
330,419
I recently read the Three Big Lies blog post and I am having a hard time justifying the second lie, which is quoted here: (LIE #2) CODE SHOULD BE DESIGNED AROUND A MODEL OF THE WORLD There is no value in code being some kind of model or map of an imaginary world. I don't know why this one is so compelling for some programmers, but it is extremely popular. If there's a rocket in the game, rest assured that there is a "Rocket" class (Assuming the code is C++) which contains data for exactly one rocket and does rockety stuff. With no regard at all for what data tranformation is really being done, or for the layout of the data. Or for that matter, without the basic understanding that where there's one thing, there's probably more than one. Though there are a lot of performance penalties for this kind of design, the most significant one is that it doesn't scale. At all. One hundred rockets costs one hundred times as much as one rocket. And it's extremely likely it costs even more than that! Even to a non-programmer, that shouldn't make any sense. Economy of scale. If you have more of something, it should get cheaper, not more expensive. And the way to do that is to design the data properly and group things by similar transformations. Here are my problems with this lie in particular. There is value in code being a model / map of an imaginary world as modeling the imaginary world helps (at least me, personally) visualize and organize the code. Having a "Rocket" class is, to me, a perfectly valid choice for a class. Perhaps "Rockets" could be broken down into types of Rockets like AGM-114 Hellfire, etc. which would contain payload strength, max velocity, max turning radius, targeting type and so forth, but still every rocket fired would need to have a position and a velocity. Of course having 100 Rockets costs more than 1 Rocket. If there are 100 Rockets on screen there must be 100 different computations to update their position. The second paragraph sounds like it is making the claim that if there are 100 Rockets, it should cost less than 100 computations to update the state? My problem here is that the author presents a "flawed" programming model but doesn't present a way to "correct" it. Perhaps I'm tripping up on the analogy of the Rocket class, but I would really like to understand the reasoning behind this lie. What is the alternative?
Firstly, let's look at some context: this is a game designer writing on a blog whose subject is eking out the last drop of performance from a Cell BE CPU. In other words: it is about console game programming, more specifically, console game programming for the PlayStation 3. Now, game programmers are a curious bunch, console game programmers even more so, and the Cell BE is a rather strange CPU. (There's a reason Sony went with a more conventional design for the PlayStation 4!) So, we have to look at those statements within this context. There are also some simplifications in that blog post. In particular, this Lie #2 is poorly presented. I would argue that everything that abstracts from the real world is a model in some sense. And since software is not real, but virtual, it is always an abstraction and thus always a model. But! A model doesn't have to have a clean 1:1 mapping onto the real world. That is, after all, what makes it a model in the first place. So, in some sense, the author is clearly wrong: software is a model. Period. In some other sense, he is right: that model doesn't actually have to resemble the real world at all. I will give an example that I already gave in some other answers over the years, the (in)famous Introduction to OO 101 Bank Account example. Here's what a Bank Account looks like in almost every OO class ever: class Account { var balance: Decimal def transfer(amount: Decimal, target: Account) = { balance -= amount target.balance += amount } } So: the balance is data , and transfer is an operation . But! Here's what a Bank Account looks like in almost every banking software ever: class TransactionSlip { val transfer(amount: Decimal, from: Account, to: Account) } class Account { def balance = TransactionLog.filter(t => t.to == this).map(_.amount).sum - TransactionLog.filter(t => t.from == this).map(_.amount).sum } So, now transfer is data and balance is an operation (a left fold over the transaction log). (You'll also notice that TransactionSlip is immutable, balance is a pure function, the TransactionLog can be an append-only "almost" immutable datastructure … I'm sure many of you spotted the glaring concurrency bugs in the first implementation, which now magically go away.) Note that both of these are models. Both of these are equally valid. Both of these are correct. Both of these model the same thing. And yet, they are exactly dual to each other: everything that is data in one model is an operation in the other model, and everything that is an operation in one model is data in the other model. So, the question is not whether you model the "real world" in your code, but how you model it. As it turns out, the second model is actually also how banking works in the real world. As I hinted at above, this second model is mostly immutable and pure, and immune to concurrency bugs, which is actually very important if you consider that there was a time not too long ago, where TransactionSlip s were actual slips of paper that were sent around via horse & carriage. However, the fact that this second model actually matches both how real world banking works and how real world banking software works, does not automatically make it somehow more "right". Because, actually, the first ("wrong") model fairly closely approximates how banking customers view their bank. To them , transfer is an operation (they have to fill out a form), and balance is a piece of data at the bottom of their account statement. 
So, it may very well be true that in the core game engine code of a high-performance PS3 shooter, there will not be a Rocket type, but still, there will be some modeling of the world going on, even if the model looks weird to someone who is not an expert in the domain of console game physics engine programming.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/330419", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/186842/" ] }
330,428
I m studying DDD these days, and I m having some questions concerning how to manage repositories with DDD. Actually, I have met two possibilies : First one The first way of manage services I've read is to inject a repository and a domain model in an application service. This way, in one of the application service methods, we call a domain service method (checking business rules) and if the condition is good, the repository is called on a special method to persist / retrieve the entity from the database. A simple way of doing this could be : class ApplicationService{ constructor(domainService, repository){ this.domainService = domainService this.repository = repository } postAction(data){ if(this.domainService.validateRules(data)){ this.repository.persist(new Entity(data.name, data.surname)) } // ... } } Second one The second possibility is to inject the repository inside of the domainService instead, and to only use the repository through the domain service : class ApplicationService{ constructor(domainService){ this.domainService = domainService } postAction(data){ if(this.domainService.persist(data)){ console.log('all is good') } // ... } } class DomainService{ constructor(repository){ this.repository = repository } persist(data){ if(this.validateRules(data)){ this.repository.save(new Entity(data.name)) } } validateRules(data){ // returns a rule matching } } From now, I m not able to distinguish which one is the best (if there's one best) or what they imply both in their context. Can you provide me example where one could be better than the other and why ?
The short answer is - you can use repositories from an application service, or a domain service - but it is important to consider why, and how, you are doing so. Purpose of a Domain Service Domain Services should encapsulate domain concepts/logic - as such, the domain service method domainService.persist(data) does not belong on a domain service, as persist is not a part of the ubiquitous language and the operation of persistence is not part of the domain business logic. Generally, domain services are useful when you have business rules/logic that require coordinating or working with more than one aggregate. If the logic only involves one aggregate, it should be in a method on that aggregate's entities. Repositories in Application Services So in that sense, in your example, I prefer your first option - but even there, there is room for improvement, as your domain service is accepting raw data from the api - why should the domain service know about the structure of the data? In addition, the data appears to only be related to a single aggregate, so there is limited value in using a domain service for that - generally I'd put the validation inside the entity constructor, e.g. postAction(data){ Entity entity = new Entity(data.name, data.surname); this.repository.persist(entity); // ... } and throw an exception if it's invalid. Depending on your application framework, it may be simple to have a consistent mechanism for catching the exception and mapping it to the appropriate response for the api type - e.g. for a REST api, return a 400 status code. Repositories in Domain Services Notwithstanding the above, sometimes it is useful to inject and use a repository in a domain service, but only if your repositories are implemented such that they accept and return aggregate roots only, and also where you are abstracting logic that involves multiple aggregates, e.g. postAction(data){ this.domainService.doSomeBusinessProcess(data.name, data.surname, data.otherAggregateId); // ... } where the implementation of the domain service would look like: doSomeBusinessProcess(name, surname, otherAggregateId) { OtherEntity otherEntity = this.otherEntityRepository.get(otherAggregateId); Entity entity = this.entityFactory.create(name, surname); int calculationResult = this.someCalculationMethod(entity, otherEntity); entity.applyCalculationResultWithBusinessMeaningfulName(calculationResult); this.entityRepository.add(entity); } Conclusion The key here is that the domain service encapsulates a process that is part of the ubiquitous language. In order to fulfill its role, it needs to use repositories - and it's perfectly fine to do so. But adding a domain service that wraps a repository with a method called persist adds little value. On that basis, if your application service is expressing a use case that calls for only working with a single aggregate, there is no problem using the repository directly from the application service.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/330428", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/160326/" ] }
330,451
I am currently taking AP Computer Science at my high school and while looking at some sample AP test problems I came across one that really confused me. Sample Problem The answer to the problem is letter A but I don't seem to understand why that is the answer or how to even begin to try and solve a problem like this.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/330451", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/245295/" ] }
330,495
Albeit a general question my scope is rather C# as I am aware that languages like C++ have different semantics regarding constructor execution, memory management, undefined behaviour, etc. Somebody asked me an interesting question which was for me not easily answered. Why (or is it at all?) regarded as bad design to let a constructor of a class start a never ending loop (i.e. game loop)? There are some concepts that are broken by this: like the principle of least astonishment, the user does not expect the constructor to behave like this. Unit tests are harder as you cannot create this class or inject it as it never exits the loop. The end of the loop (game end) is then conceptually the time where the constructor finishes, which is also odd. Technically such a class has no public members except the constructor, which makes it harder to understand (especially for languages where no implementation is available) And then there are technical issues: The constructor actually never finishes, so what happens with GC here? Is this object already in Gen 0? Deriving from such a class is impossible or at least very complicated due to the fact that the base constructor never returns Is there something more obviously bad or devious with such an approach?
What is the purpose of a constructor? It returns a newly constructed object. What does an infinite loop do? It never returns. How can the constructor return a newly constructed object if it doesn't return at all? It can't. Ergo, an infinite loop breaks the fundamental contract of a constructor: to construct something.
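For contrast, a hedged Java sketch of the conventional shape: the constructor only constructs, and the caller starts the loop explicitly (all names here are made up):

    public class GameLoop {
        private volatile boolean running;

        public GameLoop() {
            // the constructor only establishes state and returns immediately
        }

        public void run() {                  // the never-ending loop lives here
            running = true;
            while (running) {
                update();
                render();
            }
        }

        public void stop() { running = false; }

        private void update() { /* advance game state */ }
        private void render() { /* draw a frame */ }
    }

    // usage: the object exists (and is testable, subclassable, collectable)
    // before the loop ever starts
    // GameLoop loop = new GameLoop();
    // loop.run();

This keeps construction and long-running behaviour as two separate contracts, which is exactly what the infinite-loop constructor breaks.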
{ "source": [ "https://softwareengineering.stackexchange.com/questions/330495", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/245342/" ] }
330,769
C++14 seems to have omitted a mechanism for checking whether an std::mutex is locked or not. See this SO question: https://stackoverflow.com/questions/21892934/how-to-assert-if-a-stdmutex-is-locked There are several ways around this, e.g. by using; std::mutex::try_lock() std::unique_lock::owns_lock() But neither of these are particularly satisfying solutions. try_lock() is permitted to return a false negative and has undefined behaviour if the current thread has locked the mutex. It also has side-effects. owns_lock() requires the construction of a unique_lock on top of the original std::mutex . Obviously I could roll my own, but I'd rather understand the motivations for the current interface. The ability to check the status of a mutex (e.g. std::mutex::is_locked() ) does not seem like an esoteric request to me, so I suspect the Standard Committee deliberately omitted this feature rather than it being an oversight. Why? Edit: Ok so maybe this use case isn't as common as I had expected, so I'll illustrate my particular scenario. I have a machine learning algorithm which is distributed on multiple threads. Each thread operates asynchronously, and returns to a master pool once it has completed an optimisation problem. It then locks a master mutex. The thread must then pick a new parent from which to mutate an offspring, but may only pick from parents which do not currently have offspring that are being optimised by other threads. I therefore need to perform a search to find parents that are not currently locked by another thread. There is no risk of the status of the mutex changing during the search, as the master thread mutex is locked. Obviously there's other solutions (I'm currently using a boolean flag) but I thought the mutex offers a logical solution to this problem, as it exists for the purpose of inter-thread synchronization.
I can see at least two severe problems with the suggested operation. The first one was already mentioned in a comment by @ gnasher729 : You can't really reasonably check whether a mutex is locked, because one nanosecond after the check it can get unlocked or locked. So if you wrote if (mutex_is_locked ()) … then mutex_is_locked could return the correct result, but by the time the if is executed, it is wrong. The only way to be sure that the “is currently locked” property of a mutex doesn't change is to, well, lock it yourself. The second problem I see is that unless you lock a mutex, your thread doesn't synchronize with the thread that had previously locked the mutex. Therefore, it isn't even well-defined to speak about “before” and “after” and whether the mutex is locked or not is kind of asking whether Schrödiger's cat is currently alive without attempting to open the box. If I understand correctly, then both problems would be moot in your particular case thanks to the master mutex being locked. But this doesn't seem like a particularly common case to me so I think that the committee did the right thing by not adding a function that might be somewhat useful in very special scenarios and cause damage in all others. (In the spirit of: “Make interfaces easy to use correctly and difficult to use incorrectly.”) And if I may say, I think that the setup you currently have is not the most elegant and could be refactored to avoid the problem altogether. For example, instead of the master thread checking all potential parents for one that is not currently locked, why not maintain a queue of ready parents? If a thread wants to optimize another one, it pops the next one off the queue and as soon as it has new parents, it adds them to the queue. That way, you don't even need the master thread as a coordinator.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/330769", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/139714/" ] }
330,824
I'm building an API, a function that uploads a file. This function will return nothing/void if the file was uploaded correctly and throws an exception when there was some problem. Why an exception and not just false? Because inside an exception I can specify the reason of failure (no connection, missing filename, wrong password, missing file description, etc.). I wanted to build a custom exception (with some enum to help the API user to handle all the errors). Is this a good practice or is it better returning an object (with a boolean inside, an optional error message and the enum for errors)?
Throwing an exception is simply an additional way of making a method return a value. The caller can check for a return value just as easily as catch an exception and check that. Therefore, deciding between throw and return requires other criteria. Throwing exceptions should often be avoided if it endangers the efficiency of your program (constructing an exception object and unwinding the call stack is much more work for the computer than just pushing a value onto it). But if the purpose of your method is to upload a file, then the bottleneck is always going to be the network and file system I/O, so it's pointless to optimize the return method. It's also a bad idea to throw exceptions for what should be a simple control flow (e.g. when a search succeeds by finding its value), because that violates the expectations of API users. But a method failing to fulfill its purpose is an exceptional case (or at least it should be), so I see no reason for not throwing an exception. And if you do that, you might just as well make it a custom, more informative exception (but it's a good idea to make it a subclass of a standard, more general exception like IOException ).
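For illustration, a minimal Java sketch of the custom-exception-plus-enum idea from the question; every name here (UploadException, Reason, upload) is hypothetical:

    public class UploadException extends Exception {

        public enum Reason { NO_CONNECTION, MISSING_FILENAME, WRONG_PASSWORD, MISSING_DESCRIPTION }

        private final Reason reason;

        public UploadException(Reason reason, String message) {
            super(message);
            this.reason = reason;
        }

        public Reason getReason() { return reason; }
    }

    // The API method returns nothing on success and throws on failure:
    //   void upload(File file) throws UploadException { ... }
    //
    // Callers can branch on the reason without parsing message text:
    //   try { upload(file); }
    //   catch (UploadException e) {
    //       if (e.getReason() == UploadException.Reason.WRONG_PASSWORD) { /* re-prompt */ }
    //   }

The enum keeps the failure modes machine-readable, while the message stays free for human-readable detail.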
{ "source": [ "https://softwareengineering.stackexchange.com/questions/330824", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/182520/" ] }
330,850
The way I see it, SQL injection attacks can be prevented by: Carefully screening, filtering, encoding input (before insertion into SQL) Using prepared statements / parameterized queries I suppose that there are pros and cons for each, but why did #2 take off and become considered to be more or less the de facto way to prevent injection attacks? Is it just safer and less prone to error or were there other factors? As I understand, if #1 is used properly and all caveats are taken care of, it can be just as effective as #2. Sanitizing, Filtering, and Encoding There was some confusion on my part between what sanitizing , filtering , and encoding meant. I'll say that for my purposes, all of the above can be considered for option 1. In this case I understand that sanitizing and filtering have the potential to modify or discard input data, while encoding preserves data as-is , but encodes it properly to avoid injection attacks. I believe that escaping data can be considered as a way of encoding it. Parameterized Queries vs Encoding Library There are answers where concepts of parameterized queries and encoding libraries that are treated interchangeably. Correct me if I'm wrong, but I am under impression that they are different. My understanding is that encoding libraries , no matter how good they are always have the potential to modify SQL "Program", because they are making changes to the SQL itself, before it is sent off to the RDBMS. Parameterized queries on the other hand, send the SQL program to the RDBMS, which then optimizes the query, defines the query execution plan, selects indexes that are to be used, etc., and then plug in the data, as the last step inside the RDBMS itself. Encoding Library data -> (encoding library) | v SQL -> (SQL + encoded data) -> RDBMS (execution plan defined) -> execute statement Parameterized Query data | v SQL -> RDBMS (query execution plan defined) -> data -> execute statement Historal Significance Some answers mention that historically, parameterized queries (PQ) were created for performance reasons, and before injection attacks that targeted encoding issues became popular. At some point it became apparent that PQ were also pretty effective against injection attacks. To keep with the spirit of my question, why did PQ remain the method of choice and why did it flourish above most other methods when it comes to preventing SQL injection attacks?
The problem is that #1 requires you to effectively parse and interpret the entirety of the SQL variant you're working against so you know if it is doing something it shouldn't. And keep that code up to date as you update your database. Everywhere you accept input for your queries. And not screw it up. So yes, that sort of thing would stop SQL injection attacks, but it is absurdly more costly to implement.
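A minimal JDBC sketch of option #2, where the SQL "program" and the data travel separately; the table and column names are made up:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    class UserLookup {
        // The statement text is fixed at prepare time; the driver ships userName
        // as data, so it can never change the structure of the query.
        static boolean userExists(Connection conn, String userName) throws SQLException {
            String sql = "SELECT 1 FROM users WHERE user_name = ?";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, userName);               // bound as a parameter, not concatenated
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next();
                }
            }
        }
    }

    // For contrast, the style option #1 has to defend:
    //   String bad = "SELECT 1 FROM users WHERE user_name = '" + userName + "'";
    // An input like  ' OR '1'='1  rewrites the query itself, not just the data.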
{ "source": [ "https://softwareengineering.stackexchange.com/questions/330850", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/119333/" ] }
330,927
The private modifier is used to restrict access outside the class, but using reflection other classes can still access private methods and fields. So I am wondering how we can restrict accessibility if it is part of the requirement.
The purpose of access modifiers is to inform developers writing code about what is the public interface of a class. They are not in any way a security measure and they do not literally hide or secure any information.
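A short Java demonstration of the point: private is an interface contract, not a security boundary, since reflection can read the field anyway (the Account class is just an example):

    import java.lang.reflect.Field;

    class Account {
        private String pin = "1234";
    }

    public class ReflectionDemo {
        public static void main(String[] args) throws Exception {
            Account account = new Account();
            Field field = Account.class.getDeclaredField("pin");
            field.setAccessible(true);                 // bypasses the language-level access check
            System.out.println(field.get(account));    // prints 1234
        }
    }

If you genuinely need to stop this, the enforcement has to come from outside the access modifier: a SecurityManager policy or the Java module system can deny setAccessible across module boundaries, and truly sensitive data should not live in plain fields of in-process objects at all.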
{ "source": [ "https://softwareengineering.stackexchange.com/questions/330927", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/245930/" ] }
331,544
I've decided to take upon myself the task of learning functional programming. So far it's been a blast, and I've 'seen the light' as it were. Unfortunately, I don't actually know any functional programmer that I can bounce questions off of. Introducing Stack Exchange. I'm taking a web/software development course, but my instructor isn't familiar with functional programming. He's fine with me using it, and he just asked me to help him understand how it works so he can read my code better. I decided the best way to do this would be by illustrating a simple mathematical function, like raising a value to a power. In theory I could easily do that with a prebuilt function, but that would defeat the purpose of an example. Anyway, I'm having some difficulty figuring out how to hold a value. Since this is functional programming I can't change variable. If I were to code this imperatively, it would look something like this: (The following is all pseudocode) f(x,y) { int z = x; for(int i = 0, i < y; i++){ x = x * z; } return x; } In functional programming, I wasn't sure. This is what I came up with: f(x,y,z){ if z == 'null', f(x,y,x); else if y > 1, f(x*z,y-1,z); else return x; } Is this right? I need to hold a value, z in both cases, but I wasn't sure how to do this in function programming. In theory, the way I did it works, but I wasn't sure if it was 'right'. Is there a better way to do it?
First of all, congratulations on "seeing the light". You've made the software world a better place by expanding your horizons. Second, there is honestly no way a professor who doesn't understand functional programming is going to be able to say anything useful about your code, other than trite comments such as "the indentation looks off". This isn't that surprising in a web development course, as most web development is done using HTML/CSS/JavaScript. Depending on how much you actually care about learning web development, you might want to put in the effort to learn the tools your professor is teaching (painful though it may be - I know from experience). To address the stated question: if your imperative code uses a loop, then chances are your functional code is going to be recursive. (* raises x to the power of y *) fun pow (x: real) (y: int) : real = if y = 1 then x else x * (pow x (y-1)) Note that this algorithm is actually more or less identical to the imperative code. In fact, one could consider the loop above to be syntactic sugar for iterative recursive processes. As a side note, there's no need for a value of z in either your imperative or functional code, in fact. You should have written your imperative function like so: def pow(x, y): var ret = 1 for (i = 0; i < y; i++) ret = ret * x return ret rather than changing the meaning of the variable x .
{ "source": [ "https://softwareengineering.stackexchange.com/questions/331544", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/246851/" ] }
331,653
In a typical web application, dates are retrieved from the database layer strongly typed (e.g. in C# as a System.DateTime as opposed to a System.String). When a date needs to be expressed as a string (e.g. displayed on a page), the conversion from DateTime to string is done in the presentation tier. Why is this? Why is it a bad thing to convert the DateTime to a string on the database tier? See also the heated debate in chat, and the original question that started all of this.
Dates, DateTimes and really any other typed object, should generally be left in their properly typed format until the moment you need them to be made into some other type - especially when that type is a human readable form, and especially when it's a lossy/one-way sort of conversion. Why? Because it is assumed that the type provides you with lots of handy built in functionality, like proper equality testing, addition and subtraction, comparison (greater than, less than), time zone and locale functionality (especially important for anything time-related), etc. If you decide you want to support Americans and the "Month Day[th], Year" format as well as the common British style of "Day Month Year", or the ISO standard of "Year-Month-Day"? What would you do if it was a string and you needed to make that change, parse it back into a Date? Ugh, no thanks - there are many evils and dastardly bugs that way, which are best avoided entirely. More specifically, you mentioned tiered architecture, which has the presentation layer separate from the data later. This is actually the other big reason to pass a Date as a Date and not a string - because what type of string formatting should the date be put into? English, Chinese, with or without seconds/milliseconds, full month name or digits, will you want to sort on the date field later (sorting on a string demands a certain string format if you want it to work right), etc? This is all a question of presentation - how the user should view the data - and putting that logic anywhere else would limit the advantage of having tiered architecture in the first place. The database should not need to know or care how you'll want to view date in the future. Finally, nearly all complex applications (which is what tiered architectures are for) that care about time will inevitably use times/dates in many, many different ways, and often at all different levels of the architecture. The typed objects related to times and dates exist for a really good reason: time itself, and especially human calendar systems, are weird and hard. Ultimately times and dates are not strings for the same reason that integers and floating points aren't strings, and it will only make your life harder if you try to pretend they are really just arrays of characters, because they just aren't.
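To ground this, a small java.time sketch of keeping the value typed through the layers and formatting it only at the presentation edge; the dates and patterns are arbitrary examples:

    import java.time.LocalDate;
    import java.time.format.DateTimeFormatter;
    import java.util.Locale;

    public class DateAtThePresentationLayer {
        public static void main(String[] args) {
            // The data layer hands back a typed value, not a string
            LocalDate shipped = LocalDate.of(2016, 8, 1);

            // Typed operations stay available: arithmetic, comparison, sorting...
            LocalDate followUp = shipped.plusDays(30);
            boolean overdue = LocalDate.now().isAfter(followUp);

            // Only the presentation layer decides how it looks, per locale
            DateTimeFormatter us = DateTimeFormatter.ofPattern("MMMM d, yyyy", Locale.US);
            DateTimeFormatter iso = DateTimeFormatter.ISO_LOCAL_DATE;
            System.out.println(shipped.format(us));    // August 1, 2016
            System.out.println(shipped.format(iso));   // 2016-08-01
            System.out.println(overdue);
        }
    }

Had shipped been a string, every one of the typed operations above would first require parsing it back, with a guess about which format it was written in.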
{ "source": [ "https://softwareengineering.stackexchange.com/questions/331653", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/115084/" ] }
331,678
When writing some functions, I noticed that a const keyword in parameters like this: void MyClass::myFunction(const MyObject& obj,const string& s1,const string& s2,const string& s3){ } often causes a line to be split into 2 lines in the IDE or in vim, so I want to remove all const keywords in parameters: void MyClass::myFunction(MyObject& obj,string& s1,string& s2,string& s3){ } Is that a valid reason not to use const? Is it maintainable to keep the parameter objects unchanged manually?
Readability is a valid reason to learn to use whitespace:

void MyClass::myFunction(
    const MyObject& obj,
    const string& s1,
    const string& s2,
    const string& s3
) {
    return;
}

Located over there, the parameters won't get confused with the body of the function. By locating them on a different line you won't have to reposition them when you change the name of myFunction to something more descriptive. Not changing the parameters' position when they haven't changed is something source control diff tool users will appreciate. const means something. Don't throw it out just because you're out of space and ideas. Readability is king, but breaking things in its name is just giving up.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/331678", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/196142/" ] }
331,692
I'm trying to simulate an elevator. As always, I started very simple by taking only a single order at a time, then added memory to the elevator in the form of queues so that floors are traveled in the order in which they were pressed, which obviously isn't the best approach. So at the moment I'm using a very simple and "short sighted" logic, which is: for the current floor, find the floor closest to me, set it as my next destination, and loop till no more floors are in the list. But this doesn't always work. For example, the elevator was on the 3rd floor of a 5-floor building and got orders 4, 5, 2; the shortest path would be 2->4->5, which costs 4 floors, but using this logic 4->5->2, which costs 5, has the same chance of being picked, depending on the code. How do I find the shortest path and make the elevator more efficient?
"Efficiency" is not the most important feature, the most important is to make sure every order is followed, that there is no starvation. If someone presses 100 and people keep pressing 1 and 2 it may be efficient to keep going between those floors, but it'd be nice for 100 to be visited at some point. I think (from personal observation when I was interested in figuring out) that most of them do: Start going in the direction of the first button pressed, keep track of which direction we're going When a floor is reached and that button was pressed, stop and open the doors, mark the buttons for this floor as not pressed anymore. If there are still more floors that we need to visit that are in the same direction, keep going in that direction . If not and there are still floors we need to visit, move in that direction. If not then we're done and will start at 1 when a button is pressed again. Note that many elevators have buttons "I want to go up" and "I want to go down" next to the doors instead of a single button. The algorithm only needs a small change: in 2, if the only button pressed for that floor is one of the buttons next to the door, only stop and open the doors if we are going in that direction. Possibly keep the button pressed if the doors open because of a button pressed inside the elevator and it is going in the wrong direction. You never have to figure out an entire path , just in which direction to go next.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/331692", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/236742/" ] }
331,864
Today I had an interesting discussion with a colleague. I am a defensive programmer. I believe that the rule " a class must ensure that its objects have a valid state when interacted with from outside the class " must always be adhered to. The reason for this rule is that the class does not know who its users are and that it should predictably fail when it is interacted with in an illegal manner. In my opinion that rule applies to all classes. In the specific situation where I had a discussion today, I wrote code which validates that the arguments to my constructor are correct (e.g. an integer parameter must be > 0) and if the precondition is not met, then an exception gets thrown. My colleague on the other hand believes that such a check is redundant, because unit tests should catch any incorrect uses of the class. Additionally he believes that defensive programming validations should also be unit tested, so defensive programming adds much work and is therefore not optimal for TDD. Is it true that TDD is able to replace defensive programming? Is parameter validation (and I don't mean user input) unnecessary as a consequence? Or do the two techniques complement each other?
That's ridiculous. TDD forces code to pass tests and forces all code to have some tests around it. It doesn't prevent your consumers from incorrectly calling code, nor does it magically prevent programmers missing test cases. No methodology can force users to use code correctly. There is a slight argument to be made that if you perfectly did TDD you would have caught your > 0 check in a test case, prior to implementing it, and addressed this -- probably by you adding the check. But if you did TDD, your requirement (> 0 in constructor) would first appear as a testcase that fails. Thus giving you the test after you add your check. It is also reasonable to test some of the defensive conditions (you added logic, why wouldn't you want to test something so easily testable?). I'm not sure why you seem to disagree with this. Or do the two techniques complement each other? TDD will develop the tests. Implementing parameter validation will make them pass.
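As a concrete, entirely hypothetical Java illustration of how the two work together: the guard clause is the defensive programming, and the test is what TDD would have written first:

    public class Account {
        private final int limit;

        public Account(int limit) {
            if (limit <= 0) {                                   // the defensive check
                throw new IllegalArgumentException("limit must be > 0, was " + limit);
            }
            this.limit = limit;
        }

        public int getLimit() { return limit; }
    }

    // ...and the JUnit 4 test that pins the check down:
    //
    // @Test(expected = IllegalArgumentException.class)
    // public void rejectsNonPositiveLimit() {
    //     new Account(0);
    // }

The test documents and protects the precondition; the check is what actually enforces it when some untested caller gets it wrong at runtime.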
{ "source": [ "https://softwareengineering.stackexchange.com/questions/331864", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/233992/" ] }
332,120
Added: Just found two related questions: https://math.stackexchange.com/q/1759680/1281 https://stackoverflow.com/a/2582804/156458 In programming languages, from Michael Scott's Programming Language Pragmatics: In general, a value in a programming language is said to have first-class status if it can be passed as a parameter, returned from a subroutine, or assigned into a variable. Simple types such as integers and characters are first-class values in most programming languages. By contrast, a “second-class” value can be passed as a parameter, but not returned from a subroutine or assigned into a variable, and a “third-class” value cannot even be passed as a parameter. Labels are third-class values in most programming languages, but second-class values in Algol. Subroutines display the most variation. They are first-class values in all functional programming languages and most scripting languages. They are also first-class values in C# and, with some restrictions, in several other imperative languages, including Fortran, Modula-2 and -3, Ada 95, C, and C++. They are second-class values in most other imperative languages, and third-class values in Ada 83. What is the mathematical foundation for first/second/third class values in programming languages? The terminology reminds me of first/second order logic, but are they related? It seems to me that the difference between them is in which specific ways a value can be used: passed as a parameter, returned from a subroutine, or assigned into a variable. Why are these specific cases important, while other cases are not mentioned? Thanks.
There isn't any, and it's pretty arbitrary. The only useful distinction is between first class, and all others. Every case that's in the "other" bracket has its own distinct set of rules in each case and lumping them all together just isn't very helpful. "First class" means "You don't have to look up the rules", essentially, and "other" is "You have to learn the rules". For instance in C++ individual functions are first class values, as long as they're stateless. Overload sets are not but lambdas are. In C# functions are generally first-class values but there's some awkward cases that arise when dealing with type inference that prevent them from being in all cases.
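If it helps to see the criteria in code, here is a Java sketch in which a function value is assigned to a variable, passed as a parameter, and returned from a method, i.e. treated as first class (the names are arbitrary):

    import java.util.function.IntUnaryOperator;

    public class FirstClassFunctions {

        // returned from a method...
        static IntUnaryOperator adder(int n) {
            return x -> x + n;
        }

        // ...passed as a parameter...
        static int applyTwice(IntUnaryOperator f, int x) {
            return f.applyAsInt(f.applyAsInt(x));
        }

        public static void main(String[] args) {
            IntUnaryOperator addFive = adder(5);          // ...and assigned to a variable
            System.out.println(applyTwice(addFive, 1));   // prints 11
        }
    }

A value that could only appear in one of those three positions would fall into the "you have to learn the rules" bucket described above.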
{ "source": [ "https://softwareengineering.stackexchange.com/questions/332120", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/699/" ] }
332,371
I am designing my own little OOP program to simulate Vampires, Wolves, Humans and Trucks and am trying to implement my own limited understanding of Interfaces. ( I am still abstracting here and have no code implementation yet, so it's rather a question of OOP design...I think!) Am I right in looking for 'common behaviour' between these classes and implementing them as interfaces ? For example, Vampires and Wolves bite...so should I have a bite interface? public class Vampire : Villain, IBite, IMove, IAttack Likewise for Trucks... public class Truck : Vehicle, IMove And for Humans... public class Man : Human, IMove, IDead Is my thinking right here? (Appreciate your help)
In general you want to have interfaces for common characteristics of your clasess. I semi-agree with @Robert Harvey in the comments, who said that usually interfaces represent more abstract features of classes. Nevertheless, I find starting from more concrete examples a good way of starting to think abstract. While your example is technically correct (i.e. yes, both vampires and wolves bite, so you can have an interface for that), there is a question of relevance. Each object has thousands of characteristics (e.g. animals may have fur, can swim, can climb trees, and so on). Will you make an interface for all of them? Very less likely. You usually want interfaces for things that make sense to be grouped in an application as a whole. For example, if you are building a game, you can have an array of IMove objects and update their position. If you don't want to do that, having the IMove interface is pretty useless. The point is, don't over engineer. You need to think about how are you going to use that interface, and 2 classes having a method in common is not a good enough reason to create an interface.
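To illustrate the "how will you use it" test with the question's own example, a minimal Java sketch; the interface only pays off because some game loop treats all movers uniformly, and the method signatures are invented:

    import java.util.List;

    interface IMove {
        void move(double dt);
    }

    class Vampire implements IMove {
        public void move(double dt) { /* lurch dt seconds forward */ }
    }

    class Truck implements IMove {
        public void move(double dt) { /* drive dt seconds forward */ }
    }

    public class World {
        // This loop is the reason IMove is worth having at all.
        static void update(List<IMove> movers, double dt) {
            for (IMove m : movers) {
                m.move(dt);
            }
        }

        public static void main(String[] args) {
            update(List.of(new Vampire(), new Truck()), 0.016);
        }
    }

If no code ever iterates over "things that move" (or "things that bite") as a group, the corresponding interface is probably not earning its keep.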
{ "source": [ "https://softwareengineering.stackexchange.com/questions/332371", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/207419/" ] }
332,456
This is more of a nomenclature (technical writing) rather than a purely technical question. I am trying to write a refactoring proposal (and get it assigned to myself) centered around expanding dependency injection in our application. While we do use Spring for autowiring beans, there are still instances that instantiate beans using MyClass obj = new MyClass(...) , which could totally be injected. I would like to make my proposal use elegant nomenclature and refer to the design pattern opposite of DI with a proper term. Is "tight coupling" an adequate term that stands as an antonym to DI?
No. Tight coupling is much more than what dependency injection deals with. Dependency injection externalizes a decision of implementation. This goes a long way to decouple but coupling is more than just this. A good antonym for dependency injection is hard coding a dependency. When you construct (use new or directly use some factory) inside a behavior object you've smushed together two different concerns. A service locator helps decouple but leaves you coupled to the service locator itself. Coupling is more than just separating construction and behavior. If I have 101 methods that have to be called in some particular order from class A to class B, I'm tightly coupled. Doesn't matter how wonderfully separated construction and behavior are. Coupling is a measure of the interdependence of two objects. Anything that contributes to making it difficult to make changes in one without impacting the other is contributing to coupling. Dependency injection helps with this but it is not all of this.
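For the proposal, the contrast can be made concrete with a short Java sketch; the Mailer/ReportService names are placeholders, not anything from the actual codebase:

    interface Mailer { void mail(String body); }

    class SmtpMailer implements Mailer {
        SmtpMailer(String host) { /* connect lazily */ }
        public void mail(String body) { /* send via SMTP */ }
    }

    class Report { String toText() { return "report body"; } }

    // Hard-coded dependency: the choice of implementation is buried inside
    // the behavior class, so it cannot vary or be faked in a test.
    class ReportService {
        private final Mailer mailer = new SmtpMailer("smtp.example.com");
        void send(Report r) { mailer.mail(r.toText()); }
    }

    // Injected dependency: the same decision is externalized to the caller
    // (or to Spring), so this class neither knows nor cares which Mailer it gets.
    class InjectedReportService {
        private final Mailer mailer;
        InjectedReportService(Mailer mailer) { this.mailer = mailer; }
        void send(Report r) { mailer.mail(r.toText()); }
    }

So "hard-coded dependencies" is an accurate name for what the refactoring removes; just don't sell it as removing tight coupling wholesale.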
{ "source": [ "https://softwareengineering.stackexchange.com/questions/332456", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/66237/" ] }
332,757
Logging is something that is necessary but is (relatively) rarely used. As such it can be made much more compact in terms of storage. For example the data most commonly logged like ip, date, time and other data that can be represented as an integer is being stored as text. If logging was stored as binary data, a lot of space could be preserved thus requiring less rotation and increasing disk lifespan, especially with SSDs where writes are limited. Some may say that it is such a minor issue that it does not really matter, but taking in consideration the effort needed to build such mechanism it makes no sense not to. Anyone can make this for like two days in his spare time, why don't people do this?
systemd famously stores its log files in binary format. The main issues I have heard with it are: if the log gets corrupted it's hard to recover, as it needs specialist tooling; and the logs are not human readable, so you can't use standard tools such as vi, grep, tail etc. to analyse them. The main reason for using a binary format (to my knowledge) was that it was deemed easier for creating indices etc., i.e. to treat it more like a database file. I would argue that the disk space advantage is relatively small (and diminishing) in practice. If you want to store large amounts of logging then zipping rolled logs is really quite efficient. On balance, the advantages of tooling and familiarity probably would err on the side of text logging in most cases.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/332757", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/143301/" ] }
332,806
My understanding from reading threads like this one is that the point of ems is to define all measurements in your webpage, by the base font size , which can be set by your browser. For example, in Chrome you can do this by going settings -> show advanced settings -> web content -> font size: very large . I might do this if I were using a large, high resolution monitor, that was far away. I created a plunker that demonstrates the difference between ems and px in sizing. #div1 { width: 320px } #div2 { width: 20em; } If in my browser I set font size to medium, these divs will be the same size, base font size being 16px, so 20em = 320px. However, when I change my browser font size up to very large, we can see that the div measured in ems has increased size. However, this effect will be negated if I define the font size in the body tag for example. body { font-size: 16px; } Because now my css is overriding the font size set by the browser. I get that ems would have been important in the days of older browsers, where zooming on the page would only scale up fonts. But these days modern browsers scale up both pixels and fonts, rendering the zoom issue moot. Looking around the web - a lot of websites do set font size in their body tag. Stack Overflow for example, sets font-size to 13px in the body tag. Setting font size in my browser doesn't affect the layout of Stack Overflow in the slightest. Google search results doesn't do this. (both these screenshots taken with chrome font size set at very large, and 100% zoom). So perhaps you could argue that setting font size in the body tag, is a bad idea because it prevents a user's own accessibility settings. But given that the user can zoom to increase the sizes (which will proportionally increase all the pixels too) - this doesn't seem like a real problem.
However, this effect will be negated if I define the font size in the body tag for example. body { font-size: 16px; } Because now my css is overriding the font size set by the browser. You are forgetting about accessibility. See C14: Using em units for font sizes from the Web Content Accessibility Guidelines (WCAG 2.0). Set the font size option of your browser to Very large. Now go to a website such as W3C. What does the text look like? This is what this option is about. Developers of StackExchange and a lot of other websites decided that their choices are more important than the choices of users who may be visually impaired. Whether this choice is right or wrong is outside the scope of this question, and the answer is not necessarily straightforward. You, however, may be forced (including by law for some websites) to make different decisions, and to let the users select their font size. From there, you have a choice: either you specify the size of the images and different zones of the page in pixels, in which case the font size option of the browser will have fun effects on the layout, or you use em for those elements as well. Also note that if you don't specify the font size in pixels at the body level, then stick with em at least for font size everywhere. For instance, Google specified the font size inconsistently, and their search page looks weird, especially with titles appearing smaller than ordinary text. Wikipedia, on the other hand, did an excellent job by using em whenever the font size was specified. Below are the screenshots of both sites rendered by Chromium with the font size option set to Very large: So perhaps you could argue that setting font size in the body tag, is a bad idea because it prevents a user's own accessibility settings. But given that the user can zoom to increase the sizes (which will proportionally increase all the pixels too) - this doesn't seem like a real problem. This is a real problem for two reasons. First, most browsers store the zoom factor site by site. If you have a visual impairment, you'll be forced to zoom, again and again, on every site you visit. That sucks. Given the current indifference of web designers and developers towards accessibility, visually impaired persons have to do it anyway, but this doesn't mean it should stay this way. Second, too many websites won't behave well visually when zoomed either. Responsive websites do well, but non-responsive ones will exhibit either horizontal scroll or some sort of “your screen is too small” behavior. There is also a fundamental difference between zooming and text scale for a developer. While em means “adjust that to the text size preferences of the user”, zoom is about scaling the whole page up or down, not just text. Take the example of the site menu (at the top of the page, with links being positioned in a line). With zooming, you know that the height of the menu will change proportionally to the other elements. With the text size option, it's up to you to determine the behavior. You may assume that a visually impaired person will be able to see your 50px-height menu in all cases: nobody will miss a black menu on a white background. Or you may decide that its height should stay proportional to the text, for instance 1.5em.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/332806", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/109776/" ] }
332,820
This question is inspired by a question about final in Java . In C/C++, should I use const whenever possible? I know there is already a related question about using const in parameters . Unfortunately that question and its answers don't fully answer my question, because it's only about function parameters, but I would also like to know about other cases (e.g. local variables). Also, almost all answers to that question say we should use const because it contains useful information about the accessibility of variables. But this seems to conflict with an answer about using final in Java which states final may be superfluous if it doesn't contain extra information and so it should be omitted to keep the code short and clean. So, should I use const whenever possible? If so, why is the advice for const in C++ different from the advice for final in Java?
First of all, since you referenced Java's final , that is a totally different beast than const . Final means the reference cannot change, but says nothing about mutability. Const goes further by saying "a const reference cannot mutate" which is a much stronger guarantee. To do this in Java, internal state must be final and determined at construction time. Const is a lot easier to use, and an existing object can be "promoted" into a const reference. Yes, you should use const whenever possible. It makes a contract that your code will not change something. Remember, a non-const variable can be passed in to a function that accepts a const parameter. You can always add const, but not take it away (not without a const cast which is a really bad idea). Const-correctness may be tedious at times, but it helps guarantee immutability. This is crucial in multi-threaded code where threads share objects. It makes certain tasks more efficient: instead of copying state, just reuse the same immutable object. Libraries might accept const parameters in order to provide a guarantee to the programmer that no, your object will not change in an unpredictable way in the black hole that is the library's guts.
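The rebinding-versus-mutation distinction drawn above is language-agnostic, so here is a small sketch in Python rather than the C++/Java of the question (typing.Final is the closest stand-in for Java's final); the names are invented for illustration only.

    # Sketch of "final vs const": typing.Final only prevents rebinding the name
    # (like Java's final); it says nothing about mutating the object behind it.
    from typing import Final

    SETTINGS: Final = ["debug"]      # the *name* SETTINGS is final

    SETTINGS.append("verbose")       # allowed: the list itself can still mutate
    # SETTINGS = []                  # a type checker such as mypy rejects this rebinding

    # C++-style const-ness would forbid both rebinding and mutation; the closest
    # Python analogue is handing out an immutable view instead of the mutable object.
    READ_ONLY_VIEW: Final = tuple(SETTINGS)   # callers get a value they cannot mutate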
{ "source": [ "https://softwareengineering.stackexchange.com/questions/332820", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/248528/" ] }
332,892
In many languages, the syntax function_name(arg1, arg2, ...) is used to call a function. When we want to call the function without any arguments, we must do function_name() . I find it strange that a compiler or interpreter would require () in order to actually detect it as a function call. If something is known to be callable, why wouldn't function_name; be enough? On the other hand, in some languages we can do: function_name 'test'; or even function_name 'first' 'second'; to call a function or a command. I think parentheses would have been better if they were only needed to declare the order of priority, and in other places were optional. For example, doing if expression == true function_name; should be as valid as if (expression == true) function_name(); . An especially interesting case is writing 'SOME_STRING'.toLowerCase() when clearly no arguments are needed by the prototype function. Why did the designers decide against the simpler 'SOME_STRING'.lower design? Disclaimer: Don't get me wrong, I quite love the C-like syntaxes! I'm just asking for the reasoning behind it. Does requiring () have any actual advantages, or does it simply make the code more human readable?
For languages that use first-class functions , it's quite common that the syntax for referring to a function is: a = object.functionName while the act of calling that function is: b = object.functionName() a in the above example would be a reference to the above function (and you could call it by doing a() ), while b would contain the return value of the function. While some languages can do function calls without parentheses, it can get confusing whether they are calling the function, or simply referring to the function.
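Python happens to make this distinction very visible, so a short sketch (assuming nothing beyond a plain function) may help:

    def greet():
        return "hello"

    a = greet      # no parentheses: 'a' is now a reference to the function itself
    b = greet()    # parentheses: the function is called, 'b' holds its return value

    print(a)       # <function greet at 0x...>   (a function object)
    print(a())     # "hello"                      (calling through the reference)
    print(b)       # "hello"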
{ "source": [ "https://softwareengineering.stackexchange.com/questions/332892", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/188871/" ] }
333,317
I've been doing some functional JavaScript. I had thought that Tail-Call Optimization had been implemented, but as it turns out I was wrong. Thus, I've had to teach myself Trampolining . After a bit of reading here and elsewhere, I was able to get the basics down and constructed my first trampoline: /*not the fanciest, it's just meant to reenforce that I know what I'm doing.*/ function loopy(x){ if (x<10000000){ return function(){ return loopy(x+1) } }else{ return x; } }; function trampoline(foo){ while(foo && typeof foo === 'function'){ foo = foo(); } return foo; /*I've seen trampolines without this, mine wouldn't return anything unless I had it though. Just goes to show I only half know what I'm doing.*/ }; alert(trampoline(loopy(0))); My biggest issue, is I don't know why this works. I get the idea of rerunning the function in a while loop instead of using a recursive loop. Except, technically my base function already has a recursive loop. I'm not running the base loopy function, but I am running the function inside of it. What's stopping foo = foo() from causing a stack overflow? And isn't foo = foo() technically mutating, or am I missing something? Perhaps it's just a necessary evil. Or some syntax I'm missing. Is there even a way to understand it? Or is it just some hack that somehow works? I've been able to make my way through everything else, but this one has me befuzzled.
The reason your brain is rebelling against the function loopy() is that it is of an inconsistent type : function loopy(x){ if (x<10000000){ return function(){ // On this line it returns a function... // (This is not part of loopy(), this is the function we are returning.) return loopy(x+1) } }else{ return x; // ...but on this line it returns an integer! } }; Quite a lot of languages don't even let you do things like this, or at least demand a lot more typing to explain just how this is supposed to make any kind of sense. Because it really doesn't. Functions and integers are totally different kinds of objects. So let's go through that while loop, carefully: while(foo && typeof foo === 'function'){ foo = foo(); } Initially, foo is equal to loopy(0) . What is loopy(0) ? Well, it's less than 10000000, so we get function(){return loopy(1)} . That's a truthy value, and it's a function, so the loop keeps going. Now we come to foo = foo() . foo() is the same as loopy(1) . Since 1 is still less than 10000000, that returns function(){return loopy(2)} , which we then assign to foo . foo is still a function, so we keep going... until eventually foo is equal to function(){return loopy(10000000)} . That's a function, so we do foo = foo() one more time, but this time, when we call loopy(10000000) , x is not less than 10000000 so we just get x back. Since 10000000 is also not a function, this ends the while loop as well.
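The same machinery can be sketched in Python, where the default recursion limit of about 1000 frames makes the payoff easy to see; the names mirror the JavaScript in the question but are otherwise hypothetical:

    def loopy(x):
        if x < 100_000:                   # well past Python's default recursion limit
            return lambda: loopy(x + 1)   # return a thunk instead of recursing deeper
        return x                          # ...or return the final value

    def trampoline(value):
        # Each thunk returns before the next one starts, so the stack never grows:
        # only one frame of loopy is alive at any moment.
        while callable(value):
            value = value()
        return value

    print(trampoline(loopy(0)))   # 100000, with no RecursionError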
{ "source": [ "https://softwareengineering.stackexchange.com/questions/333317", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/246851/" ] }
333,487
I'm currently developing a web application for government land planning. The application runs mostly in the browser, using ajax to load and save data. I will do the initial development, and then graduate (it's a student job). After this, the rest of the team will add the occasional feature as needed. They know how to code, but they're mostly land-planning experts. Considering the pace at which Javascript technologies change, how can I write code that will still work 20 years from now? Specifically, which libraries, technologies, and design ideas should I use (or avoid) to future-proof my code?
Planning software for such a lifespan is difficult, because we don't know what the future holds. A bit of context: Java was published 1995, 21 years ago. XmlHttpRequest first became available as a proprietary extension for Internet Explorer 5, published 1999, 17 years ago. It took about 5 years until it became available across all major browsers. The 20 years you are trying to look ahead are just about the time rich web applications have even existed. Some things have certainly stayed the same since then. There has been a strong standardization effort, and most browsers conform well to the various standards involved. A web site that worked across browsers 15 years ago will still work the same, provided that it worked because it targeted the common subset of all browsers, not because it used workarounds for each browser. Other things came and went – most prominently Flash. Flash had a variety of problems that led to its demise. Most importantly, it was controlled by a single company. Instead of competition inside the Flash platform, there was competition between Flash and HTML5 – and HTML5 won. From this history, we can gather a couple of clues: Keep it simple: Do what works right now, without having to use any workarounds. This behaviour will likely stay available long into the future for backwards-compatibility reasons. Avoid reliance on proprietary technologies, and prefer open standards. The JavaScript world today is relatively volatile with a high flux of libraries and frameworks. However, nearly none of them will matter in 20 years – the only “framework” I'm certain that will still be used by then is Vanilla JS . If you want to use a library or tool because it really makes development a lot easier, first make sure that it's built on today's well-supported standards. You must then download the library or tool and include it with your source code. Your code repository should include everything needed to get the system runnable. Anything external is a dependency that could break in the future. An interesting way to test this is to copy your code to a thumb drive, go to a new computer with a different operating system, disconnect it from the internet, and see whether you can get your frontend to work. As long as your project consists of plain HTML+CSS+JavaScript plus perhaps some libraries, you're likely going to pass.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/333487", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/61422/" ] }
333,643
Background I am designing a language, as a side project. I have a working assembler, static analyser, and virtual machine for it. Since I can already compile and run non-trivial programs using the infrastructure I've built I thought about giving a presentation at my university. During my talk I mentioned that the VM provides a type system, was asked " What is your type system for? ". After answering I got laughed at by the person asking the question. Thus, even though I am almost certainly going to lose reputation for asking this question, I turn to Programmers. My understanding As I understand them, type systems are used to provide additional layer of information about entities in a program, so that the runtime, or the compiler, or any other piece of machinery, knows what to do with the strings of bits it operates on. They also help maintain contracts - the compiler (or code analyser, or runtime, or any other program) can verify that at any given point the program operates on values programmers expect it to operate on. Types can also be used to provide information to those human programmers. For example, I find this declaration: function sqrt(double n) -> double; more useful than this one sqrt(n) The former gives plenty of information: that the sqrt identifier is a function, takes a single double as input, and produces another double as output. The latter tells you that it is probably a function taking a single parameter. My answer So, after being asked "What is your type system for?" I answered as follows: The type system is dynamic (types are assigned to values, not to variables holding them), but strong without surprising coercion rules (you can't add string to integer as they represent incompatible types, but you can add integer to floating point number). The type system is used by the VM to ensure that operands for instructions are valid; and can be used by programmers to ensure that parameters passed to their functions are valid (i.e. of correct type). The type system supports subtyping and multiple inheritance (both features are available to programmers), and types are considered when dynamic dispatch of methods on objects is used - VM uses types to check by what function is a given message implemented for given type. The follow-up question was "And how is type assigned to a value?". So I explained that all values are boxed, and have a pointer pointing to a type definition structure which provides information about name of the type, what messages it responds to, and what types it inherits from. After that, I got laughed at, and my answer was dismissed with the comment "That is not a real typesystem.". So - if what I described does not qualify as a "real typesystem", what would? Was that person right that what I provide cannot be considered a typesystem?
That all seems like a fine description of what type systems provide. And your implementation sounds like a reasonable enough one for what it's doing. For some languages, you won't need the runtime information since your language doesn't do runtime dispatch (or you do single dispatch via vtables or another mechanism, so don't need the type information). For some languages, just having a symbol/placeholder is sufficient since you only care about type equality, not its name or inheritance. Depending on your environment, the person may have wanted more formalism in your type system. They want to know what you can prove with it, not what programmers can do with it. This is pretty common in academia unfortunately. Though academics do such things because it's pretty easy to have flaws in your type system that allow things to escape correctness. It's possible they spotted one of these. If you had further questions, Types and Programming Languages is the canonical book on the subject and can help you to learn some of the rigor needed by academics, as well as some of the terminology to help describe things.
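For readers who want to see the kind of runtime representation the question describes in executable form, here is a deliberately tiny Python sketch: boxed values carrying a pointer to a type-definition structure, with message dispatch walking the supertype list. All names are invented for illustration; this is not the asker's VM.

    class TypeDef:
        def __init__(self, name, bases=(), methods=None):
            self.name = name
            self.bases = bases            # supports multiple inheritance
            self.methods = methods or {}  # message name -> implementation

        def resolve(self, message):
            if message in self.methods:
                return self.methods[message]
            for base in self.bases:       # depth-first lookup through supertypes
                found = base.resolve(message)
                if found is not None:
                    return found
            return None

    class Boxed:
        def __init__(self, typedef, payload):
            self.typedef = typedef        # the "pointer to the type definition"
            self.payload = payload

    def send(receiver, message, *args):
        impl = receiver.typedef.resolve(message)
        if impl is None:
            raise TypeError(f"{receiver.typedef.name} does not respond to {message!r}")
        return impl(receiver, *args)

    Number = TypeDef("Number", methods={"describe": lambda self: f"number {self.payload}"})
    Integer = TypeDef("Integer", bases=(Number,))   # inherits 'describe'

    print(send(Boxed(Integer, 42), "describe"))     # number 42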
{ "source": [ "https://softwareengineering.stackexchange.com/questions/333643", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/200203/" ] }
333,755
I'm designing an application using Micro-Services and I'm unsure of the best mechanism to use to collect data from multiple services. I believe there are two options: Integrate an 'inter-service' communication mechanism that allows the services to talk directly. The API Gateway would call an individual service, which then calls other services to collect data, before returning the consolidated response to the API Gateway. The API then returns the response to the caller. (These would have to be synchronous calls when the call to serviceB requires the response from serviceA, i.e. separate Person and Address Services.) Have the API Gateway call each service directly and consolidate the data within the API before returning the response. I'm leaning towards the second option, as having the services talk to each other would introduce coupling, in which case I might as well just architect a monolithic application. However, there are a few serious drawbacks that I can think of off the top of my head with this option: Having the API execute multiple calls to multiple services increases the load on the API server, especially when some of those calls are blocking. This method would mean the API has to be aware of what the application is trying to do (i.e. logic would have to be programmed into the API to handle calling the services in turn, and then to consolidate the data), rather than just act as a dumb 'endpoint' for the micro-services. I'd like to know what the standard approach to this problem is and if there is another third option that I'm missing?
I would generally advise against having microservices do synchronous communication with each other. The big issue is coupling: it means the services are now coupled to each other, and if one of them fails the second is now fully or partially dysfunctional. I would make a clear distinction between state changing operations and read operations (CQS, Command Query Separation). For state changing operations I would use some kind of messaging infrastructure and go for fire and forget. For queries you would use synchronous request-response communication and could use an HTTP API or just go directly to your data store. If you are using messaging then you can also look at publish subscribe for raising events between services. Another point to consider is (transactional) data sharing (as opposed to read-only views): if you expose your internal state, the reader might get the wrong state of your data, or the wrong version, and you may also end up locking your data. Last but not least, try to do everything you can to keep your services autonomous (at least at the logical level). Hope this makes sense.
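To make the command/query split concrete, here is a toy, in-process Python sketch (a real system would use a broker such as RabbitMQ or Kafka; every name below is hypothetical). State changes are published as fire-and-forget events, and queries are plain synchronous reads against a local read model:

    from collections import defaultdict

    class MessageBus:
        def __init__(self):
            self.subscribers = defaultdict(list)

        def subscribe(self, topic, handler):
            self.subscribers[topic].append(handler)

        def publish(self, topic, payload):
            # Fire and forget: the publisher does not wait on, or even know
            # about, the services that react to the event.
            for handler in self.subscribers[topic]:
                handler(payload)

    bus = MessageBus()
    orders = {}   # another service's local read model

    # The read-side service keeps its own view up to date from events.
    bus.subscribe("order_placed", lambda order: orders.update({order["id"]: order}))

    # Command side: publish an event, don't call the other service directly.
    bus.publish("order_placed", {"id": 1, "total": 30})

    # Query side: a synchronous read against the (local) read model.
    print(orders[1])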
{ "source": [ "https://softwareengineering.stackexchange.com/questions/333755", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/143983/" ] }
333,799
Imagine you are creating a video player in JavaScript. This video player loops the user's video repeatedly. Each time a new loop begins the player runs a recursive function that calls itself N times, N being the number of times the video has looped, and because of that the browser will trigger a too much recursion RangeError at some time. Probably no one will use the loop feature that much. Your application will never throw this error, not even if the user left the application looping for a week, but it still exists. Solving the problem will require you to redesign the way looping works in your application, which will take a considerable amount of time. What do you do? Why? Fix the bug Leave the bug Shouldn't you only fix bugs people will stumble in? When does bugfixing become overkill, if it ever does?
You have to be pragmatic. If the error is unlikely to be triggered in the real world and the cost to fix is high, I doubt many people would consider it a good use of resources to fix. On that basis I'd say leave it but ensure the hack is documented for you or your successor in a few months (see last paragraph). That said, you should use this issue as a "learning experience" and the next time you do looping do not use a recursive loop unnecessarily. Also, be prepared for that bug report. You'd be amazed how good end users are at pushing against the boundaries and uncovering defects. If it does become an issue for end users, you're going to have to fix it - then you'll be glad you documented the hack.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/333799", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/250194/" ] }
333,853
After some serious quality problems in the last year, my company has recently introduced code reviews. The code review process was quickly introduced, without guidelines or any kind of checklist. Another developer and I were chosen to review all changes made to the systems, before they are merged into the trunk. We were also chosen as "Technical Lead". This means we are responsible for code quality, but we don't have any authority to implement changes in the process, reassign developers, or hold back projects. Technically we can deny the merge, giving it back to development. In reality this ends almost always with our boss demanding that it be shipped on time. Our manager is an MBA who is mostly concerned with creating a schedule of upcoming projects. While he is trying, he has almost no idea what our software does from a business point of view, and is struggling to understand even the most basic customer demands without explanation from a developer. Currently development is done in development branches in SVN; after the developer thinks he is ready, he reassigns the ticket in our ticketing system to our manager. The manager then assigns it to us. The code reviews have led to some tensions within our team. Especially some of the older members question the changes (I.e. "We always did it like this" or "Why should the method have a sensible name, I know what it does?"). After the first few weeks my colleague started to let things slide, to not cause trouble with the co-workers (she told me herself, that after a bug report was filed by a customer, that she knew of the bug, but feared that the developer would be mad at her for pointing it out). I, on the other hand, am now known for being an ass for pointing out problems with the committed code. I don't think that my standards are too high. My checklist at the moment is: The code will compile. There is at least one way the code will work. The code will work with most normal cases. The code will work with most edge cases. The code will throw reasonable exception if inserted data is not valid. But I fully accept the responsibility of the way I give feedback. I'm already giving actionable points explaining why something should be changed, sometimes even just asking why something was implemented in a specific way. When I think it is bad, I point out that I would have developed it in another way. What I'm lacking is the ability to find something to point out as "good". I read that one should try to sandwich bad news in good news. But I'm having a hard time finding something that is good. "Hey this time you actually committed everything you did" is more condescending than nice or helpful. Example Code Review Hey Joe, I have some questions about your changes in the Library\ACME\ExtractOrderMail Class. I didn't understand why you marked "TempFilesToDelete" as static? At the moment a second call to "GetMails" would throw an exception, because you add Files to it but never remove them, after you deleted them. I know that the function is just called once per run, but in the future this might change. Could you just make it an instance variable, then we could have multiple objects in parallel. ... (Some other points that don't work) Minor points: Why does "GetErrorMailBody" take an Exception as Parameter? Did I miss something? You are not throwing the exception, you just pass it along and call "ToString". Why is that? SaveAndSend Isn't a good name for the Method. This Method sends error mails if the processing of a mail went wrong. 
Could you rename it to "SendErrorMail" or something similar? Please don't just comment out old code, delete it outright. We still have it in subversion.
How to find positive things in a code review? After some serious quality problems in the last year, my company has recently introduced code reviews. Great, you have a real opportunity to create value for your firm. After the first few weeks my colleague started to let things slide, to not cause trouble with the co-workers (she told me herself, that after a bugreport was filed by a customer, that she knew of the bug, but feared that the developer would be mad at her for pointing it out). Your coworker should not be doing code review if she can't handle telling developers what's wrong with their code. It's your job to find problems and get them fixed before they affect customers. Likewise, a developer who intimidates coworkers is asking to be fired. I've felt intimidated after a code-review - I told my boss, and it was handled. Also, I like my job, so I kept up the feedback, positive and negative. As a reviewer, that's on me, not anyone else. I, on the other hand, am now known for being an ass for pointing out problems with the committed code. Well, that's unfortunate, you say you're being tactful. You can find more to praise, if you have more to look for. Critique the code, not the author You give an example: I have some questions about your changes in Avoid using the words "you" and "your", say, "the" changes instead. Did I miss something? [...] Why is that? Don't add rhetorical flourishes to your critiques. Don't make jokes, either. There's a rule I've heard, "If it makes you feel good to say, don't say it, it's no good." Maybe you're buffing your own ego at someone else's expense. Keep it to just the facts. Raise the bar by giving positive feedback It raises the bar to praise your fellow developers when they meet higher standards. So that means the question, How to find positive things in a code review? is a good one, and worth addressing. You can point out where the code meets ideals of higher level coding practices. Look for them to follow best practices, and to keep raising the bar. After the easier ideals become expected of everyone, you'll want to stop praising these and look for even better coding practices for praise. Language specific best practices If the language supports documentation in code, namespaces, object-oriented or functional programming features, you can call those out and congratulate the author on using them where appropriate. These matters usually fall under style-guides: Does it meet in-house language style guide standards? Does it meet the most authoritative style guide for the language (which is probably more strict than in-house - and thus still compliant with the in-house style)? Generic best practices You could find points to praise on generic coding principles, under various paradigms. For example, do they have good unittests? Do the unittests cover most of the code? Look for: unit tests that test only the subject functionality - mocking expensive functionality that is not intended to be tested. high levels of code coverage, with complete testing of APIs and semantically public functionality. acceptance tests and smoke tests that test end-to-end functionality, including functionality that is mocked for unit tests. good naming, canonical data points so code is DRY (Don't Repeat Yourself), no magic strings or numbers. variable naming so well done that comments are largely redundant. 
cleanups, objective improvements (without tradeoffs), and appropriate refactorings that reduce lines of code and technical debt without making the code completely foreign to the original writers. Functional Programming If the language is functional, or supports the functional paradigm, look for these ideals: avoiding globals and global state using closures and partial functions small functions with readable, correct, and descriptive names single exit points, minimizing number of arguments Object Oriented Programming (OOP) If the language supports OOP, you can praise the appropriate usage of these features: encapsulation - provides a cleanly defined and small public interface, and hides the details. inheritance - code reused appropriately, perhaps through mixins. polymorphism - interfaces are defined, perhaps abstract base classes, functions written to support parametric polymorphism. under OOP, there are also SOLID principles (maybe some redundancy to OOP features): single responsibility - each object has one stakeholder/owner open/closed - not modifying the interface of established objects Liskov substitution - subclasses can be substituted for instances of parents interface segregation - interfaces provided by composition, perhaps mixins dependency inversion - interfaces defined - polymorphism... Unix programming principles : Unix principles are modularity, clarity, composition, separation, simplicity, parsimony, transparency, robustness, representation, least surprise, silence, repair, economy, generation, optimization, diversity, and extensibility. In general, these principles can be applied under many paradigms. Your criteria These are far too trivial - I would feel condescended to if praised for this: The code will compile. There is at least one way the code will work. The code will work with most normal cases. On the other hand, these are fairly high praise, considering what you seem to be dealing with, and I wouldn't hesitate to praise developers for doing this: The code will work with most edge cases. The code will throw reasonable exception if inserted data is not valid. Writing down rules for passing code review? That's a great idea in theory, however, while I wouldn't usually reject code for bad naming, I've seen naming so bad that I would reject the code with instructions to fix it. You need to be able to reject the code for any reason. The only rule I can think of for rejecting code is there's nothing so egregious that I would keep it out of production. A really bad name is something that I would be willing to keep out of production - but you can't make that a rule. Conclusion You can praise best practices being followed under multiple paradigms, and probably under all of them, if the language supports them.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/333853", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/215255/" ] }
333,863
Say I have a function that takes some arguments that fall under a bunch of constraints, e.g. one of them has to be an integer, another has to be a string of length at least 10, et cetera. In general, is it the function writer's (i.e. my) responsibility to have a bunch of assert s or whatever to enforce these preconditions, and throw errors if any are violated? Or should I just let the function proceed anyway and have the caller deal with any errors or weird output they get from it? My reasoning is that on one hand, it's simpler in terms of debugging if an error is thrown immediately when a precondition is violated, but on the other hand, it might be easier overall for the caller to read the documentation of the function rather than me padding it with a whole load of assert s and if -statements. As a side question, if I'm writing functions that only I will be using, is the answer the same, or do I just do whatever I feel most comfortable with?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/333863", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/65792/" ] }
334,136
I suppose that this is a common situation: I test some code, discover a bug, fix it and commit the bug-fix to the repository. Assuming that many people work on this project, should I first create a bug report, assign it to myself, and refer to it in the commit message (e.g. "Fix bug #XYZ. The bug was due to X and Y. Fixed it by Q and R")? Alternatively, I can skip bug report and commit with a message such as "Fixed a bug that caused A when B. The bug was due to X and Y. Fixed it by Q and R". What is considered a better practice?
It depends on who the audience of a bug report is. If it is only looked at internally by developers, to know what needs to be fixed, then don't bother. It's just noise at that point. Non-exhaustive list of reasons to log anyway: Release-notes include information about fixed bugs (to some threshold which this bug meets) - especially if there is a vulnerability exposed by this bug Management wants a notion of "Time spent bugfixing" / "Detected bug count", etc. Customers can see the current state of the bugtracker (to see if their issue is known about, etc.) Testers get information about a change that they should test for.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/334136", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/250728/" ] }
334,289
For general-purpose programming there are literally hundreds of programming languages. But for interacting/querying the databases, why is SQL pretty much the only used language?
In addition to Basile's answer, please also recognize that SQL is not a language like you would think of an object-oriented language or procedural language. In many ways the ANSI SQL standard is more like a protocol or a series of generally accepted statements based on mathematical principles of set theory , predicate logic and relational algebra . But how individual RDBMS developers implement these standards vary significantly across various proprietary software enough so to almost classify each individual implementation as a language of its own. For instance, Oracle SQL is quite different from Microsoft SQL Server SQL which is different to MySQL , etc. On top of this, each company implements their own unique functions, database engine and (in some cases) procedural languages on top of the traditional ANSI SQL standard statements. Some even choose to abandon the standard at times in favor of their own personal implementation. The history of this marriage of SQL to the relational model is primarily due to the fact that the technology and language were developed side-by-side in the 1970s by E.F. Codd , Donald D. Chamberlin and Raymond F. Boyce. There are some pretty decent articles on Wikipedia around the topic if you have an opportunity to read about it.
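The set-theory and relational-algebra roots mentioned above are easy to see outside SQL as well; as a rough illustration in plain Python (hypothetical data), selection, projection and a join are just filtered comprehensions over collections of rows:

    people = [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Edgar"}]
    cities = [{"person_id": 1, "city": "London"}, {"person_id": 2, "city": "Ann Arbor"}]

    # SELECT name FROM people WHERE id = 1        (selection + projection)
    names = [p["name"] for p in people if p["id"] == 1]

    # SELECT name, city FROM people JOIN cities ON id = person_id
    joined = [(p["name"], c["city"])
              for p in people
              for c in cities
              if p["id"] == c["person_id"]]

    print(names)   # ['Ada']
    print(joined)  # [('Ada', 'London'), ('Edgar', 'Ann Arbor')]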
{ "source": [ "https://softwareengineering.stackexchange.com/questions/334289", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/118794/" ] }
334,417
I understand that, save for breaking out of loops nested in loops, the goto statement is avoided and reviled as a bug-prone style of programming, never to be used. Alt Text: "Neal Stephenson thinks it's cute to name his labels 'dengo' " See the original comic at: http://xkcd.com/292/ Because I learned this early, I don't really have any insight or experience on what types of bugs goto actually leads to. So what are we talking about here: Instability? Unmaintainable or unreadable code? Security vulnerabilities? Something else entirely? What kind of bugs do "goto" statements actually lead to? Are there any historically significant examples?
Why is goto dangerous? goto doesn't cause instability by itself. Despite about 100,000 goto s, the Linux kernel is still a model of stability. goto by itself should not cause security vulnerabilities. In some languages however, mixing it with try / catch exception management blocks could lead to vulnerabilities as explained in this CERT recommendation . Mainstream C++ compilers flag and prevent such errors, but unfortunately, older or more exotic compilers don't. goto causes unreadable and unmaintainable code. This is also called spaghetti code , because, like in a spaghetti plate, it's very difficult to follow the flow of control when there are too many gotos. Even if you manage to avoid spaghetti code and if you use only a few gotos, they still facilitate bugs like resource leaking: Code using structured programming, with clear nested blocks and loops or switches, is easy to follow; its flow of control is very predictable. It's therefore easier to ensure that invariants are respected. With a goto statement, you break that straightforward flow, and break the expectations. For example, you might not notice that you still have to free resources. Many gotos in different places can send you to a single goto target. So it's not obvious to know for sure the state you are in when reaching this place. The risk of making wrong/unfounded assumptions is hence quite big. Additional information and quotes: E. Dijkstra wrote an early essay about the topic already in 1968: " Go To Statement Considered Harmful " Brian W. Kernighan & Dennis M. Ritchie wrote in The C Programming Language: C provides the infinitely-abusable goto statement and labels to branch to. Formally the goto is never necessary, and in practice it is almost always easy to write code without it. (...) Nonetheless we will suggest a few situations where goto 's may find a place. The most common use is to abandon processing in some deeply nested structures, such as breaking out of two loops at once. (...) Although we are not dogmatic about the matter, it does seem that goto statements should be used sparingly, if at all . James Gosling & Henry McGilton wrote in their 1995 Java language environment white paper : No More Goto Statements Java has no goto statement. Studies illustrated that goto is (mis)used more often than not simply “because it's there”. Eliminating goto led to a simplification of the language (...) Studies on approximately 100,000 lines of C code determined that roughly 90 percent of the goto statements were used purely to obtain the effect of breaking out of nested loops. As mentioned above, multi-level break and continue remove most of the need for goto statements. Bjarne Stroustrup defines goto in his glossary in these inviting terms: goto - the infamous goto. Primarily useful in machine generated C++ code. When could goto be used? Like K&R I'm not dogmatic about gotos. I admit that there are situations where goto could ease one's life. Typically, in C, goto allows multilevel loop exit, or error handling that requires reaching an appropriate exit point that frees/unlocks all the resources that were allocated so far (i.e. multiple allocations in sequence mean multiple labels). This article quantifies the different uses of the goto in the Linux kernel. Personally I prefer to avoid it and in 10 years of C, I used at most 10 gotos. I prefer to use nested if s, which I think are more readable. 
When this would lead to too deep nesting, I'd opt either to decompose my function into smaller parts, or use a boolean indicator in cascade. Today's optimizing compilers are clever enough to generate almost the same code as the version with goto . The use of goto heavily depends on the language: In C++, proper use of RAII causes the compiler to automatically destroy objects that go out of scope, so that the resources/lock will be cleaned anyway, and there is no real need for goto any more. In Java there's no need for goto (see Java's author quote above and this excellent Stack Overflow answer ): the garbage collector cleans up the mess, and break , continue , and try / catch exception handling cover all the cases where goto could be helpful, but in a safer and better manner. Java's popularity proves that the goto statement can be avoided in a modern language. Zoom on the famous SSL goto fail vulnerability Important Disclaimer: in view of the fierce discussion in the comments, I want to clarify that I don't pretend that the goto statement is the only cause of this bug. I don't pretend that without goto there would be no bug. I just want to show that a goto can be involved in a serious bug. I don't know how many serious bugs are related to goto in the history of programming: details are often not communicated. However there was a famous Apple SSL bug that weakened the security of iOS. The statement that led to this bug was a wrong goto statement. Some argue that the root cause of the bug was not the goto statement in itself, but a wrong copy/paste, a misleading indentation, missing curly braces around the conditional block, or perhaps the working habits of the developer. I can neither confirm nor refute any of them: all these arguments are plausible hypotheses and interpretations. Nobody really knows. ( meanwhile, the hypothesis of a merge that went wrong as someone suggested in the comments seems to be a very good candidate in view of some other indentation inconsistencies in the same function ). The only objective fact is that a duplicated goto caused the function to exit prematurely. Looking at the code, the only other single statement that could have caused the same effect would have been a return. The error is in function SSLEncodeSignedServerKeyExchange() in this file : if ((err = ReadyHash(&SSLHashSHA1, &hashCtx)) != 0) goto fail; if ((err =...) !=0) goto fail; if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0) goto fail; goto fail; // <====OUCH: INDENTATION MISLEADS: THIS IS UNCONDITIONAL!! if (...) goto fail; ... // Do some cryptographic operations here fail: ... // Free resources to process error Indeed curly braces around the conditional block could have prevented the bug: it would have led either to a syntax error at compilation (and hence a correction) or to a redundant harmless goto. By the way, GCC 6 would be able to spot these errors thanks to its optional warning to detect inconsistent indentation. But in the first place, all these gotos could have been avoided with more structured code. So goto is at least indirectly a cause of this bug. There are at least two different ways that could have avoided it: Approach 1: if clause or nested if s Instead of testing lots of conditions for error sequentially, and each time sending to a fail label in case of problem, one could have opted for executing the cryptographic operations in an if -statement that would do it only if there was no wrong pre-condition: if ((err = ReadyHash(&SSLHashSHA1, &hashCtx)) == 0 && (err = ...) 
== 0 ) && (err = ReadyHash(&SSLHashSHA1, &hashCtx)) == 0) && ... (err = ...) == 0 ) ) { ... // Do some cryptographic operations here } ... // Free resources Approach 2: use an error accumulator This approach is based on the fact that almost all the statements here call some function to set an err error code, and execute the rest of the code only if err was 0 (i.e., the function executed without error). A nice, safe and readable alternative is: bool ok = true; ok = ok && (err = ReadyHash(&SSLHashSHA1, &hashCtx)) == 0; ok = ok && (err = NextFunction(...)) == 0; ... ok = ok && (err = ...) == 0; ... // Free resources Here, there is not a single goto: no risk of jumping too quickly to the failure exit point. And visually it would be easy to spot a misaligned line or a forgotten ok && . This construct is more compact. It is based on the fact that in C, the second part of a logical and ( && ) is evaluated only if the first part is true. In fact, the assembler generated by an optimizing compiler is almost equivalent to the original code with gotos: the optimizer detects the chain of conditions very well and generates code which, at the first non-null return value, jumps to the end ( online proof ). You could even envisage a consistency check at the end of the function that could, during the testing phase, identify mismatches between the ok flag and the error code. assert( (ok==false && err!=0) || (ok==true && err==0) ); Mistakes such as a ==0 inadvertently replaced with a !=0 or logical connector errors would easily be spotted during the debugging phase. As said: I don't pretend that alternative constructs would have avoided any bug. I just want to say that they could have made the bug more difficult to occur.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/334417", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/136084/" ] }
334,488
Several teams at my company practice a code review workflow I've never seen before. I am trying to understand the thinking behind it, with the idea that there's value in making the whole company consistent. (I contribute to multiple codebases and have been tripped up by the differences in the past.) Code author submits a pull request Reviewer examines the code If the reviewer approves, they leave a comment along the lines of "Looks good, feel free to merge" If the reviewer has concerns, they leave a comment like "Please fix minor issues X and Y, then merge" (For major changes, return to step 2) The code author makes changes if necessary, and then merges his or her own pull request I have the following concerns: In the case of approval at step 3, this workflow creates a seemingly-unnecessary roundtrip to the pull request author. The reviewer, who is already looking at the code, could just merge it immediately. In the case of changes being requested at step 3, the agency to merge the pull request now rests solely with the PR's author. No one besides the author will look at the changes prior to merging. What are some other advantages or disadvantages to this workflow? Is this workflow common on other engineering teams?
In the first case, it's usually a courtesy. In most organizations, merges kick off a series of automated tests which must be dealt with promptly if they fail. Especially if there was a significant delay between when a pull request was submitted and when it was reviewed, it's polite to allow it to be merged on the author's timetable, so they have time to deal with any unexpected fallout. The easiest way to do that is to let them merge it themselves. Also, sometimes the author becomes aware of reasons later that a pull request shouldn't be merged yet. Maybe another developer's PR is higher priority and would cause conflicts. Maybe she thought of an uncovered use case. Maybe a review comment triggered a brainstorm that needs further investigation before the issue is fully satisfied. The author knows the most about the code, and it makes sense to give him or her the last word about when it gets merged. On the second point, that's just a matter of trust. If you can't trust people to fix minor issues without being double-checked, they shouldn't be working for you. If the issue is big enough that it will need another review after the fix, then trust reviewers to ask for one. That being said, I do occasionally merge other authors' pull requests, but it's usually either very simple changes, or from external sources, where I personally take responsibility for shepherding through any test automation failures.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/334488", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/189386/" ] }
334,528
According to Wikipedia, the 90 / 10 rule of program optimization states that “90% of a program execution time is spent in executing 10% of the code” (see the second paragraph here ). I really don't understand this. What exactly does this mean? How can 90% of the execution time be spent only executing 10% of the code? What about the other 90% of the code then? How can they be executed in just 10% of the time?
There are two basic principles in play here: Some code is executed much more often than other code. For example, some error handling code might never be used. Some code will be executed only when you start your program. Other code will be executed over and over while your program runs. Some code takes much longer to run than other code. For example, a single line that runs a query on a database, or pulls a file from the internet will probably take longer than millions of mathematical operations. The 90/10 rule isn't literally true. It varies by program (and I doubt there is any basis to the specific numbers 90 and 10 at all; someone probably pulled them out of thin air). But the point is, if you need your program to run faster, probably only a small number of lines is significant to making that happen. Identifying the slow parts of your software is often the biggest part of optimisation. This is an important insight, and it means that decisions that seem counterintuitive to a new developer can often be correct. For example: There is lots of code that it is not worth your time to make "better" , even if it is doing things in a dumb, simplistic way. Could you write a more efficient search algorithm for application XYZ? Yes, but actually a simple comparison of every value takes a trivial amount of time, even though there are thousands of values. So it's just not worth it. It can be tough for new developers to avoid unnecessary optimisation, because in their degree program so much time was spent on writing the "correct" (meaning most efficient) algorithm. But in the real world, the correct algorithm is any one that works and runs fast enough. Changes that make your code much longer and more complex may still be a performance win. For example, in application FOO it may be worth adding hundreds of lines of new logic, just to avoid a single database call.
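A quick way to see this effect on your own code is to profile it and look at how few functions dominate the time. A minimal sketch with Python's built-in cProfile (the function names are invented):

    import cProfile

    def rarely_called_setup():
        return list(range(1000))          # runs once, cost is negligible

    def hot_loop(data):
        total = 0
        for _ in range(10_000):           # runs over and over: this is the "10%"
            for x in data:
                total += x
        return total

    def main():
        data = rarely_called_setup()
        return hot_loop(data)

    cProfile.run("main()", sort="cumulative")
    # The report shows nearly all cumulative time inside hot_loop, even though
    # it is only a small fraction of the lines in the file.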
{ "source": [ "https://softwareengineering.stackexchange.com/questions/334528", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/164805/" ] }
334,624
In my experience, many of the projects I have read in the past didn't have relationship definitions in the database, instead they only defined them in the source code. So I'm wondering what are the advantages/disadvantages of defining relations between tables in the database and in source code? And the broader question is about other advanced features in modern databases like cascade, triggers, procedures... There are some points in my thoughts: In the database: Correct data from design. Prevent application errors which can cause invalid data. Reduce network round trip to application when inserting/updating data as application has to make more query(s) to check data integrity. In source code: More flexible. Better when scaling to multiple databases, as sometimes the relation can be cross-database. More control over data integrity. The database doesn't have to check every time the application modifies data (complexity can be O(n) or O(n log n) (?)). Instead, it's delegated to application. And I think handling data integrity in the application will lead to more verbose error messages than using the database. Eg: when you create an API server, if you define the relations in the database, and something goes wrong (like the referenced entity doesn't exist), you will get an SQL Exception with a message. The simple way will be to return 500 to the client that there is an "Internal server error" and the client will have no idea what is going wrong. Or the server can parse the message to figure out what's wrong, which is an ugly, error-prone way in my opinion. If you let the application handle this, the server can generate a more meaningful message to client. Is there anything else? Edit: as Kilian points out, my point about performance & data integrity is very misguided. So I edited to correct my point there. I totally understand that letting the database handle it will be a more efficient and robust approach. Please check the updated question and give some thoughts about it. Edit: thank you everyone. The answers I received all point out that the constraints/relations should be defined in the database. :). I have one more question, as it is quite out of scope of this question, I've just posted it as a separate question: Handle database error for API server . Please leave some insights.
The database doesn't have to check for data integrity every time application modify data. This is a deeply misguided point. Databases were created for precisely this purpose. If you need data integrity checks (and if you think you don't need them, you're probably mistaken), then letting the database handle them is almost certainly more efficient and less error-prone than doing it in application logic.
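As a small demonstration of the point, here is a sketch using Python's built-in sqlite3 module (table names are hypothetical): once the foreign-key constraint lives in the database, bad data is rejected no matter which application, or which bug, tries to write it.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA foreign_keys = ON")      # SQLite needs this enabled per connection
    conn.execute("CREATE TABLE author (id INTEGER PRIMARY KEY)")
    conn.execute("""CREATE TABLE book (
                      id INTEGER PRIMARY KEY,
                      author_id INTEGER NOT NULL REFERENCES author(id))""")

    conn.execute("INSERT INTO author (id) VALUES (1)")
    conn.execute("INSERT INTO book (id, author_id) VALUES (10, 1)")        # fine

    try:
        conn.execute("INSERT INTO book (id, author_id) VALUES (11, 999)")  # no such author
    except sqlite3.IntegrityError as e:
        print("rejected by the database:", e)     # FOREIGN KEY constraint failed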
{ "source": [ "https://softwareengineering.stackexchange.com/questions/334624", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/242580/" ] }
334,675
The following code examples provide context to my question. The Room class is initialized with a delegate. In the first implementation of the Room class, there are no guards against delegates that throw exceptions. Such exceptions will bubble up to the North property, where the delegate is evaluated (note: the Main() method demonstrates how a Room instance is used in client code): public sealed class Room { private readonly Func<Room> north; public Room(Func<Room> north) { this.north = north; } public Room North { get { return this.north(); } } public static void Main(string[] args) { Func<Room> evilDelegate = () => { throw new Exception(); }; var kitchen = new Room(north: evilDelegate); var room = kitchen.North; //<----this will throw } } Being that I'd rather fail upon object creation rather than when reading the North property, I change the constructor to private, and introduce a static factory method named Create(). This method catches the exception thrown by the delegate, and throws a wrapper exception, having a meaningful exception message: public sealed class Room { private readonly Func<Room> north; private Room(Func<Room> north) { this.north = north; } public Room North { get { return this.north(); } } public static Room Create(Func<Room> north) { try { north?.Invoke(); } catch (Exception e) { throw new Exception( message: "Initialized with an evil delegate!", innerException: e); } return new Room(north); } public static void Main(string[] args) { Func<Room> evilDelegate = () => { throw new Exception(); }; var kitchen = Room.Create(north: evilDelegate); //<----this will throw var room = kitchen.North; } } Does the try-catch block render the Create() method impure?
Yes. That is effectively an impure function. It creates a side-effect: program execution continues somewhere other than the place to which the function is expected to return. To make it a pure function, return an actual object that encapsulates the expected value from the function and a value indicating a possible error condition, like a Maybe object or a Unit of Work object.
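As a rough sketch of that suggestion (the Result type below is invented for illustration and is not part of the question's code), the factory can carry the failure in its return value instead of letting it escape as control flow. Note that this still invokes an arbitrary delegate, so it is not pure in the strictest sense, but the error now travels through the return path:

```csharp
public sealed class Result<T>
{
    public T Value { get; }
    public Exception Error { get; }
    public bool IsSuccess => Error == null;

    private Result(T value, Exception error) { Value = value; Error = error; }

    public static Result<T> Success(T value) => new Result<T>(value, null);
    public static Result<T> Failure(Exception error) => new Result<T>(default(T), error);
}

// Intended to live on Room, next to the private constructor:
public static Result<Room> TryCreate(Func<Room> north)
{
    try
    {
        north?.Invoke();
        return Result<Room>.Success(new Room(north));
    }
    catch (Exception e)
    {
        return Result<Room>.Failure(
            new Exception("Initialized with an evil delegate!", e));
    }
}
```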
{ "source": [ "https://softwareengineering.stackexchange.com/questions/334675", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/249648/" ] }
334,769
I've been told that in functional programming one is not supposed to throw and/or observe exceptions. Instead an erroneous calculation should be evaluated as a bottom value. In Python (or other languages that do not fully encourage functional programming) one can return None (or another alternative treated as the bottom value, though None doesn't strictly comply with the definition) whenever something goes wrong to "remain pure", but to do so one has to observe an error in the first place, i.e. def fn(*args): try: ... do something except SomeException: return None Does this violate purity? And if so, does it mean, that it is impossible to handle errors purely in Python? Update In his comment Eric Lippert reminded me of another way to treat exceptions in FP. Though I've never seen that done in Python in practice, I played with it back when I studied FP a year ago. Here any optional -decorated function returns Optional values, which can be empty, for normal outputs as well as for a specified list of exceptions (unspecified exceptions still can terminate the execution). Carry creates a delayed evaluation, where each step (delayed function call) either gets a nonempty Optional output from the previous step and simply passes it on, or otherwise evaluates itself passing a new Optional . In the end the final value is either normal or Empty . Here the try/except block is hidden behind a decorator, so the specified exceptions can be regarded as part of the return type signature. class Empty: def __repr__(self): return "Empty" class Optional: def __init__(self, value=Empty): self._value = value @property def value(self): return Empty if self.isempty else self._value @property def isempty(self): return isinstance(self._value, BaseException) or self._value is Empty def __bool__(self): raise TypeError("Optional has no boolean value") def optional(*exception_types): def build_wrapper(func): def wrapper(*args, **kwargs): try: return Optional(func(*args, **kwargs)) except exception_types as e: return Optional(e) wrapper.__isoptional__ = True return wrapper return build_wrapper class Carry: """ >>> from functools import partial >>> @optional(ArithmeticError) ... def rdiv(a, b): ... return b // a >>> (Carry() >> (rdiv, 0) >> (rdiv, 0) >> partial(rdiv, 1))(1) 1 >>> (Carry() >> (rdiv, 0) >> (rdiv, 1))(1) 1 >>> (Carry() >> rdiv >> rdiv)(0, 1) is Empty True """ def __init__(self, steps=None): self._steps = tuple(steps) if steps is not None else () def _add_step(self, step): fn, *step_args = step if isinstance(step, Sequence) else (step, ) return type(self)(steps=self._steps + ((fn, step_args), )) def __rshift__(self, step) -> "Carry": return self._add_step(step) def _evaluate(self, *args) -> Optional: def caller(carried: Optional, step): fn, step_args = step return fn(*(*step_args, *args)) if carried.isempty else carried return reduce(caller, self._steps, Optional()) def __call__(self, *args): return self._evaluate(*args).value
First of all, let's clear up some misconceptions. There is no "bottom value". The bottom type is defined as a type that is a subtype of every other type in the language. From this, one can prove (in any interesting type system, at least) that the bottom type has no values - it is empty. So there is no such thing as a bottom value. Why is the bottom type useful? Well, knowing that it's empty lets us make some deductions about program behavior. For example, if we have the function: def do_thing(a: int) -> Bottom: ... we know that do_thing can never return, since it would have to return a value of type Bottom. Thus, there are only two possibilities: do_thing does not halt, or do_thing throws an exception (in languages with an exception mechanism). Note that I created a type Bottom which does not actually exist in the Python language. None is a misnomer; it is actually the unit value, the only value of the unit type, which is called NoneType in Python (do type(None) to confirm for yourself). Now, another misconception is that functional languages do not have exceptions. This isn't true either. SML, for example, has a very nice exception mechanism. However, exceptions are used much more sparingly in SML than in e.g. Python. As you've said, the common way to indicate some kind of failure in functional languages is by returning an Option type. For example, we would create a safe division function as follows: def safe_div(num: int, den: int) -> Option[int]: return Some(num/den) if den != 0 else None Unfortunately, since Python doesn't actually have sum types, this isn't a viable approach. You could return None as a poor man's option type to signify failure, but this is really no better than returning Null. There is no type safety. So I would advise following the language's conventions in this case. Python uses exceptions idiomatically to handle control flow (which is bad design, IMO, but it's standard nonetheless), so unless you're only working with code you wrote yourself, I'd recommend following standard practice. Whether this is "pure" or not is irrelevant.
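For completeness, the closest that plain Python gets to the SML-style approach without third-party libraries is typing.Optional. This is only a sketch of that convention, not a real sum type, and the names are illustrative:

```python
from typing import Optional

def safe_div(num: int, den: int) -> Optional[int]:
    """Return num // den, or None when the division is undefined."""
    return num // den if den != 0 else None

result = safe_div(10, 0)
if result is None:      # the caller must spell the check out explicitly...
    print("division by zero")
else:
    print(result)       # ...but nothing forces them to; that is the missing type safety
```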
{ "source": [ "https://softwareengineering.stackexchange.com/questions/334769", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/184339/" ] }
334,834
I'm trying to convince my team lead to allow using exceptions in C++ instead of returning a bool isSuccessful or an enum with the error code. However, I can't counter this criticism of his. Consider this library: class OpenFileException : public std::runtime_error { public: OpenFileException() : std::runtime_error("failed to open file") {} }; void B(); void C(); /** Does blah and blah. */ void B() { // The developer of B() either forgot to handle C()'s exception or // chooses not to handle it and let it go up the stack. C(); } /** Does blah blah. * * @raise OpenFileException When we failed to open the file. */ void C() { throw OpenFileException(); } Consider a developer calling the B() function. He checks its documentation and sees that it mentions no exceptions, so he doesn't try to catch anything. This code could crash the program in production. Consider a developer calling the C() function. He doesn't check the documentation, so he doesn't catch any exceptions. The call is unsafe and could crash the program in production. But if we check for errors in this way: void old_C(myenum &return_code); A developer using that function will be warned by the compiler if he doesn't provide that argument, and he'd say "Aha, this returns an error code I must check for." How can I use exceptions safely, so that there's some sort of contract?
This is a legitimate criticism of exceptions. They are often less visible than simple error handling such as returning a code. And there is no easy way to enforce a "contract". Part of the point is to enable you to let exceptions be caught at a higher level (if you have to catch every exception at every level, how different is it from returning an error code, anyway?). And this means that your code could be called by some other code that doesn't handle it appropriately. Exceptions do have downsides; you have to make a case based on cost-benefit. I found these two articles helpful: The necessity of exceptions and Everything wrong with exceptions. Also, this blog post offers opinions of many experts on exceptions, with a focus on C++ . While expert opinion seems to lean in favor of exceptions, it is far from a clear consensus. As for convincing your team lead, this might not be the right battle to pick. Especially not with legacy code. As noted in the second link above: Exceptions cannot be propagated through any code which is not exception safe. The use of exceptions thus implies that all code in the project must be exception safe. Adding a little bit of code which uses exceptions to a project that mainly does not is probably not going to be an improvement. Not using exceptions in otherwise well-written code is far from a catastrophic problem; it might not be a problem at all, depending on the application and which expert you ask. You have to pick your battles. This is probably not an argument I would spend effort on--at least not until a new project is started. And even if you have a new project, is it going to use or be used by any legacy code?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/334834", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/121394/" ] }
334,910
Let's assume I want to write a function that concatenates two strings in C. The way I would write it is: void concat(char s[], char t[]){ int i = 0; int j = 0; while (s[i] != '\0'){ i++; } while (t[j] != '\0'){ s[i] = t[j]; i++; j++; } s[i] = '\0'; } However, K&R in their book implemented it differently, particularly including as much in the condition part of the while loop as possible: void concat(char s[], char t[]){ int i, j; i = j = 0; while (s[i] != '\0') i++; while ((s[i++]=t[j++]) != '\0'); } Which way is preferred? Is it encouraged or discouraged to write code the way K&R do? I believe my version would be easier to read by other people.
Always prefer clarity over cleverness. In yesteryear, the best programmer was the one whose code nobody could understand. "I cannot make sense of his code, he must be a genius", they said. Nowadays the best programmer is the one whose code anyone can understand. Computer time is cheaper now than a programmer's time. Any fool can write code that a computer can understand. Good programmers write code that humans can understand. (M. Fowler) So, no doubt, I'd go for your version. And that is my definitive answer.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/334910", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/185726/" ] }
334,970
Is there any purpose for declaring an init() method for a type? I'm not asking whether we should prefer init() over a constructor or how to avoid declaring init(). I'm asking if there is any rationale behind declaring an init() method (seeing how common it is) or if it's a code smell and should be avoided. The init() idiom is quite common, but I have yet to see any real benefit. I'm talking about types that encourage initialization via a method: class Demo { public void init() { //... } } When will this ever be of use in production code? I feel it may be a code smell since it suggests the constructor does not fully initialize the object, resulting in a partially created object. The object should not exist if its state isn't set. This makes me believe it may be part of some kind of technique used to speed up production, in the sense of enterprise applications. It is the only logical reason I can think of for having such an idiom; I'm just not sure how it would be beneficial if so.
Yes, it's a code smell. A code smell isn't something that necessarily always needs to get removed. It's something that makes you take a second look. Here you have an object in two fundamentally different states: pre-init and post-init. Those states have different responsibilities, different methods that are allowed to be called, and different behavior. It's effectively two different classes. If you physically make them two separate classes you will statically remove a whole class of potential bugs, at the cost of maybe making your model not match the "real world model" quite as closely. You usually name the first one Config or Setup or something like that. So next time, try refactoring your construct-init idioms into two-class models and see how it turns out for you.
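A minimal sketch of that two-class split in Java (the names and the port field are invented for illustration; each class would live in its own file):

```java
// Pre-init state: mutable, cheap to pass around, not yet "running".
public final class ServerConfig {
    private int port = 8080;

    public ServerConfig port(int port) {
        this.port = port;
        return this;
    }

    // The only way to obtain a Server is through a fully specified config.
    public Server start() {
        return new Server(port);
    }
}

// Post-init state: every instance is valid from the moment it exists.
public final class Server {
    private final int port;

    Server(int port) {
        this.port = port;
    }

    public int port() {
        return port;
    }
}
```

A caller then writes Server server = new ServerConfig().port(9090).start(); and there is no window in which a half-initialized Server can be observed.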
{ "source": [ "https://softwareengineering.stackexchange.com/questions/334970", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/139941/" ] }
335,005
I was trying to find alternatives to the use of global variable in some legacy code. But this question is not about the technical alternatives, I'm mainly concerned about the terminology . The obvious solution is to pass a parameter into the function instead of using a global. In this legacy codebase that would mean that I have to change all functions in the long call chain between the point where the value will eventually be used and the function that receives the parameter first. higherlevel(newParam)->level1(newParam)->level2(newParam)->level3(newParam) where newParam was previously a global variable in my example, but it could have been a previously hardcoded value instead. The point is that now the value of newParam is obtained at higherlevel() and has to "travel" all the way to level3() . I was wondering if there was a name(s) for this kind of situation/pattern where you need to add a parameter to many functions that just "pass" the value unmodified. Hopefully, using the proper terminology will allow me to find more resources about solutions for redesign and describe this situation to colleagues.
The data itself is called "tramp data" . It is a "code smell", indicating that one piece of code is communicating with another piece of code at a distance, through intermediaries. Increases rigidity of code, especially in the call chain. You are much more constrained in how you refactor any method in the call chain. Distributes knowledge about data/methods/architecture to places that don't care in the least about it. If you need to declare the data that is just passing through, and the declaration requires a new import, you have polluted the name space. Refactoring to remove global variables is difficult, and tramp data is one method of doing so, and often the cheapest way. It does have its costs.
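A tiny illustration of the smell (function and parameter names are invented): the intermediate functions have no interest in the value, they only ferry it.

```python
def higher_level(new_param):
    level1(new_param)

def level1(new_param):   # never uses new_param, only passes it along
    level2(new_param)

def level2(new_param):   # never uses new_param, only passes it along
    level3(new_param)

def level3(new_param):
    print(f"finally consumed here: {new_param}")
```

One common follow-up refactoring, where it fits the design, is to bundle such values into a context or configuration object that is created at the top level and handed directly (for example via constructor injection) to the code that actually needs it, so the intermediate signatures stop changing every time a new value has to travel through.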
{ "source": [ "https://softwareengineering.stackexchange.com/questions/335005", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/87597/" ] }
335,068
Let's have this C# class (it would be almost the same in Java): public class MyClass { public string A {get; set;} public string B {get; set;} public override bool Equals(object obj) { var item = obj as MyClass; if (item == null || this.A == null || item.A == null) { return false; } return this.A.Equals(item.A); } public override int GetHashCode() { return A != null ? A.GetHashCode() : 0; } } As you can see, equality of two instances of MyClass depends on A only. So there can be two instances that are equal but hold different pieces of information in their B property. In the standard collection library of many languages (including C# and Java, of course) there is a Set (HashSet in C#), which is a collection that can hold at most one item from each set of equal instances. One can add items, remove items and check if the set contains an item. But why is it impossible to get a particular item from the set? HashSet<MyClass> mset = new HashSet<MyClass>(); mset.Add(new MyClass {A = "Hello", B = "Bye"}); //I can do this if (mset.Contains(new MyClass {A = "Hello", B = "See you"})) { //something } //But I cannot do this, because Get does not exist!!! MyClass item = mset.Get(new MyClass {A = "Hello", B = "See you"}); Console.WriteLine(item.B); //should print Bye The only way to retrieve my item is to iterate over the whole collection and check all items for equality. However, this takes O(n) time instead of O(1)! I haven't found any language that supports get from a set so far. All "common" languages I know (Java, C#, Python, Scala, Haskell...) seem to be designed in the same way: you can add items, but you cannot retrieve them. Is there any good reason why all these languages do not support something that easy and obviously useful? They cannot all be just wrong, right? Are there any languages that do support it? Maybe retrieving a particular item from a set is wrong, but why? There are a few related SO questions: https://stackoverflow.com/questions/7283338/getting-an-element-from-a-set https://stackoverflow.com/questions/7760364/how-to-retrieve-actual-item-from-hashsett
The problem here isn't that HashSet lacks a Get method, it's that your code makes no sense from the perspective of the HashSet type. That Get method is effectively, "get me this value , please", to which the .NET framework folk would sensibly reply, "eh? You already have that value <confused face /> ". If you want to store items and then retrieve them based on matching another slightly different value, then use Dictionary<String, MyClass> as you can then do: var mset = new Dictionary<String, MyClass>(); mset.Add("Hello", new MyClass {A = "Hello", B = "Bye"}); var item = mset["Hello"]; Console.WriteLine(item.B); // will print Bye The information of equality leaks from the encapsulated class. If I wanted to change the set of properties involved in Equals , I would have to change the code outside MyClass ... Well yes, but that's because MyClass runs amok with the principle of least astonishment (POLA). With that equality functionality encapsulated, it is completely reasonable to assume that the following code is valid: HashSet<MyClass> mset = new HashSet<MyClass>(); mset.Add(new MyClass {A = "Hello", B = "Bye"}); if (mset.Contains(new MyClass {A = "Hello", B = "See you"})) { // this code is unreachable. } To prevent this, MyClass needs to be clearly documented as to its odd form of equality. Having done that, it's no longer encapsulated and changing how that equality works would break the open/closed principle. Ergo, it shouldn't change and therefore Dictionary<String, MyClass> is a good solution for this odd requirement.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/335068", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/252180/" ] }
335,210
In a recent project of mine, I defined a class with the following header: public class Node extends ArrayList<Node> { ... } However, after discussing with my CS professor, he stated that the class would both be "horrible for memory" and "bad practice". I have not found the first to be particularly true, and the second to be subjective. My reasoning for this usage is that I had an idea for an object which needed to be defined as something that could have arbitrary depth, where the behavior of an instance could be defined either by a custom implementation or by the behavior of several like objects interacting. This would allow for the abstraction of objects whose physical implementation would be made up of many sub-components interacting.¹ On the other hand, I see how this could be bad practice. The idea of defining something as a list of itself is not simple or physically implementable. Is there any valid reason why I shouldn't use this in my code, considering my use for it? ¹ If I need to explain this further, I would be glad to; I am just attempting to keep this question concise.
Frankly, I don't see the need for inheritance here. It doesn't make sense; Node is an ArrayList of Node? If this is just a recursive data structure, you would simply write something like: public class Node { public String item; public List<Node> children; } Which does make sense; a node has a list of children or descendant nodes.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/335210", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/218643/" ] }
335,241
I often wonder why Java uses :: for method references instead of . , e.g. why write System.out::println instead of System.out.println Of course, one might simply answer: "because the designers decided so". On the other hand, I would have expected the second syntax because the dot is the usual Java syntax for accessing class members. So is there any known special reason for introducing the new :: syntax instead of using the existing . convention for method references?
This is to avoid ambiguity in case the class has a (static) member with the same name as the method (Java allows that). It is easy to see from the code snippet in the Java tutorial about method references: Because this lambda expression invokes an existing method, you can use a method reference instead of a lambda expression: Arrays.sort(rosterAsArray, Person::compareByAge); If the class Person in the above snippet also had a member named compareByAge (of a type appropriate to pass to Arrays.sort), dot notation wouldn't make it possible to tell whether the parameter refers to the method or the member.
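A hypothetical sketch of the clash being described (this version of Person is invented for illustration; the tutorial's class only has the method):

```java
import java.util.Comparator;

public class Person {
    int age;

    // A static field and a static method may legally share a name in Java,
    // because fields and methods live in different namespaces.
    public static Comparator<Person> compareByAge =
            (a, b) -> Integer.compare(a.age, b.age);

    public static int compareByAge(Person a, Person b) {
        return Integer.compare(a.age, b.age);
    }
}

// With a dot, Person.compareByAge already means the field (a Comparator).
// Person::compareByAge can therefore only ever mean the method reference,
// so Arrays.sort(rosterAsArray, Person::compareByAge) stays unambiguous.
```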
{ "source": [ "https://softwareengineering.stackexchange.com/questions/335241", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/29020/" ] }
335,247
Let's assume I have two classes that look like this (the first block of code and the general problem are related to C#): class A { public int IntProperty { get; set; } } class B { public int IntProperty { get; set; } } These classes cannot be changed in any way (they are part of a 3rd party assembly). Therefore, I cannot make them implement the same interface, or inherit the same class that would then contain IntProperty. I want to apply some logic on the IntProperty property of both classes, and in C++ I could use a template class to do that quite easily: template <class T> class LogicToBeApplied { public: T CreateElement(); }; template <class T> T LogicToBeApplied<T>::CreateElement() { T retVal; retVal.IntProperty = 50; return retVal; } And then I could do something like this: LogicToBeApplied<A> classALogic; LogicToBeApplied<B> classBLogic; A classAElement = classALogic.CreateElement(); B classBElement = classBLogic.CreateElement(); That way I could create a single generic factory class that would work for both A and B. However, in C#, I have to write two classes with two different where clauses even though the code for the logic is exactly the same: public class LogicAToBeApplied<T> where T : A, new() { public T CreateElement() { T retVal = new T(); retVal.IntProperty = 50; return retVal; } } public class LogicBToBeApplied<T> where T : B, new() { public T CreateElement() { T retVal = new T(); retVal.IntProperty = 50; return retVal; } } I know that if I want to have different classes in the where clause, they need to be related, i.e. inherit the same class, if I want to apply the same code to them in the sense that I described above. It is just that it is very annoying to have two completely identical methods. I also do not want to use reflection because of the performance issues. Can somebody suggest some approach where this can be written in a more elegant fashion?
Add a proxy interface (sometimes called an adapter, occasionally with subtle differences), implement LogicToBeApplied in terms of the proxy, then add a way to construct an instance of this proxy from two lambdas: one for the property get and one for the set. interface IProxy { int Property { get; set; } } class LambdaProxy : IProxy { private Func<int> getFunction; private Action<int> setFunction; public int Property { get { return getFunction(); } set { setFunction(value); } } public LambdaProxy(Func<int> getter, Action<int> setter) { getFunction = getter; setFunction = setter; } } Now, whenever you need to pass in an IProxy but have an instance of the third party classes, you can just pass in some lambdas: A a = new A(); B b = new B(); IProxy proxyA = new LambdaProxy(() => a.IntProperty, (val) => a.IntProperty = val); IProxy proxyB = new LambdaProxy(() => b.IntProperty, (val) => b.IntProperty = val); proxyA.Property = 12; // mutates the proxied `a` as well Additionally, you can write simple helpers to construct LambdaProxy instances from instances of A or B. They can even be extension methods to give you a "fluent" style: public static class ProxyExtension { public static IProxy Proxied(this A a) { return new LambdaProxy(() => a.IntProperty, (val) => a.IntProperty = val); } public static IProxy Proxied(this B b) { return new LambdaProxy(() => b.IntProperty, (val) => b.IntProperty = val); } } And now construction of proxies looks like this: IProxy proxyA = new A().Proxied(); IProxy proxyB = new B().Proxied(); As for your factory, I'd see if you can refactor it into a "main" factory method that accepts an IProxy and performs all logic on it and other methods that just pass in new A().Proxied() or new B().Proxied(): public class LogicToBeApplied { public A CreateA() { A a = new A(); InitializeProxy(a.Proxied()); return a; // or maybe return the proxy if you'd rather use that } public B CreateB() { B b = new B(); InitializeProxy(b.Proxied()); return b; } private void InitializeProxy(IProxy proxy) { proxy.Property = 50; } } There's no way to do the equivalent of your C++ code in C# because C++ templates rely on structural typing. As long as two classes have the same method name and signature, in C++ you can call that method generically on both of them. C# has nominal typing - the name of a class or interface is part of its type. Therefore, the classes A and B cannot be treated the same in any capacity unless an explicit "is a" relationship is defined through either inheritance or interface implementation. If the boilerplate of implementing these methods per class is too much, you can write a function that takes an object and reflectively builds a LambdaProxy by looking for a specific property name: public class ReflectiveProxier { public object proxyReflectively(object proxied) { PropertyInfo prop = proxied.GetType().GetProperty("IntProperty"); return new LambdaProxy( () => (int)prop.GetValue(proxied), (val) => prop.SetValue(proxied, val)); } } This fails abysmally when given objects of incorrect type; reflection inherently introduces the possibility of failures the C# type system cannot prevent. Luckily you can avoid reflection until the maintenance burden of the helpers becomes too great because you're not required to modify the IProxy interface or the LambdaProxy implementation to add the reflective sugar.
Part of the reason this works is that LambdaProxy is "maximally generic"; it can adapt any value that implements the "spirit" of the IProxy contract because the implementation of LambdaProxy is completely defined by the given getter and setter functions. It even works if the classes have different names for the property, or different types that are sensibly and safely representable as int s, or if there's some way to map the concept that Property is supposed to represent to any other features of the class. The indirection provided by the functions gives you maximal flexibility.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/335247", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/235135/" ] }
335,321
I started a new job recently where I am working on a very large application (15M loc). In my previous job we had a similarly large application but (for better or for worse) we used OSGi, which meant the application was broken down into lots of microservices that could be independently changed, compiled, and deployed. The new application is just one large code base, with maybe a couple of .dlls. So I need to change the interface of this class, because that's what my boss asked me to do. They initially wrote it with some assumptions that didn't generalize too well, and for a while they have been avoiding the problem of refactoring because it's so tightly coupled. I changed the interface and now there are over 25000 errors. Some of the errors are in classes with important sounding names like "XYZPriceCalculator" which reaaally should not break. But I can't start up the application to check if it's working until all the errors are resolved. And many of the unit tests either directly reference that interface, or are coupled to base classes which reference that interface, so just fixing those is a pretty huge task in itself. Plus, I don't really know how all these pieces fit together, so even if I could get it to start, I don't really know what it would look like if things were broken. I never really faced a problem like this at my last job. What do I do?
25000 errors basically means "don't touch that". Change it back. Create a new class that has the desired interface and slowly move the consumers of the class to the new one. Depending on the language, you can mark the old class as deprecated, which may cause all sorts of compiler warnings, but won't actually break your build. Unfortunately these things happen in older code bases. There's not much you can do about it except slowly make things better. When you create the new classes, make sure you properly test them and create them using SOLID principles so they will be easier to change in the future.
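For example, in C# (the question mentions .dlls) the hand-off can be made explicit with the Obsolete attribute; the names below are placeholders rather than the real interface:

```csharp
// The old type keeps compiling exactly as before, so the 25 000 errors never happen.
[Obsolete("Use IPriceCalculatorV2; this version assumes a single currency.")]
public interface IPriceCalculator
{
    decimal Calculate(int quantity);
}

// New and migrated consumers move to the corrected abstraction at their own pace.
public interface IPriceCalculatorV2
{
    decimal Calculate(int quantity, string currencyCode);
}
```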
{ "source": [ "https://softwareengineering.stackexchange.com/questions/335321", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/252528/" ] }
335,504
Languages like C, Java, and C++ all require parenthesis around an entire expression when used in an if , while , or switch . if (true) { // Do something } as opposed to if true { // Do something } This seems odd to me because the parenthesis are redundant. In this example, true is a single expression on its own. The parenthesis do not transform its meaning in any way I know of. Why does this odd syntax exist and why is it so common? Is there a benefit to it I'm not aware of?
There needs to be some way of telling where the condition ends and the branch begins. There are many different ways of doing that. In some languages, there are no conditionals at all, e.g. in Smalltalk, Self, Newspeak, Io, Ioke, Seph, and Fancy. Conditional branching is simply implemented as a normal method like any other method. The method is implemented on boolean objects and gets called on a boolean. That way, the condition is simply the receiver of the method, and the two branches are two arguments, e.g. in Smalltalk: aBooleanExpression ifTrue: [23] ifFalse: [42]. In case you are more familiar with Java, this is equivalent to the following: aBooleanExpression.ifThenElse(() -> 23, () -> 42); In the Lisp family of languages, the situation is similar: conditionals are just normal functions (actually, macros) and the first argument is the condition, the second and third arguments are the branches, so they are just normal function arguments, and there is nothing special needed to delimit them: (if aBooleanExpression 23 42) Some languages use keywords as delimiters, e.g. Algol, Ada, BASIC, Pascal, Modula-2, Oberon, Oberon-2, Active Oberon, Component Pascal, Zonnon, Modula-3: IF aBooleanExpression THEN RETURN 23 ELSE RETURN 42; In Ruby, you can use either a keyword or an expression separator (semicolon or newline): if a_boolean_expression then 23 else 42 end if a_boolean_expression; 23 else 42 end # non-idiomatic, the minimum amount of whitespace required syntactically if a_boolean_expression 23 else 42 end # idiomatic, although only the first newline is required syntactically if a_boolean_expression 23 else 42 end Go requires the branches to be blocks and doesn't allow expressions or statements, which makes the curly braces mandatory. Therefore, parentheses aren't required, although you can add them if you want; Perl6 and Rust are similar in this regard: if aBooleanExpression { return 23 } else { return 42 } Some languages use other non-alphanumeric characters to delimit the condition, e.g. Python: if aBooleanExpression: return 23 else: return 42 The bottom line is: you need some way of telling where the condition ends and the branch begins. There are many ways of doing so, parentheses are just one of them.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/335504", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/252842/" ] }
335,512
When trying to learn about proper comment practices, I found a lot of conflicting opinions, and it's obviously a very subjective topic. So I'm not going to ask "Should I comment, or should I not?" The question I'd like to pose is, for me, as a self taught developer interested in applying for programming jobs in the future, a vital one: Of the two strategies "Comment the right code in the right way." - (comments where needed) and "Instead of writing comments, write more readable code." - (if you need comments, your code is bad) are both valid strategies in programming? If I submit an application with code samples including masterfully written code and no comments, are experienced programmers likely to be familiar with that approach? I'm not asking "Will the specific guy reading my application like my coding style", obviously no one can guess that, I'm just looking for a general answer as to whether both practices are common, or if my code will be seen as terrible across the board.
Your question implies a dichotomy that doesn't exist. Should you write cleaner code, if doing so will make it clear enough that you don't have to document it? Absolutely. Should you write comments and otherwise provide documentation, when it will improve the clarity and understanding of the code? Of course. Why are these two things not mutually exclusive? Some of the code that you write will favor better performance over high readability. Architecture is not self-documenting. While code in isolation is generally fairly easy to understand if it's written cleanly, the architecture surrounding it is not always. Design decisions are not always self-evident. What business process or problem the code solves is not always self-evident. In short, you can't write non-trivial code in such a way that it is always completely self-describing. Part of your job is to ensure that the fellow coming after you doesn't require a year to figure out your code, and for that, good documentation is a necessity.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/335512", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/239652/" ] }
335,598
I have a class called Heading that does a few things, but it should also be able to return the opposite of the current heading value, which ultimately has to be used by creating a new instance of the Heading class itself. I can have a simple property called reciprocal to return the opposite heading of the current value and then manually create a new instance of the Heading class, or I can create a method like createReciprocalHeading() to automatically create a new instance of the Heading class and return it to the user. However, one of my colleagues recommended that I just create a class property called reciprocal that returns a new instance of the class itself via its getter method. My question is: Isn't it an anti-pattern for a property of a class to behave like that? I particularly find this less intuitive because: In my mind, a property of a class should not return a new instance of a class, and The name of the property, which is reciprocal, doesn't help the developer fully understand its behaviour without getting help from the IDE or checking the getter signature. Am I being too strict about what a class property should do, or is this a valid concern? I have always tried to manage the state of the class via its fields and properties and its behaviour via its methods, and I fail to see how this fits the definition of a class property.
It's not unknown to have things like Copy() or Clone(), but yes, I think you are right to be worried about this one. Take, for example: h2 = h1.reciprocal h2.Name = "hello world" h1.reciprocal.Name = ? It would be nice to have some warning that the property was a new object each time. You might also assume that: h2.reciprocal == h1 However, if your heading class were an immutable value type, then you would be able to implement these relationships, and reciprocal might be a good name for the operation.
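A rough sketch of that immutable variant in C# (the integer-degrees representation and the modular arithmetic are assumptions about how Heading works):

```csharp
public sealed class Heading
{
    public int Degrees { get; }

    public Heading(int degrees)
    {
        Degrees = ((degrees % 360) + 360) % 360;
    }

    // Returning a new instance is unsurprising here because no instance can
    // ever change; h.Reciprocal.Reciprocal always compares equal to h.
    public Heading Reciprocal => new Heading(Degrees + 180);

    public override bool Equals(object obj) =>
        obj is Heading other && Degrees == other.Degrees;

    public override int GetHashCode() => Degrees;
}
```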
{ "source": [ "https://softwareengineering.stackexchange.com/questions/335598", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/130282/" ] }
335,654
I have seen lots of advice about git branching models, and the most common opinion seems to be that making changes directly on the master branch is a bad idea. One of our co-workers is quite happy making changes directly on the master branch, and despite several conversations, they seem unlikely to change this. At this point in time, I can't convince the co-worker that it is a bad practice to work directly on master, but I would like to understand the things that will conflict with his way of working, so I know when I need to revisit this issue.
There are several problems when commits are pushed directly to master: If you push a work-in-progress state to the remote, master is potentially broken. If another developer starts work on a new feature from master, she starts from a potentially broken state, which slows down development. Different features/bugfixes are not isolated, so the complexity of all ongoing development tasks is combined in one branch, which increases the amount of communication needed between all developers. You cannot do pull requests, which are a very good mechanism for code reviews. You cannot squash commits or change git history in general, as other developers might already have pulled the master branch in the meantime.
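A minimal feature-branch workflow that sidesteps every item on this list looks roughly like the following; the branch name is just an example:

```bash
git checkout master
git pull
git checkout -b feature/invoice-export     # isolate the work-in-progress
# ...commit freely; broken intermediate states never touch master...
git push -u origin feature/invoice-export
# open a pull request, get a review and a green CI run,
# then merge (squashed if you like) into master
```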
{ "source": [ "https://softwareengineering.stackexchange.com/questions/335654", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/253063/" ] }
335,783
I have read a book called Clean Code by Robert C. Martin. In this book I've seen many methods to clean up code like writing small functions, choosing names carefully, etc. It seems by far the most interesting book about clean code I've read. However, today my boss didn't like the way I wrote code after reading this book. His arguments were Writing small functions is a pain because it forces you to move into each small function to see what the code is doing. Put everything in a main big loop even if the main loop is more than 300 lines, it is faster to read. Only write small functions if you have to duplicate code. Don't write a function with the name of the comment, put your complex line of code (3-4 lines) with a comment above; similarly you can modify the failing code directly This is against everything I've read. How do you usually write code? One main big loop, no small functions? The language I use is mainly Javascript. I really have difficulties reading now since I've deleted all my small clearly named functions and put everything in a big loop. However, my boss likes it this way. One example was: // The way I would write it if (isApplicationInProduction(headers)) { phoneNumber = headers.resourceId; } else { phoneNumber = DEV_PHONE_NUMBER; } function isApplicationInProduction(headers) { return _.has(headers, 'resourceId'); } // The way he would write it // Take the right resourceId if application is in production phoneNumber = headers.resourceId ? headers.resourceId : DEV_PHONE_NUMBER; In the book I've read for example comments are considered as failure to write clean code because they are obsolete if you write small functions and often leads to non-updated comments (you modify your code and not the comment). However what I do is delete the comment and write a function with the name of the comment. Well, I would like some advice, which way/practice is better to write clean code?
Taking the code examples first. You favour: if (isApplicationInProduction(headers)) { phoneNumber = headers.resourceId; } else { phoneNumber = DEV_PHONE_NUMBER; } function isApplicationInProduction(headers) { return _.has(headers, 'resourceId'); } And your boss would write it as: // Take the right resourceId if application is in production phoneNumber = headers.resourceId ? headers.resourceId : DEV_PHONE_NUMBER; In my view, both have problems. As I read your code, my immediate thought was "you can replace that if with a ternary expression". Then I read your boss' code and thought "why's he replaced your function with a comment?". I'd suggest the optimal code is between the two: phoneNumber = isApplicationInProduction(headers) ? headers.resourceId : DEV_PHONE_NUMBER; function isApplicationInProduction(headers) { return _.has(headers, 'resourceId'); } That gives you the best of both worlds: a simplified test expression and the comment is replaced with testable code. Regarding your boss' views on code design though: Writing small functions is a pain because it forces you to move into each small functions to see what the code is doing. If the function is well-named, this isn't the case. isApplicationInProduction is self-evident and it should not be necessary to examine the code to see what it does. In fact the opposite is true: examining the code reveals less as to the intention than the function name does (which is why your boss has to resort to comments). Put everything in a main big loop even if the main loop is more than 300 lines, it is faster to read It may be faster to scan through, but to truly "read" the code, you need to be able to effectively execute it in your head. That's easy with small functions and is really, really hard with methods that are 100's of lines long. Write only small functions if you have to duplicate code I disagree. As your code example shows, small, well-named functions improve readability of code and should be used whenever eg you aren't interested in the "how", only the "what" of a piece of functionality. Don't write a function with the name of the comment, put your complex line of code (3-4 lines) with a comment above. Like this you can modify the failing code directly I really can't understand the reasoning behind this one, assuming it really is serious. It's the sort of thing I'd expect to see written in parody by The Expert Beginner twitter account. Comments have a fundamental flaw: they aren't compiled/interpreted and so can't be unit tested. The code gets modified and the comment gets left alone and you end up not knowing which is right. Writing self-documenting code is hard, and supplementary docs (even in the form of comments) are sometimes needed. But "Uncle Bob"'s view that comments are a coding failure holds true all too often. Get your boss to read the Clean Code book and try to resist making your code less readable just to satisfy him. Ultimately though, if you can't persuade him to change, you have to either fall in line or find a new boss that can code better.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/335783", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/253246/" ] }
335,891
Ever since I first learned about the Gang of Four (GoF) design patterns , at least 10 years ago, I am having the impression that these 23 patterns should be only a small sample of something much larger which I like to call the Pattern Space . This hypothetical Pattern Space consists of all recommendable solutions (known or unknown) for common object oriented software design problems. So I expected the number of known and documented design patterns to grow significantly. It did not happen. More than 20 years after the GoF book came out, only 12 additional patterns are listed in the Wikipedia article, most of which are much less popular than the original ones. (I did not include the concurrency patterns here because they cover a specific topic.) What are the reasons? Is the GoF set of patterns actually more comprehensive than I think? Did the interest in finding new patterns drop, maybe because they have been found to not be all that useful in software design? Something else?
When the Book came out, a lot of people thought that way, and there were many efforts to create "pattern libraries" or even "pattern communities." You can still find some of them: "Pattern Community", WikiWikiWeb; "The Design Patterns Study Group of New York City", Industrial Logic; "Patterns Catalog", The Hillside Group But then... Did the interest in finding new patterns drop, maybe because they are not really that useful in software design? This, very much. The point of design patterns is to improve communication between developers, but if you try to add more patterns you quickly get to the point where people can't remember them, or misremember them, or disagree on what exactly they should look like, and communication is not, in fact, improved. That already happens a lot with the GoF patterns. Personally, I'd go even further: Software design, especially good software design, is far too varied to be meaningfully captured in patterns, especially in the small number of patterns people can actually remember – and they're far too abstract for people to really remember more than a handful. So they're not helping much. And far too many people become enamoured with the concept and try to apply patterns everywhere – usually, in the resulting code you can't find the actual design between all the (completely meaningless) Singletons and Abstract Factories.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/335891", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/217956/" ] }
335,940
In programming, what are the benefits of referential transparency? RT is one of the major differences between the functional and imperative paradigms, and is often used by advocates of the functional paradigm as a clear advantage over the imperative one; but in all of their efforts, these advocates never explain why it is a benefit to me as a programmer. Sure, they'll have their academic explanations of how "pure" and "elegant" it is, but how does it make code better than less "pure" code? How does it benefit me in my day-to-day programming? Note: This is not a duplicate of What is referential transparency? The latter addresses the topic of what RT is, while this question addresses its benefits (which may not be so intuitive).
The benefit is that pure functions make your code easier to reason about. Or, in other words, side effects increase the complexity of your code. Take the example of a computeProductPrice method. A pure method would ask you for a product quantity, a currency, etc. You know that whenever the method is called with the same arguments, it will always produce the same result. You can even cache it and use the cached version. You can make it lazy and postpone its call to when you actually need it, knowing that the value won't change in the meantime. You can call the method multiple times, knowing that it won't have side effects. You can reason about the method itself in isolation from the world, knowing that all it needs are the arguments. A non-pure method will be more complex to use and debug. Since it depends on the state of variables other than the arguments, and possibly alters them, it could produce different results when called multiple times, or not have the same behavior when not called at all or called too soon or too late. Example Imagine there is a method in the framework which parses a number: decimal math.parse(string t) It doesn't have referential transparency, because it depends on: The environment variable which specifies the numbering system, that is, base 10 or something else. The variable within the math library which specifies the precision of numbers to parse. So with a value of 1, parsing the string "12.3456" will give 12.3. The culture, which defines the expected formatting. For instance, with fr-FR, parsing "12.345" will give 12345, because the decimal separator should be ",", not ".". Imagine how easy or difficult it would be to work with such a method. With the same input, you can have radically different results depending on the moment when you call the method, because something, somewhere changed the environment variable or switched the culture or set a different precision. The non-deterministic character of the method would lead to more bugs and more debugging nightmares. Calling math.parse("12345") and obtaining 5349 as an answer because some parallel code was parsing octal numbers isn't nice. How do you fix this obviously broken method? By introducing referential transparency. In other words, by getting rid of global state and moving everything to the parameters of the method: decimal math.parse(string t, base=10, precision=20, culture=cultures.en_us) Now that the method is pure, you know that no matter when you call the method, it will always produce the same result for the same arguments.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/335940", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/224882/" ] }
336,284
I'm just starting to explore SOLID and I'm unsure if reading from files and writing to files are the same responsibility. The target is the same file type; I want to read and write .pdf's in my application. The application is in Python if that makes any difference.
The reading and writing implementations have a high probability of being highly cohesive. If one changed, so would the other. High cohesion is a strong indication of a single responsibility, and the Single Responsibility Principle tells us that they should be put together in the same class. If those operations have low cohesion, chances are that splitting them improves maintainability. If, however, there are consumers that only read data without writing, or only write without reading, it is an indication that from an interface perspective you should separate these operations, as prescribed by the Interface Segregation Principle. This means that the consumers should define two interfaces that they can depend on, while the File class will implement both interfaces.
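In Python, that interface split can be sketched with typing.Protocol while the implementation stays one cohesive class; the names here are invented for illustration:

```python
from typing import Protocol

class PdfReader(Protocol):
    def read(self, path: str) -> bytes: ...

class PdfWriter(Protocol):
    def write(self, path: str, data: bytes) -> None: ...

class PdfFile:
    """One cohesive implementation: reading and writing change together."""

    def read(self, path: str) -> bytes:
        with open(path, "rb") as f:
            return f.read()

    def write(self, path: str, data: bytes) -> None:
        with open(path, "wb") as f:
            f.write(data)

def report_size(source: PdfReader, path: str) -> None:
    # This consumer depends only on the reading half of the contract.
    print(f"{path}: {len(source.read(path))} bytes")
```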
{ "source": [ "https://softwareengineering.stackexchange.com/questions/336284", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/253260/" ] }
336,430
I've worked in shops that produce life critical software and I've dealt with commenting rules that were meant to keep the code readable and potentially save lives. In my experience though the requirement becomes a brain dead chore to be ticked off of a checklist and doesn't help me stay focused on writing understandable code. It also distracts my peer reviewer from having a more meaningful conversation with me about how to make the code easier to understand. I've also graded student code that had no comments and seen why they should be marked down for neglecting them. I understand that using good names, keeping structures simple, functions short, and modules focused will keep the code understandable enough that comments can be minimized. I also understand that comments should explain why the code does what it does, not how. Given all this is it even possible to write good coding standards that capture this idea? Ones that will be relevant in a peer review but won't turn into a mindless checklist activity that produces notes no more helpful than: "You forgot to comment on line 42". An example of the kind of code this rule might require when treated as a line in a checklist: /* Display an error message */ function display_error_message( $error_message ) { /* Display the error message */ echo $error_message; /* Exit the application */ exit(); } /* -------------------------------------------------------------------- */ /* Check if the configuration file does not exist, then display an error */ /* message */ if ( !file_exists( 'C:/xampp/htdocs/essentials/configuration.ini' ) ) { /* Display an error message */ display_error_message( 'Error: Configuration file not found. Application has stopped'); } If it is possible to express this properly in a standards document, and it might simply not be, I'd like to capture an idea along these lines: Consider a comment for every line, sequence, statement, section, structure, function, method, class, package, component, ... of code. Next consider renaming and simplifying to eliminate any need for that comment so you can delete it. Check in while comments are rare. Repeat until deadline. Then repeat some more
Michael Durrant's answer is IMHO not bad, but it is not literally answering the question (as he admitted himself), so I'll try to give an answer which does: I also understand that comments should explain why the code does what it does, not how. Given all this is it even possible to write good coding standards that capture this idea? Obviously you can write a checklist for your code reviews, containing questions like "if there is a comment, does it explain why the code does what it does?" "is each line of the code - in its context - either self-explanatory enough that it does not need a comment, or if not, is it accompanied by a comment which closes that gap? Or (preferably) can the code be changed so it does not need a comment any more?" If you like, you can call this a "coding standard" (or not, if you think the term coding standard should be reserved for a list of braindead rules that are easy to check for compliance, like what the formatting of the code should look like or where to declare variables). Nothing stops you from focusing your checklist on semantic questions instead of going the easy route and putting only formal rules into it. At the end of the day, you should be sure your team needs such a checklist. To apply such questions you need programmers with some experience as reviewers, and you need to find out if it will really improve the readability of the code in a team of experts. But that is something you have to work out together with your team.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/336430", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/131624/" ] }
336,528
We're beginning to run into a problem as we get bigger, where features make it to staging for testing, but by the time everything is tested and approved new features are on staging for test. This is creating an environment where we can almost never push to production because we have a combination of tested and untested features. I'm sure this is a common problem, but I haven't found any good resources for us yet. Some Specifics: GIT on BitBucket Jenkins for scripted deployment to Azure What I'm hoping for is a way to isolate features as they move through environments and only push what's ready to prod.
It sounds like you have a few problems here: 1. Identifying features for a specific release This is a project management issue, and a coordination issue. Will this feature be released before, at the same time as, or after this other feature? If releases want to happen one feature at a time, then identify that. If features are going to be grouped into releases, then figure out what the groupings are, and enforce it with the devs and the decision-makers. Use your issue tracking or ticketing system to tag releases. Make it clear that if one feature of a specific release is a no-go, then all of them are. 2. Branching strategies Git-flow is the easy answer for issues like these, and often people use a variant of git-flow even if they don't know what it is. I'm not going to say that it's a catch-all for all problems, but it helps a lot. It sounds like you're running into an issue with non-deterministic release strategies, where features are approved scattershot and something that started development a long time ago might be released after something that started more recently - leap-frog features. Long-lived feature branches or simultaneous release branches are probably the best answer for these kinds of issues. Merge (or rebase, if you're comfortable with it) the latest from master into your long-running branches. Be careful to only merge in features that are already live, otherwise you'll run into the issues that you've been having now (too many mixed-up features on one branch). "Hotfix" or "bugfix" branches are an essential part of this process; use them for small one-off fixes that have a short QA cycle. From your description, it might even be better to not maintain an official 'development' branch. Rather, branch all features off of master, and create merged release branches once a release is identified. 3. Environments Don't match up git branches to your environments, except for production == master. The 'development' branch should be assumed broken. Release branches are pushed to test environments, whether that's a QA environment or a staging environment. If you need to, push a specific feature branch to an environment. If you have more than one feature branch that needs to be released separately but they are being tested at the same time..... ¯\_(ツ)_/¯ .... spin up another server? Maybe merge them together into a throw-away branch... commit fixes/changes to the original branches and re-merge into the throw-away branch; do final approval and UAT on individual release branches. 4. Removing non-approved features from a branch This is what the above thoughts are trying to avoid, because this is without a doubt the most painful thing to try and do. If you're lucky, features have been merged into your development or test branches atomically using merge commits. If you're unlucky, devs have committed directly to the development/test branch. Either way, if you're preparing for a release and have unapproved changes, you'll need to use Git to back out those unapproved commits from the release branch; the best idea is to do that before testing the release. Best of luck.
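As a concrete sketch of points 2 and 3 (branch and feature names are invented), cutting a release might look like this:

```bash
# Group the features approved for this release:
git checkout master
git pull
git checkout -b release/2017-02
git merge --no-ff feature/invoice-export
git merge --no-ff feature/new-login-page
git push -u origin release/2017-02      # this branch is what goes to staging/QA

# A feature that fails testing never blocks the others: fix it on its own
# branch and re-merge, or rebuild the release branch from master without it.

# Small urgent fixes skip the queue entirely:
git checkout -b hotfix/vat-rounding master
```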
{ "source": [ "https://softwareengineering.stackexchange.com/questions/336528", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/62802/" ] }