347,664
I have a method that I want to write unit tests for. I'm going to keep it fairly generic as I don't want to discuss the implementation of the method, just the testing of it. The method is: public void HandleItem(item a) { CreateNewItem(); UpdateStatusOnPreviousItem(); SetNextRunDate(); } So this class has one public method that then calls some private methods to perform the logic. So when writing the unit test I want to check all three things have been done. As they are all called in the same run I thought that I could do it as one test: public void GivenItem_WhenRun_Thenxxxxx { HandleItem(item); // Assert item has been created // Assert status has been set on the previous item // Assert run date has been set } But I thought I could also write it as three separate tests: public void GivenItem_WhenRun_ThenItemIsCreated() { HandleItem(item); } public void GivenItem_WhenRun_ThenStatusIsUpdatedOnPreviousItem() { HandleItem(item); } public void GivenItem_WhenRun_ThenRunDateIsSet() { HandleItem(item); } So to me this seems nicer as it's essentially listing requirements, but then all three are related and do require exactly the same work performed on the tested method, so am running the same code 3 times. Is there a recommended approach to take with this? Thanks
There is a subtle difference between the two approaches. In the first case, when the first Assert fails, the other two are not run any more. In the second case, all three tests are always run, even if one fails. Depending on the nature of the tested functionality, this may or may not fit your case well: if it makes sense to run the three asserts independently of one another, because when one fails the other two might still pass, then the second approach has the advantage that you get the full test results for all 3 tests in one run. This can be beneficial if you have notable build times, since it gives you a chance to fix up to 3 errors at once before doing the next build. If, however, a failure of the first test will always imply that the other two tests will also fail, then it is probably better to use the first approach (since it does not make much sense to run a test if you already know beforehand that it will fail).
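One way to get the readability of three requirement-named tests without repeating the Arrange/Act code is a shared setup. A minimal sketch, assuming NUnit and inventing a tiny Item/ItemHandler stand-in for the class under test (your real class will look different):

```csharp
using System;
using NUnit.Framework;

// Minimal stand-ins so the sketch compiles on its own; in your code these
// would be the real Item and the class that owns HandleItem.
public class Item { public string Status; public DateTime? NextRunDate; }

public class ItemHandler
{
    public Item PreviousItem { get; } = new Item();
    public Item CreatedItem { get; private set; }

    public void HandleItem(Item a)
    {
        CreatedItem = new Item();                    // CreateNewItem()
        PreviousItem.Status = "Processed";           // UpdateStatusOnPreviousItem()
        CreatedItem.NextRunDate = DateTime.UtcNow;   // SetNextRunDate()
    }
}

[TestFixture]
public class HandleItemTests
{
    private ItemHandler _handler;

    [SetUp]
    public void SetUp()
    {
        // Shared Arrange + Act: runs before each test, so every test still
        // exercises the production code, but the setup is written only once.
        _handler = new ItemHandler();
        _handler.HandleItem(new Item());
    }

    [Test]
    public void GivenItem_WhenRun_ThenItemIsCreated() =>
        Assert.That(_handler.CreatedItem, Is.Not.Null);

    [Test]
    public void GivenItem_WhenRun_ThenStatusIsUpdatedOnPreviousItem() =>
        Assert.That(_handler.PreviousItem.Status, Is.EqualTo("Processed"));

    [Test]
    public void GivenItem_WhenRun_ThenRunDateIsSet() =>
        Assert.That(_handler.CreatedItem.NextRunDate, Is.Not.Null);
}
```

With this shape each test still reports its own requirement by name, but the shared work is paid for once per test run of the fixture, not written three times.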
{ "source": [ "https://softwareengineering.stackexchange.com/questions/347664", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/230999/" ] }
347,697
Does / why does Java need to have void methods? Reference : Any method declared void doesn't return a value. As far as I can think, every use of void would be better served by returning a status flag, the object being invoked, or null . This would make every call a statement that is assignable, and would facilitate builder patterns and method chaining. Methods that are only invoked for their effects would usually return a boolean or a generic Success type or throw an exception on failure.
Because C has a void type, and Java was designed to follow many of the conventions of the C language family. There are many functions that you don't want to have return a value. What are you going to do with "a generic Success type" anyway? In fact, return values to indicate success are even less important in Java than in C, because Java has exceptions to indicate failure and C doesn't.
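To make the "exceptions instead of status flags" point concrete, here is a small sketch. It is written in C# rather than Java only to keep all examples added to this collection in one language; the void-plus-exceptions model is the same in both, and the file names are made up:

```csharp
using System;
using System.IO;

class VoidVersusStatusFlag
{
    // Idiomatic: the method returns nothing; failure is signalled by an
    // exception the caller cannot silently ignore.
    static void SaveReport(string path, string content)
    {
        File.WriteAllText(path, content);   // throws IOException and friends on failure
    }

    // The question's alternative: a status flag. It is assignable, but the
    // caller is free to drop it on the floor and never notice the failure.
    static bool TrySaveReport(string path, string content)
    {
        try { File.WriteAllText(path, content); return true; }
        catch (IOException) { return false; }
    }

    static void Main()
    {
        SaveReport("report.txt", "hello");     // nothing useful to assign anyway
        TrySaveReport("report.txt", "hello");  // result silently discarded
    }
}
```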
{ "source": [ "https://softwareengineering.stackexchange.com/questions/347697", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/270504/" ] }
347,748
I'm working on a problem that requires a "sub-linear" solution. A quick search for sub-linear will return a lot of this... ... where the sub-linear line is modelled as logarithmic/asymptotic. But I had come to the understanding that sub-linear was anything that remained below the linear baseline as they both tended towards infinity. In this plot... ... the sub-linear result is still "linear-looking" (i.e. y = mx + b ), but falls below the linear baseline. So which is it? Does it have to be asymptotic/logarithmic? Or just trending away from the linear baseline solution?
No, it cannot be. It looks like it can from this graph because it is a log-log plot, which means both the x and y axes are compressed. Any function which satisfies the relation y = a*x^c for some constants a and c will appear as a straight line in a log-log plot. So the simple answer is that the "sub-linear" case is not a straight line. This is evident from the legend too. The sub-linear case is labeled O(N^0.78). In a log-log plot, that would appear as a straight line with a slope of 0.78. However, in a regular plot it visibly curves away below a straight line. To be clear, it does not need to be logarithmic as you ask in the question. Any curve which grows slower than a straight line in the asymptotic case is sub-linear. A logarithmic curve is just one example.
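The straight-line appearance on a log-log plot follows directly from taking logarithms of the power law above; using the same constants a and c:

```latex
y = a x^{c} \quad\Longrightarrow\quad \log y = \log a + c \log x
```

So in log-log coordinates every power law plots as a line whose slope is the exponent c; the labeled O(N^0.78) case is a line of slope 0.78, and any exponent c < 1 (like 0.78), or a logarithm, grows more slowly than the linear baseline for large N, which is what makes it sub-linear.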
{ "source": [ "https://softwareengineering.stackexchange.com/questions/347748", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/208530/" ] }
347,852
I used to code in Python a lot. Now, for work reasons, I code in Java. The projects I do are rather small, and possibly Python would work better, but there are valid non-engineering reasons to use Java (I can't go into details). Java syntax is no issue; it is just another language. But apart from the syntax, Java has a culture, a set of development methods, and practices that is considered "correct". And for now I am completely failing to "grok" that culture. So I would really appreciate explanations or pointers in the right direction. A minimal complete example is available in a Stack Overflow question that I started: https://stackoverflow.com/questions/43619566/returning-a-result-with-several-values-the-java-way/43620339 I have a task - parse (from a single string) and handle a set of three values. In Python it is a one-liner (tuple), in Pascal or C a 5-liner record/struct. According to the answers, the equivalent of a struct is available in Java syntax and a triple is available in a widely-used Apache library - yet the "correct" way of doing it is actually by creating a separate class for the value, complete with getters and setters. Someone was very kind to provide a complete example. It was 47 lines of code (well, some of these lines were blanks). I understand that a huge development community is likely not "wrong". So this is a problem with my understanding. Python practices optimize for readability (which, in that philosophy, leads to maintainability) and after that, development speed. C practices optimize for resource usage. What do Java practices optimize for? My best guess is scalability (everything should be in a state ready for a millions-LOC project), but it is a very weak guess.
The Java Language I believe all these answers are missing the point by trying to ascribe intent to the way Java works. Java's verbosity does not stem from it being object oriented, as Python and many other languages are too yet have terser syntax. Java's verbosity doesn't come from its support of access modifiers either. Instead, it's simply how Java was designed and has evolved. Java was originally created as a slightly improved C with OO. As such Java has 70s-era syntax. Furthermore, Java is very conservative about adding features in order to retain backward compatibility and to allow it to stand the test of time. Had Java added trendy features like XML literals in 2005 when XML was all the rage the language would have been bloated with ghost features that nobody cares about and that limit its evolution 10 years later. Therefore Java simply lacks a lot of modern syntax to express concepts tersely. However, there's nothing fundamental preventing Java from adopting that syntax. For example, Java 8 added lambdas and method references, greatly reducing verbosity in many situations. Java could similarly add support for compact data type declarations such as Scala's case classes. But Java simply hasn't done so. Do note that custom value types are on the horizon and this feature may introduce a new syntax for declaring them. I suppose we will see. The Java Culture The history of enterprise Java development has largely led us to the culture we see today. In the late 90s/early 00s, Java became an extremely popular language for server-side business applications. Back then those applications were largely written ad-hoc and incorporated many complex concerns, such as HTTP APIs, databases, and processing XML feeds. In the 00s it became clear that many of these applications had a lot in common and frameworks to manage these concerns, like the Hibernate ORM, the Xerces XML parser, JSPs and the servlet API, and EJB, became popular. However, while these frameworks reduced the effort to work in the particular domain that they set to automate, they required configuration and coordination. At the time, for whatever reason, it was popular to write frameworks to cater to the most complex use case and therefore these libraries were complicated to set up and integrate. And over time they grew increasingly complex as they accumulated features. Java enterprise development gradually became more and more about plugging together third party libraries and less about writing algorithms. Eventually the tedious configuration and management of enterprise tools became painful enough that frameworks, most notably the Spring framework, came along to manage the management. You could put all your configuration in one place, the theory went, and the configuration tool would then configure the pieces and wire them together. Unfortunately these "framework frameworks" added more abstraction and complexity on top of the whole ball of wax. Over the past few years more lightweight libraries have grown in popularity. Nonetheless an entire generation of Java programmers came of age during the growth of heavy enterprise frameworks. Their role models, those developing the frameworks, wrote factory factories and proxy configuration bean loaders. They had to configure and integrate these monstrosities day-to-day. And as a result the culture of the community as a whole followed the example of these frameworks and tended to badly over-engineer.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/347852", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/270721/" ] }
348,102
Why do many software developers violate the open/closed principle by modifying many things, like renaming functions, which will break the application after upgrading? This question came to my mind after the fast and continuous succession of versions of the React library. Every little while I notice many changes in syntax, component names, etc. An example from the coming version of React: New Deprecation Warnings The biggest change is that we've extracted React.PropTypes and React.createClass into their own packages. Both are still accessible via the main React object, but using either will log a one-time deprecation warning to the console when in development mode. This will enable future code size optimizations. These warnings will not affect the behavior of your application. However, we realize they may cause some frustration, particularly if you use a testing framework that treats console.error as a failure. Are these changes considered a violation of that principle? As a beginner to something like React, how do I learn it with these fast changes in the library (it's so frustrating)?
IMHO JacquesB's answer, though containing a lot of truth, shows a fundamental misunderstanding of the OCP. To be fair, your question already expresses this misunderstanding, too - renaming functions breaks backwards compatibility , but not the OCP. If breaking compatibility seems necessary (or maintaining two versions of the same component to not break compatibility), the OCP was already broken before! As Jörg W Mittag already mentioned in his comments, the principle does not say "you can't modify the behavior of a component" - it says one should try to design components in a way that they are open to being reused (or extended) in several ways, without the need for modification. This can be done by providing the right "extension points", or, as mentioned by @AntP, "by decomposing a class/function structure to the point where every natural extension point is there by default." IMHO following the OCP has nothing in common with "keeping the old version around unchanged for backwards compatibility" ! Or, quoting @DerekElkin's comment below: The OCP is advice on how to write a module [...], not about implementing a change management process that never allows modules to change. Good programmers use their experience to design components with the "right" extension points in mind (or - even better - in a way that no artificial extension points are needed). However, to do this correctly and without unnecessary overengineering, you need to know beforehand what future use cases of your component might look like. Even experienced programmers can't look into the future and know all upcoming requirements beforehand. And that is why sometimes backwards compatibility needs to be violated - no matter how many extension points your component has, or how well it follows the OCP with respect to certain types of requirements, there will always be a requirement which cannot be implemented easily without modifying the component.
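A small C# sketch of what such an extension point can look like; the report-formatting scenario and all names here are invented purely for illustration:

```csharp
using System.Collections.Generic;

// The extension point: the exporter is open for extension through this interface...
public interface IReportFormatter
{
    string Format(IEnumerable<string> lines);
}

public class CsvFormatter : IReportFormatter
{
    public string Format(IEnumerable<string> lines) => string.Join(",", lines);
}

// ...and closed for modification: supporting a new format means adding a new
// IReportFormatter implementation, not editing ReportExporter itself.
public class ReportExporter
{
    private readonly IReportFormatter _formatter;

    public ReportExporter(IReportFormatter formatter)
    {
        _formatter = formatter;
    }

    public string Export(IEnumerable<string> lines) => _formatter.Format(lines);
}
```

Which is also exactly where the caveat above bites: this only helps for the kind of variation the interface anticipates; a requirement that does not fit the extension point (say, streaming output line by line) still forces a modification, and possibly a compatibility break.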
{ "source": [ "https://softwareengineering.stackexchange.com/questions/348102", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/26930/" ] }
348,109
I am fairly new to the microservice architecture. I am trying to build a web application in this kind of way. After some research online about microservices, and having experience with Spring and Angular, I want to build a web application with a separate front-end and multiple (separate) back-end REST APIs. The way I want to design my application is a JavaScript/HTML/CSS front end (developed by me, no external clients). The front end will communicate via an API gateway (reverse proxy) with my back-end REST APIs. The REST APIs will be different microservices. I want the user to sign in once (SSO) to be authenticated and authorized for the REST APIs. Some APIs will be public, but some must be protected. Also, this could be on different levels (admin and user APIs).

Front end -> API Gateway --> Authorization server
                         |-> User REST API (User and Admin)
                         |-> Statistics REST API (User and Admin)
                         |-> Admin user overview REST API (Admin only)

The thing that worries me is the user authentication/authorization. (It's a big deal in my app.) This is where I got lost. I did some research online for this and there are a lot of people recommending OAuth2 for this. Although I thought OAuth2 is for application-to-application authorization, there seems to be an authorization flow for JavaScript-based web apps (the implicit flow). I want the user to log in in the front end, no social authentication or anything - a completely self-developed system. It is actually language agnostic (almost every language can implement OAuth2). I am looking for the best way to implement secure authentication and authorization, but I was wondering whether OAuth2 is the right thing for this architecture. Is the OAuth2 implicit flow good enough for my own developed front end? Or could/should I use the password flow? If OAuth2 is not suitable for this goal, what should be used instead? I read about JWT but I am not sure if that is what I need. And if OAuth2 is suitable for my situation, should I use the implicit or password flow? I hope you guys can help me decide and explain things a bit. Thanks!
{ "source": [ "https://softwareengineering.stackexchange.com/questions/348109", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/271117/" ] }
348,295
I have been tasked with writing unit tests for an existing application. After finishing my first file, I have 717 lines of test code for 419 lines of original code. Is this ratio going to become unmanageable as we increase our code coverage? My understanding of unit testing was to test each method in the class to ensure that every method worked as expected. However, in the pull request my tech lead noted that I should focus on higher level testing. He suggested testing 4-5 use cases that are most commonly used with the class in question, rather than exhaustively testing each function. I trust my tech lead's comment. He has more experience than I do, and he has better instincts when it comes to designing software. But how does a multi-person team write tests for such an ambiguous standard; that is, how do I know my peers and I share the same idea for "most common use cases"? To me, 100% unit test coverage is a lofty goal, but even if we only reached 50%, we would know that 100% of that 50% was covered. Otherwise, writing tests for part of each file leaves a lot of room to cheat.
Yes, with 100% coverage you will write some tests you don't need. Unfortunately, the only reliable way to determine which tests you don't need is to write all of them, then wait 10 years or so to see which ones never failed. Maintaining a lot of tests is not usually problematic. Many teams have automated integration and system tests on top of 100% unit test coverage. However, you are not in a test maintenance phase, you are playing catch up. It's a lot better to have 100% of your classes at 50% test coverage than 50% of your classes at 100% test coverage, and your lead seems to be trying to get you to allocate your time accordingly. After you have that baseline, then the next step is usually pushing for 100% in files that are changed going forward.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/348295", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/236972/" ] }
348,337
I have been adapting poor-man's CQRS 1 for quite some time now because I love the flexibility of having granular data in one data store, providing great possibilities for analysis and thus increasing business value, and, when needed, another one for reads containing denormalized data for increased performance. But unfortunately pretty much from the beginning I have been struggling with the problem of where exactly I should place business logic in this type of architecture. From what I understand, a command is a means to communicate intent and does not have ties to a domain by itself. Commands are basically data (dumb - if you wish) transfer objects. This is to make commands easily transferable between different technologies. The same applies to events as responses to successfully completed commands. In a typical DDD application the business logic resides within entities, value objects and aggregate roots; they are rich in both data and behavior. But a command is not a domain object, and thus it should not be limited to domain representations of data, because that puts too much strain on them. So the real question is: Where exactly is the logic? I have found that I tend to face this struggle most often when trying to construct a quite complicated aggregate which sets some rules about combinations of its values. Also, when modeling domain objects I like to follow the fail-fast paradigm, knowing that when an object reaches a method it is in a valid state. Let's say an aggregate Car uses two components: Transmission and Engine . Both the Transmission and Engine value objects are represented as super types and have corresponding sub types: Automatic and Manual transmissions, or Petrol and Electric engines respectively. In this domain, living on its own, a successfully created Transmission , be it Automatic or Manual , or either type of Engine is completely fine. But the Car aggregate introduces a few new rules, applicable only when Transmission and Engine objects are used in the same context. Namely: When a car uses an Electric engine the only allowed transmission type is Automatic . When a car uses a Petrol engine it may have either type of Transmission . I could catch this component combination violation at the level of creating a command, but as I have stated before, from what I understand that should not be done, because the command would then contain business logic, which should be limited to the domain layer. One of the options is to move this business logic validation to the command validator itself, but this does not seem right either. It feels like I would be deconstructing the command, checking its properties retrieved using getters, comparing them within the validator and inspecting the results. That screams violation of the Law of Demeter to me. Discarding the mentioned validation option because it does not seem viable, it seems like one should use the command and construct the aggregate from it. But where should this logic exist? Should it be within the command handler responsible for handling a concrete command? Or should it perhaps be within the command validator (I don't like this approach either)? I am currently using the command and creating the aggregate from it within the responsible command handler. But when I do this, should I have a command validator, it would not contain anything at all, because if the CreateCar command exists it contains components which I know are valid in isolation, but the aggregate might say otherwise.
Let's imagine a different scenario mixing different validation processes - creating a new user using a CreateUser command. The command contains the Id of the user that is to be created and their Email . The system states the following rules for the user's email address: it must be unique, must not be empty, and must have at most 100 characters (the max length of a db column). In this case, even though having a unique email is a business rule, checking it in an aggregate makes very little sense, because I would need to load the entire set of current emails in the system into memory and check the email in the command against the aggregate ( Eeeek! Something, something, performance.). Because of that, I would move this check to the command validator, which would take a UserRepository as a dependency and use the repository to check whether a user with the email present in the command already exists. When it comes to this, it suddenly makes sense to put the other two email rules in the command validator as well. But I have a feeling the rules should really be present within the User aggregate, and that the command validator should only check the uniqueness; if validation succeeds I should proceed to create the User aggregate in the CreateUserCommandHandler and pass it to a repository to be saved. I feel like this because the repository's save method is likely to accept an aggregate, which ensures that once the aggregate is passed all invariants are fulfilled. When the logic (e.g. the non-emptiness) is only present within the command validation itself, another programmer could completely skip this validation and call the save method in the UserRepository with a User object directly, which could lead to a fatal database error, because the email might have been too long. How do you personally handle these complex validations and transformations? I am mostly happy with my solution, but I feel like I need affirmation that my ideas and approaches are not completely stupid before I can be really happy with the choices. I am entirely open to completely different approaches. If you have something you have personally tried that worked very well for you, I would love to see your solution. 1 Working as a PHP developer responsible for creating RESTful systems, my interpretation of CQRS deviates a little from the standard async-command-processing approach, such as sometimes returning results from commands due to the need to process commands synchronously.
The following answer is in the context of the CQRS style promoted by cqrs.nu , in which commands arrive directly on the aggregates. In this architectural style the application services are replaced by an infrastructure component (the CommandDispatcher ) that identifies the aggregate, loads it, sends it the command and then persists the aggregate (as a series of events if Event sourcing is used). So the real question is: Where exactly is the logic? There are multiple kinds of (validation) logic. The general idea is to execute the logic as early as possible - fail fast if you want. The situations are as follows:

1. The structure of the command object itself; the command's constructor has some required fields that must be present for the command to be created. This is the first and fastest validation, and it is obviously contained in the command.

2. Low-level field validation, like the non-emptiness of some fields (like the username) or the format (a valid email address). This kind of validation should be contained inside the command itself, in the constructor. There is another style of having an isValid method, but this seems pointless to me, as someone would have to remember to call this method when in fact successful command instantiation should suffice.

3. Separate command validators , classes that have the responsibility to validate a command. I use this kind of validation when I need to check information from multiple aggregates or external sources. You could use this to check the uniqueness of a username. Command validators could have any dependencies injected, like repositories. Keep in mind that this validation is eventually consistent with the aggregate (i.e. by the time the user gets created, another user with the same username could have been created in the meantime)! Also, do not try to put logic here that should reside inside the aggregate! Command validators are different from Sagas/Process managers, which generate commands based on events.

4. The aggregate methods that receive and process the commands. This is the last (kind of) validation that occurs. The aggregate extracts the data from the command and, using some core business logic, accepts it (performing changes to its state) or rejects it. This logic is checked in a strongly consistent manner. This is the last line of defense. In your example, the rule "when a car uses an Electric engine the only allowed transmission type is Automatic " should be checked here.

I feel like this because the repository's save method is likely to accept an aggregate which ensures that once the aggregate is passed all invariants are fulfilled. When the logic (e.g. the non-emptiness) is only present within the command validation itself another programmer could completely skip this validation and call the save method in the UserRepository with a User object directly which could lead to a fatal database error, because the email might have been too long.

Using the above techniques nobody can create invalid commands or bypass the logic inside the aggregates. Command validators are automatically loaded and called by the CommandDispatcher , so nobody can send a command directly to the aggregate. One could call a method on the aggregate passing a command, but could not persist the changes, so it would be pointless/harmless to do so.
Working as a PHP developer responsible for creating RESTful systems, my interpretation of CQRS deviates a little from the standard async-command-processing approach, such as sometimes returning results from commands due to the need to process commands synchronously. I'm also a PHP programmer and I don't return anything from my command handlers (aggregate methods of the form handleSomeCommand ). I do, however, quite often return information to the client/browser in the HTTP response , for example the ID of the newly created aggregate root or something from a read-model, but I never return (really never ) anything from my aggregate command methods. The simple fact that the command was accepted (and processed - we are talking about synchronous PHP processing, right?!) is sufficient. We return something to the browser (and are still doing CQRS by the book) because CQRS is not a high level architecture . A sketch of how command validators can work is shown below.
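The sketch below is written in C# rather than PHP, purely for brevity and consistency with the other examples added to this collection, and every name in it (ICommandValidator, IUserRepository, and so on) is an illustrative assumption rather than part of any particular framework:

```csharp
using System;

// 1) + 2) Structural and low-level field validation live in the command's
// constructor: an invalid CreateUser command simply cannot be instantiated.
public class CreateUser
{
    public CreateUser(Guid id, string email)
    {
        if (id == Guid.Empty)
            throw new ArgumentException("Id is required.");
        if (string.IsNullOrWhiteSpace(email) || email.Length > 100)
            throw new ArgumentException("A non-empty email of at most 100 characters is required.");

        Id = id;
        Email = email;
    }

    public Guid Id { get; }
    public string Email { get; }
}

// 3) A separate command validator for checks that need external state.
public interface ICommandValidator<in TCommand>
{
    void Validate(TCommand command);
}

public interface IUserRepository
{
    bool EmailExists(string email);
}

public class CreateUserValidator : ICommandValidator<CreateUser>
{
    private readonly IUserRepository _users;

    public CreateUserValidator(IUserRepository users)
    {
        _users = users;
    }

    public void Validate(CreateUser command)
    {
        // Eventually consistent uniqueness check against existing users.
        if (_users.EmailExists(command.Email))
            throw new InvalidOperationException("A user with this email already exists.");
    }
}
```

The dispatcher would run every registered validator before the command reaches the aggregate, and 4), the strongly consistent rules (like the electric-engine/automatic-transmission constraint from the question), stay inside the aggregate's own command-handling method, the last line of defense.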
{ "source": [ "https://softwareengineering.stackexchange.com/questions/348337", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/193669/" ] }
348,420
Why would someone develop their own language to use only inside their firm when there are plenty of other languages that can help with their libraries, logic, etc.? Isn't it much simpler to go with the flow and use something else rather than developing your own language?
It is much easier to understand when you realize that it is often the product of a long process and not someone just saying "we want to make a new language". It usually starts with the idea that some problem can be solved using a simple domain-specific language. The intention is often to have non-experts use this language, so it is simple and often lacks features like strong typing and modules. So far so good. But then, people start hitting problems that cannot be solved by the language. So new "features" are slowly added to solve those problems. And as the process is slow and the features infrequent, there is no motivation to design those new features properly, as long as the problems are solved. Over time, the new language gains features that turn it from a simple domain-specific language into a complex "general-purpose" language, often with conflicting, confusing semantics and hard-to-follow syntax rules. And by the time people realize they have created such a massive beast, it is already too late to kill it and replace it with a properly designed language. There are a few languages that evolved like this that are not bound to specific companies cough JavaScript cough PHP cough .
{ "source": [ "https://softwareengineering.stackexchange.com/questions/348420", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/198652/" ] }
348,476
According to Wikipedia, the term "bootstrapping" in the context of writing compilers means this : In computer science, bootstrapping is the process of writing a compiler (or assembler) in the source programming language that it intends to compile. Applying this technique leads to a self-hosting compiler. And I can understand how that would work. However, the story seems to be a little different for interpreters. Now, of course, it is possible to write a self-hosting interpreter. That's not what I'm asking. What I'm actually asking is: Is it possible to make a self-hosted interpreter independent of the original, first interpreter ? To explain what I mean, consider this example: You write your first interpreter version in language X , and the interpreter is for a new language you're creating, called Y . You first use language X 's compiler to create an executable. You can now interpret files written in your new language Y using the interpreter written in language X . Now, as far as I understand, to be able to "bootstrap" the interpreter you wrote in language X , you'd need to rewrite the interpreter in language Y . But here is the catch: even if you do rewrite the entire interpreter in language Y , you're still going to need the original interpreter you wrote in language X . Because to run the interpreter in language Y , you're going to have to interpret the source files. But what exactly is going to interpret the source files? Well, it can't be nothing, of course, so you're forced to still use the first interpreter. No matter how many new interpreters you write in language Y , you're always going to have to use the first interpreter written in X to interpret the subsequent interpreters. This seems to be a problem simply because of the nature of interpreters. However , on the flip side, this Wikipedia article on interpreters actually talks about self-hosting interpreters . Here is a small excerpt which is relevant: A self-interpreter is a programming language interpreter written in a programming language which can interpret itself; an example is a BASIC interpreter written in BASIC. Self-interpreters are related to self-hosting compilers. If no compiler exists for the language to be interpreted, creating a self-interpreter requires the implementation of the language in a host language (which may be another programming language or assembler). By having a first interpreter such as this, the system is bootstrapped and new versions of the interpreter can be developed in the language itself. It's still not clear to me, though, how exactly this would be done. It seems that no matter what, you're always going to be forced to use the first version of your interpreter written in the host language. Now the article mentioned above links to another article in which Wikipedia gives some examples of supposed self-hosting interpreters . Upon closer inspection though, it seems that the main "interpreting" part of many of those self-hosting interpreters (especially some of the more common ones such as PyPy or Rubinius) is actually written in other languages such as C++ or C. So is what I describe above possible? Can a self-hosted interpreter be independent of its original host? If so, how exactly would this be done?
The short answer is: you are right in your suspicion, you always need either another interpreter written in X or a compiler from Y to some other language for which you have an interpreter already. Interpreters execute, compilers only translate from one language to another; at some point in your system there must be an interpreter … even if it's just the CPU. No matter how many new interpreters you write in language Y , you're always going to have to use the first interpreter written in X to interpret the subsequent interpreters. This seems to be a problem simply because of the nature of interpreters. Correct. What you can do is write a compiler from Y to X (or another language for which you have an interpreter), and you can even do that in Y . Then you can run your Y compiler written in Y on the Y interpreter written in X (or on the Y interpreter written in Y running on the Y interpreter written in X , or on the Y interpreter written in Y running on the Y interpreter written in Y running on the Y interpreter written in X , or … ad infinitum) to compile your Y interpreter written in Y to X , so that you can then execute it on an X interpreter. That way, you have gotten rid of your Y interpreter written in X , but now you need the X interpreter (we know that we already have one, though, since otherwise we couldn't have run the Y interpreter written in X ), and you had to write a Y -to- X compiler first. However , on the flip side, the Wikipedia article on interpreters actually talks about self-hosting interpreters. Here is a small excerpt which is relevant: A self-interpreter is a programming language interpreter written in a programming language which can interpret itself; an example is a BASIC interpreter written in BASIC. Self-interpreters are related to self-hosting compilers. If no compiler exists for the language to be interpreted, creating a self-interpreter requires the implementation of the language in a host language (which may be another programming language or assembler). By having a first interpreter such as this, the system is bootstrapped and new versions of the interpreter can be developed in the language itself. It's still not clear to me, though, how exactly this would be done. It seems that no matter what, you're always going to be forced to use the first version of your interpreter written in the host language. Correct. Note that the Wikipedia article explicitly says that you need a second implementation of your language, and it doesn't say that you can get rid of the first. Now the article mentioned above links to another article in which Wikipedia gives some examples of supposed self-hosting interpreters. Upon closer inspection though, it seems that the main "interpreting" part of many of those self-hosting interpreters (especially some of the more common ones such as PyPy or Rubinius) is actually written in other languages such as C++ or C. Again, correct. Those are really bad examples. Take Rubinius, for example. Yes, it's true that the Ruby part of Rubinius is self-hosted, but it is a compiler, not an interpreter: it compiles Ruby source code to Rubinius bytecode. The interpreter part OTOH isn't self-hosted: it interprets Rubinius bytecode, but it is written in C++. So, calling Rubinius a "self-hosted interpreter" is wrong: the self-hosted part isn't an interpreter , and the interpreter part isn't self-hosted . PyPy is similar, but even more incorrect: it isn't even written in Python in the first place, it is written in RPython, which is a different language.
It is syntactically similar to Python, semantically an "extended subset", but it actually is a statically-typed language roughly on the same abstraction level as Java, and its implementation is a compiler with multiple backends which compiles RPython to C source code, ECMAScript source code, CIL bytecode, JVM bytecode, or Python source code. So is what I describe above possible? Can a self-hosted interpreter be independent of its original host? If so, how exactly would this be done? No, not on its own. You would either need to keep the original interpreter or write a compiler and compile your self-interpreter. There are some meta-circular VMs, such as Klein (written in Self ) and Maxine (written in Java). Note, however, that here the definition of "meta-circular" is yet different: these VMs are not written in the language they execute: Klein executes Self bytecode but is written in Self, Maxine executes JVM bytecode but is written in Java. However, the Self / Java source code of the VM actually gets compiled to Self / JVM bytecode and then executed by the VM, so by the time the VM gets executed, it is in the language it executes. Phew. Note also that this is different from VMs such as the SqueakVM and the Jikes RVM . Jikes is written in Java, and the SqueakVM is written in Slang (a statically typed syntactic and semantic subset of Smalltalk roughly on the same abstraction level as a high-level assembler), and both get statically compiled to native code before they are run. They don't run inside of themselves. You can , however, run them on top of themselves (or on top of another Smalltalk VM / JVM). But that is not "meta-circular" in this sense. Maxine and Klein, OTOH, do run inside of themselves; they execute their own bytecode using their own implementation. This is truly mind-bending! It allows some cool optimization opportunities, for example since the VM executes itself together with the user program, it can inline calls from the user program to the VM and vice versa, e.g. calls to the garbage collector or the memory allocator can be inlined into user code, and reflective callbacks in the user code can be inlined into the VM. Also, all of the clever optimization tricks that modern VMs do, where they watch the executing program and optimize it depending on the actual workload and data, the VM can apply those same tricks to itself while it is executing the user program and the user program is executing its specific workload. In other words, the VM highly specializes itself for that particular program running that particular workload. However, notice how I skirted around the use of the word "interpreter" above, and always used "execute"? Well, those VMs aren't built around interpreters, they are built around (JIT) compilers. There was an interpreter added to Maxine later, but you always need the compiler: you have to run the VM once on top of another VM (e.g. Oracle HotSpot in the case of Maxine), so that the VM can (JIT) compile itself. In the case of Maxine, it will JIT compile its own bootup phase, then serialize that compiled native code to a bootstrap VM image and stick a very simple bootloader in front (the only component of the VM written in C, although that's just for convenience, it could be in Java also). Now you can use Maxine to execute itself.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/348476", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/242544/" ] }
348,535
Ever tried to sum up all numbers from 1 to 2,000,000 in your favorite programming language? The result is easy to calculate manually: 2,000,001,000,000, which is some 900 times larger than the maximum value of a signed 32-bit integer. C# prints out -1453759936 - a negative value! And I guess Java does the same. That means there are some common programming languages which ignore Arithmetic Overflow by default (in C#, there are hidden options for changing that). That's a behavior which looks very risky to me, and wasn't the crash of Ariane 5 caused by such an overflow? So: what are the design decisions behind such a dangerous behavior? Edit: The first answers to this question point out the excessive cost of checking. Let's execute a short C# program to test this assumption: Stopwatch watch = Stopwatch.StartNew(); checked { for (int i = 0; i < 200000; i++) { int sum = 0; for (int j = 1; j < 50000; j++) { sum += j; } } } watch.Stop(); Console.WriteLine(watch.Elapsed.TotalMilliseconds); On my machine, the checked version takes 11015ms, while the unchecked version takes 4125ms. I.e. the checking steps take almost twice as long as adding the numbers (in total almost 3 times the original time). But with the 10,000,000,000 repetitions, the time taken by a check is still less than 1 nanosecond. There may be situations where that is important, but for most applications, that won't matter. Edit 2: I recompiled our server application (a Windows service analyzing data received from several sensors, quite some number crunching involved) with the /p:CheckForOverflowUnderflow="false" parameter (normally, I switch the overflow check on) and deployed it on a device. Nagios monitoring shows that the average CPU load stayed at 17%. This means that the performance hit found in the made-up example above is totally irrelevant for our application.
There are 3 reasons for this: The cost of checking for overflows (for every single arithmetic operation) at run-time is excessive. The complexity of proving that an overflow check can be omitted at compile-time is excessive. In some cases (e.g. CRC calculations, big number libraries, etc) "wrap on overflow" is more convenient for programmers.
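For reference, C# lets you pick either behavior explicitly, which is what the question's /p:CheckForOverflowUnderflow switch toggles project-wide; a small sketch of the two modes:

```csharp
using System;

class OverflowDemo
{
    static void Main()
    {
        int max = int.MaxValue;

        // Default (unchecked) integer arithmetic silently wraps around.
        int wrapped = unchecked(max + 1);
        Console.WriteLine(wrapped);            // -2147483648

        // With overflow checking enabled, the same addition throws instead.
        try
        {
            int boom = checked(max + 1);
            Console.WriteLine(boom);           // never reached
        }
        catch (OverflowException)
        {
            Console.WriteLine("Arithmetic overflow detected.");
        }
    }
}
```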
{ "source": [ "https://softwareengineering.stackexchange.com/questions/348535", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/214847/" ] }
348,649
Which is the better name for a method that returns a boolean? IsSupportContentType or CanSupportContentType
Is vs. Can According to the Microsoft naming convention recommendations , both "Is" and "Can" are OK (and so is "Has") as a prefix for a Boolean. In plain English, "Is" would be used to identify something about the type itself, not what it can do. For example, IsFixed , IsDerivedFrom , IsNullable can all be found in CLR types and methods. In all of these cases, "Is" is followed by an adjective . Meanwhile, "can" more clearly indicates a capability, e.g. CanEdit , CanRead , CanSeek . In each of these cases, can is followed by a verb . Since "Support" is a verb, I think in your case CanSupportContentType is better. Shorter alternative On the other hand, the conventions say the prefix is optional. What's more, it's kind of cheesy to include the argument type in the method name, since a developer can see the type of the argument in intellisense. So you could just name your method Supports and define it like this: public bool Supports(System.Net.Mime.ContentType contentType) ...which is shorter and still clearly communicates the purpose. You'd call it like this: ContentType contentType = new ContentType("text/plain"); var someClass = new MediatorsClass(); bool ok = someClass.Supports(contentType); Or as a compromise maybe this is best: public bool CanSupport(System.Net.Mime.ContentType contentType)
{ "source": [ "https://softwareengineering.stackexchange.com/questions/348649", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/271876/" ] }
348,682
Today's TDWTF article starts with a confession from the author: I didn’t know what the For-Case anti-pattern was until relatively recently, when there were a spate of articles condemning it as an anti-pattern. I’m sure I’ve probably used it, at some point, but I never knew it by name . It’s thought of as a textbook antipattern that generally implies a misunderstanding of for loop, case statements, the problem being solved, or some combination of all three. He then proceeds as if the reader, naturally, knows what the For-Case anti-pattern is without any further explanation being needed. But I don't! I haven't seen the "spate of articles" that Remy talks about, and the only significant reference that I can find on Google (besides Remy's article) is a blog post by Raymond Chen about the for-if antipattern, which is apparently related. He doesn't define the "for-case anti-pattern" either, though. What is this "For-Case anti-pattern" that these guys are talking about, and what makes it an anti-pattern?
The "pattern" was introduced in an earlier Daily WTF article. The basic idea is that you have a for loop with a case inside of it that selects based on the for loop index variable. Assuming the index variable can't be changed inside the loop, (which is not always true, depending on which language you're using,) a bit of analysis demonstrates that the execution is exactly the same as if you removed the for and the case entirely and all the case blocks were simply executed sequentially.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/348682", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/74149/" ] }
348,715
Reading through a scathing article on the downsides of OOP in favour of some other paradigm I've run into an example that I can't find too much fault with. I want to be open to the author's arguments, and although I can theoretically understand their points, there is one example in particular where I'm having a hard time imagining how it would be better implemented in, say, an FP language. From: http://www.smashcompany.com/technology/object-oriented-programming-is-an-expensive-disaster-which-must-end // Consider the case where “SimpleProductManager” is a child of // “ProductManager”: public class SimpleProductManager implements ProductManager { private List products; public List getProducts() { return products; } public void increasePrice(int percentage) { if (products != null) { for (Product product : products) { double newPrice = product.getPrice().doubleValue() * (100 + percentage)/100; product.setPrice(newPrice); } } } public void setProducts(List products) { this.products = products; } } // There are 3 behaviors here: getProducts() increasePrice() setProducts() // Is there any rational reason why these 3 behaviors should be linked to // the fact that in my data hierarchy I want “SimpleProductManager” to be // a child of “ProductManager”? I can not think of any. I do not want the // behavior of my code linked together with my definition of my data-type // hierarchy, and yet in OOP I have no choice: all methods must go inside // of a class, and the class declaration is also where I declare my // data-type hierarchy: public class SimpleProductManager implements ProductManager // This is a disaster. Note that I am not looking for a rebuttal for or against the writer's arguments for "Is there any rational reason why these 3 behaviours should be linked to the data hierarchy?". What I'm specifically asking is how this example would be modelled/programmed in an FP language (actual code, not theory)?
In FP style, Product would be an immutable class, product.setPrice would not mutate a Product object but return a new object instead, and the increasePrice function would be a "standalone" function. Using a similar-looking syntax to yours (C#/Java-like), an equivalent function could look like this: public List increasePrice(List products, int percentage) { if (products != null) { return products.Select(product => { double newPrice = product.getPrice().doubleValue() * (100 + percentage)/100; return product.setPrice(newPrice); }); } else return null; } As you see, the core is not really different here, except that the "boilerplate" code from the contrived OOP example is omitted. However, I don't see this as evidence that OOP leads to bloated code, only as evidence for the fact that if one constructs a sufficiently artificial code example, it is possible to prove anything.
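For a version that actually compiles, here is a C# sketch using an immutable record and a free-standing function; the Product shape (a name plus a decimal price) is an assumption, since neither the article nor the answer shows the class:

```csharp
using System.Collections.Generic;
using System.Linq;

// An immutable value type: "setting" the price produces a new Product.
public record Product(string Name, decimal Price);

public static class Pricing
{
    // A standalone function instead of a method on a manager class:
    // no mutation, no null-juggling, no class hierarchy required.
    public static List<Product> IncreasePrice(IEnumerable<Product> products, int percentage) =>
        products
            .Select(p => p with { Price = p.Price * (100 + percentage) / 100m })
            .ToList();
}
```

Calling Pricing.IncreasePrice(products, 10) returns a new list and leaves every original Product untouched, which is the point being made about FP style; it says nothing either way about whether the OOP version had to be as verbose as the article's.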
{ "source": [ "https://softwareengineering.stackexchange.com/questions/348715", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/271434/" ] }
348,783
I have these two little programs: C #include <stdio.h> int main() { if (5) { printf("true\n"); } else { printf("false\n"); } return 0; } Java class type_system { public static void main(String args[]) { if (5) { System.out.println("true"); } else { System.out.println("false"); } } } which reports the error message: type_system.java:4: error: incompatible types: int cannot be converted to boolean if (5) { ^ 1 error My understanding So far, I understood this example as a demonstration of the different type systems. C is more weakly typed and allows conversion from int to boolean without errors. Java is more strongly typed and fails, because no implicit conversion is allowed. Therefore, my question: Where did I misunderstand things? What I'm not looking for My question is not related to bad coding style. I know it's bad, but I'm interested in why C allows it and Java does not. Therefore, I'm interested in the language's type system, specifically its strength.
1. C and Java are different languages The fact that they behave differently should not be terribly surprising. 2. C is not doing any conversion from int to bool How could it? C didn't even have a true bool type to convert to until 1999 . C was created in the early 1970s, and if was part of it before it was even C, back when it was just a series of modifications to B 1 . if wasn't simply a NOP in C for nearly 30 years. It directly acted on numeric values. The verbiage in the C standard (PDF link), even over a decade after the introduction of bool to C, still specifies the behavior of if (p 148) and ?: (p 100) using the terms "unequal to 0" and "equal to 0" rather than the Boolean terms "true" or "false" or something similar. Conveniently, ... 3. ...numbers just happen to be what the processor's instructions operate on. JZ and JNZ are your basic x86 assembly instructions for conditional branching. The abbreviations are "Jump if Zero" and "Jump if Not Zero". The equivalents for the PDP-11, where C originated, are BEQ ("Branch if EQual") and BNE ("Branch if Not Equal"). These instructions check if the previous operation resulted in a zero or not and jump (or not) accordingly. 4. Java has a much higher emphasis on safety than C ever did 2 And, with safety in mind, they decided that restricting if to boolean s was worth the cost (both of implementing such a restriction and the resulting opportunity costs). 1. B doesn't even have types at all. Assembly languages generally don't, either. Yet B and assembly languages manage to handle branching just fine. 2. In the words of Dennis Ritchie when describing the planned modifications to B that became C (emphasis mine): ...it seemed that a typing scheme was necessary to cope with characters and byte addressing, and to prepare for the coming floating-point hardware. Other issues, particularly type safety and interface checking, did not seem as important then as they became later.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/348783", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/202355/" ] }
348,943
Is it considered an anti-pattern to hardcode SQL into an application like this: public List<int> getPersonIDs() { List<int> listPersonIDs = new List<int>(); using (SqlConnection connection = new SqlConnection( ConfigurationManager.ConnectionStrings["Connection"].ConnectionString)) using (SqlCommand command = new SqlCommand()) { command.CommandText = "select id from Person"; command.Connection = connection; connection.Open(); SqlDataReader datareader = command.ExecuteReader(); while (datareader.Read()) { listPersonIDs.Add(Convert.ToInt32(datareader["ID"])); } } return listPersonIDs; } I would normally have a repository layer etc., but I have excluded it in the code above for simplicity. I recently had some feedback from a colleague who complained that SQL was written in the source code. I did not get a chance to ask why, and he is now away for two weeks (maybe more). I assume that he meant either: use LINQ, or use stored procedures for the SQL. Am I correct? Is it considered an anti-pattern to write SQL in the source code? We are a small team working on this project. The benefit of stored procedures, I think, is that SQL developers can get involved with the development process (writing stored procedures etc.). Edit: The following link talks about hard-coded SQL statements: https://docs.microsoft.com/en-us/sql/odbc/reference/develop-app/hard-coded-sql-statements . Is there any benefit to preparing an SQL statement?
You excluded the crucial part for simplicity. The repository is the abstraction layer for persistence. We separate out persistence into its own layer so that we can change the persistence technology more easily when we need to. Therefore, having SQL outside of the persistence layer completely foils the effort of having a separate persistence layer. As a result: SQL is fine within the persistence layer that is specific to a SQL technology (e.g. SQL is fine in a SQLCustomerRepository but not in a MongoCustomerRepository ). Outside of the persistence layer, SQL breaks your abstraction and thus is considered very bad practice (by me). As for tools like LINQ or JPQL: Those can merely abstract the flavours of SQL out there. Having LINQ code or JPQL queries outside of a repository breaks the persistence abstraction just as much as raw SQL would. Another huge advantage of a separate persistence layer is that it allows you to unit test your business logic code without having to set up a DB server. You get low-memory-profile, fast unit tests with reproducible results across all platforms your language supports. In an MVC+Service architecture this is a simple task of mocking the repository instance, creating some mock data in memory and defining that the repository should return that mock data when a certain getter is called. You can then define test data per unit test and not worry about cleaning up the DB afterwards. Testing writes to the DB is just as simple: verify that the relevant update methods on the persistence layer have been called and assert that the entities were in the correct state when that happened.
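A hedged sketch of what that looks like for the question's code: the SQL itself is unchanged, but it now lives behind a repository interface, so only this one class knows a SQL database is involved at all; the interface and class names are illustrative:

```csharp
using System.Collections.Generic;
using System.Configuration;
using System.Data.SqlClient;

// The abstraction the rest of the code depends on.
public interface IPersonRepository
{
    List<int> GetPersonIds();
}

// The only place that knows the data lives in SQL Server.
public class SqlPersonRepository : IPersonRepository
{
    private readonly string _connectionString =
        ConfigurationManager.ConnectionStrings["Connection"].ConnectionString;

    public List<int> GetPersonIds()
    {
        var ids = new List<int>();
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand("select id from Person", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    ids.Add(reader.GetInt32(0));
                }
            }
        }
        return ids;
    }
}
```

Business code and unit tests depend only on IPersonRepository, so a test can hand in an in-memory fake instead of touching a database, which is exactly the benefit described above; swapping in LINQ, an ORM, or stored procedures later only touches this one class.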
{ "source": [ "https://softwareengineering.stackexchange.com/questions/348943", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/65549/" ] }
349,336
The Kano model of customer satisfaction defines different classes of product features. Among them are Must-be qualities: If these are not implemented the customer will not accept the product. Attractive qualities (delighters): Features that the customer often doesn't even expect in the first place but cause excitement and delight when being discovered. Attractive qualities obviously have a lot of business value. They make people buy a Ferrari for 500.000 when a used Fiat for less than 5.000 would meet all must-be requirements. However, all agile processes I know strongly favor must-be requirements. These always get the highest priority. There doesn't even seem to be a place for attractive qualities in agile. I do believe that agile processes are very useful in software development. But how can they be applied to create delighting high quality software products and not just the bare minimum that barely fulfills the must-be requirements? Addendum: As the first two answers have pointed out, it does make sense to give must-be requirements the highest priority. But do we (and the customer) really always know in advance what the must-be requirements are. I have made the experience a few times that requirements which were given a high priority in the beginning, turned out to be much less important, if not useless, later. Therefore I believe one shouldn't slavishly focus on the must-be requirements.
The formal answer is you misunderstood agile, agile does not dictate requirements, stakeholders do. The core of agile is not to carve your requirements in stone but rather have them emerge as you go, in close contact with your client, benefiting from progressive insights. But that's all theory. What you have witnessed is indeed a common trait of many software production lines that adopted an agile way of working. The trouble is, listening to the customer and swiftly responding to the customer's needs often soon ends up in not doing any thinking about the product or doing any design at all. What used to be a pro-active process fed by vision and expertise can and often will deteriorate into a passive, entirely reactive process fed by the customer's wishes. This will lead to making just the bare necessities that "will do the job". The automobile would never have been invented if manufacturers at the time had been "agile" because all the customers were asking for was a faster horse. This does not make agile bad though. It is a bit like communism. A great idea that hardly ever works out well because people are just people, doing people things. And the method/ideology/religion lulls them into the idea that they are doing well as long as they are going through the motions and/or following the rules. [edit] Slebetman: It is ironic then that agile evolved out of the automotive industry (namely Toyota). Remember the golden rule of automation? "First organize, then automate". If you automate a broken process, the best that could happen is that you accelerate everything that goes wrong. The people at Toyota were not idiots. The typical reason for adopting any new methodology is that things are not going well. Management acknowledges it, but they may not understand the core problems. So they hire this guru that gives a resilient speech about Agile and Scrum. And everyone loves it. For their own reasons. The developers may think "Hey, this might work. We would be more involved with business issues and we could provide input for filling this backlog. This could be an opportunity to make sales and customer service understand what we do, why it is necessary, and we would have them out of our hair while we are transparently burning down what we agreed on." No more "stop what you are doing, this needs to be done now" by some dude you do not want to put off popping up at your desk. Sales, customer service or the owner on the other hand may see it as a way to gain (back) control over this black box of a department that is presumably doing stuff that is necessary. They do not see what is happening in there but they are pretty sure the core of the problem is buried somewhere in there. So they introduce Scrum, install a product owner of their choice and all of a sudden they have all control, all the strings are in their hand. Now what?... Ehrr... The real problem is often that the shop was not organized well in the first place and this has not changed. People have been assigned responsibilities they cannot handle, or perhaps they can but Mr. Boss is constantly interfering and ruining what they did, or (most often in my experience), crucial responsibilities have not been recognized or assigned to anyone at all. Sometimes over time an informal organization will emerge in between the formal lines. This may then partly compensate for the lack of a formal structure. Some people just end up doing what they are good at, whether they have a business card to prove it or not. The blunt introduction of Agile/Scrum may ruin that instantly. Because people are now expected to play by the rules. They feel what they used to do is not appreciated, they get yellow little papers with little stories on them instead, the message will be: "whatever you were doing, no one cared". Needless to say this will not be particularly motivating for those individuals. They will at best start waiting for orders and not take any initiative anymore. So things get worse and the conclusion will be that Agile sucks. Agile does not suck, it is great for maintenance projects and can even be good for new developments if applied carefully but if the wrong people do not understand it or adopt it for the wrong reasons, it can be most destructive.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/349336", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/217956/" ] }
349,451
When reviewing code, I normally try to make specific recommendations on how to resolve the issues. But owing to the limited time one can spend for reviewing, this does not always work well. In these cases I find it more efficient if the developer comes up with a solution himself. Today I reviewed some code and found that a class was obviously not well-designed. It had a number of optional attributes that were only assigned for certain objects and left blank for others. The standard way to resolve this would be to split the class up and use inheritance. However in this specific case this solution seemed to overcomplicate things. I was not involved in the development of this software myself and am not familiar with all modules. Therefore I did not feel knowledgable enough to make a specific decision. Another typical case that I experienced many times is that I find an obviously meaningless or even misleading function, class or variable name but am not able to come up with a good name myself. So generally, as a reviewer, is it fine to say "this code is flawed because..., do it differently" or do you have to come up with a specific solution?
As a reviewer, your job is to check if a piece of code (or a document) meets certain objectives that have been agreed upon before the review. Some of these objectives will typically involve a judgement call whether the objective has been fulfilled or not. For example, the objective that code must be maintainable typically requires a judgement call. As a reviewer, it is your job to point out where the objectives have not been met and it is the job of the author to make sure that his work actually meets the objectives. In this way, it is not your job to tell how the corrections must be made. On the other hand, just telling the author "this is flawed. Fix it" does usually not lead to a positive atmosphere in the team. For a positive atmosphere, it is good to at least indicate why something is flawed in your eyes and to provide a better alternative if you have one. Besides that, if you are reviewing something that looks "wrong" but you don't really have a better alternative, then you could also leave a comment along the lines of "This code/design doesn't sit well with me, but I don't have a clear alternative. Can we discuss this?" and then try to get something better together.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/349451", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/217956/" ] }
349,546
I recently started to dive into CQRS / ES because I might need to apply it at work. It seems very promising in our case, as it would solve a lot of problems. I sketched my rough understanding on how an ES / CQRS app should look like contextualized to a simplified banking use case (withdrawing money). Just to sum up, if person A withdraws some money: a command is issued command is handed over for validation / verification an event is pushed to an event store if the validation succeeds an aggregator dequeues the event to apply modifications on the aggregate From what I understood, the event log is the source of truth, as it is the log of FACTS, we can then derive any projection out of it. Now, what I don't understand, in this grand scheme of things, is what happens in this case: rule: a balance cannot be negative person A has a balance of 100e person A issues a WithdrawCommand of 100e validation passes and MoneyWithdrewEvent of 100e event is emitted in the meantime, person A issues another WithdrawCommand of 100e the first MoneyWithdrewEvent did not get aggregated yet therefore validation passes, because the validation check against the aggregate (that has not been updated yet) MoneyWithdrewEvent of 100e is emitted another time ==> We are in an inconsistent state of a balance being at -100e and the log contains 2 MoneyWithdrewEvent As I understand there are several strategies to cope with this problem: a) put the aggregate version id along with the event in the event store so if there is a version mismatch upon modification, nothing happens b) use some locking strategies, implying that the verification layer has to somehow create one Questions related to the strategies: a) In this case, the event log is not the source of truth anymore, how to deal with it ? Also, we returned to the client OK whereas it was totally wrong to allow the withdrawal, is it better in this case to use locks ? b) Locks == deadlocks, do you have any insights about the best practices ? Overall, is my understanding correct on how to handle concurrency ? Note: I understand that the same person withdrawing two times money in such a short time window is impossible, but I took a simple example, not to get lost into details
I sketched my rough understanding on how an ES / CQRS app should look like contextualized to a simplified banking use case (withdrawing money). This is the perfect example of an event sourced application. Let's start. Every time a command is processed or retried (you will understand, be patient) the following steps are performed: the command reaches a command handler, i.e. a service in the Application layer . the command handler identifies the Aggregate and loads it from the repository (in this case the loading is performed by new -ing an Aggregate instance, fetching all the previously emitted events of this aggregate and re-applying them to the Aggregate itself; the Aggregate version is stored for later use; after the events are applied the Aggregate is in its final state - i.e. the current account balance is computed as a number) the command handler calls the appropriate method on the Aggregate , like Account::withdrawMoney(100) and collects the yielded events, i.e. MoneyWithdrewEvent(AccountId, 100) ; if there is not enough money in the account (balance < 100) then an Exception is raised and all is aborted; otherwise, the next step is performed. the command handler tries to persist the Aggregate to the repository (in this case the repository is the Event Store ); it does so by appending the new events to the Event stream if and only if the version of the Aggregate is still the one it was when the Aggregate was loaded. If the version is not the same, then the command is retried - go to step 1 . If the version is the same, then the events are appended to the Event stream and the client is provided with the Success status. This version checking is called optimistic locking and is a general locking mechanism. Another mechanism is pessimistic locking, where other writes are blocked (as in, not started) until the current one completes. The term Event stream is an abstraction around all the events that were emitted by the same Aggregate. You should understand that the Event store is just another kind of persistence where all the changes to an Aggregate are stored, not just the final state. a) In this case, the event log is not the source of truth anymore, how to deal with it ? Also, we returned to the client OK whereas it was totally wrong to allow the withdrawal, is it better in this case to use locks ? b) Locks == deadlocks, do you have any insights about the best practices ? The Event store is always the source of truth. By using optimistic locking you have no locks, just command retrying. Anyways, Locks != deadlocks
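A rough Python sketch of steps 2 to 4 with optimistic locking, using an in-memory event store (all names and event shapes are hypothetical, for illustration only):

```python
class ConcurrencyError(Exception):
    pass

class EventStore:
    """In-memory event store: one append-only stream per aggregate."""
    def __init__(self):
        self._streams: dict[str, list[dict]] = {}

    def load(self, aggregate_id: str) -> tuple[list[dict], int]:
        events = self._streams.get(aggregate_id, [])
        return list(events), len(events)            # events plus the expected version

    def append(self, aggregate_id: str, expected_version: int, new_events: list[dict]):
        stream = self._streams.setdefault(aggregate_id, [])
        if len(stream) != expected_version:         # someone else wrote first
            raise ConcurrencyError("stream moved, retry the command")
        stream.extend(new_events)

def balance(events: list[dict]) -> int:
    return sum(e["amount"] if e["type"] == "Deposited" else -e["amount"] for e in events)

def withdraw(store: EventStore, account_id: str, amount: int, retries: int = 3):
    for _ in range(retries):
        events, version = store.load(account_id)            # step 2: rebuild the aggregate
        if balance(events) < amount:
            raise ValueError("insufficient funds")          # step 3: invariant check
        try:
            store.append(account_id, version,
                         [{"type": "Withdrew", "amount": amount}])   # step 4: persist iff version unchanged
            return
        except ConcurrencyError:
            continue                                        # optimistic retry, back to step 2
    raise ConcurrencyError("gave up after retries")

store = EventStore()
store.append("acct-1", 0, [{"type": "Deposited", "amount": 100}])
withdraw(store, "acct-1", 100)   # succeeds once; a concurrent second withdrawal would retry and then fail the balance check
```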
{ "source": [ "https://softwareengineering.stackexchange.com/questions/349546", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/273354/" ] }
349,647
As a web developer I have very little understanding of binary data. If I take the sentence "Hello world.", convert it to binary, and store it as binary in an SQL database, it seems like the 1s and 0s would take up more space than the letters. It seems to me like using letters would sort of be like using compression, where one symbol stands for multiple. But is that really how it works? Does storing plain text data take up less space than storing the equivalent message in binary?
Plaintext is binary. When you write an H to a hard drive, the write head doesn't carve two vertical lines and a horizontal line into the platter, it magnetically encodes the bits 01001000 1 into the platter. From there, it should be obvious that storing plain text data takes up exactly the same amount of space as storing binary data. But plaintext is just one 2 particular binary format Plaintext can be reversibly transformed into other binary formats. One common transformation is compression which usually results in a more compact representation, meaning fewer bits used to represent the same information. Depending on what you're using the plaintext to represent, you may be able to use different binary formats to represent the same information. This may use more space, it may use less. For example, the numbers 5 and 1234567 could be represented in plaintext using digit characters, resulting in these bit sequences on disk 3 : 00110101 00000000 00110001 00110010 00110011 00110100 00110101 00110110 00110111 00000000 Alternatively, you could use 32-bit two's complement : 00000000 00000000 00000000 00000101 00000000 00010010 11010110 10000111 Which is a less compact representation of 5 , but more compact representation of 1234567 . And there is a literally infinite number of other representations which would have varying levels of compactness, and flexibility, although, in practice far less than that many representations are actually used. 1 Assuming UTF-8. The exact sequence of bits for a character depends on which specific encoding you're using. 2 Or really, several formats, given the various encodings . 3 If you're wondering what those eight zeros on the ends are, well, you need some way of knowing how long the data is. The options basically boil down to a marker (I used this, via a null byte), space dedicated to storing the length (Pascal used a byte to store the length of a string), or a fixed size (used in the subsequent two's complement example).
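A small Python sketch of the size comparison described above, using only the standard library:

```python
import struct

for n in (5, 1234567):
    as_text = str(n).encode("utf-8")       # digit characters, one byte each in UTF-8
    as_int32 = struct.pack(">i", n)        # 32-bit two's complement, always 4 bytes
    print(n, "text:", len(as_text), "bytes", list(as_text),
          "| int32:", len(as_int32), "bytes", list(as_int32))

# 5       -> text: 1 byte   | int32: 4 bytes   (text wins)
# 1234567 -> text: 7 bytes  | int32: 4 bytes   (fixed-width binary wins)

print(list("H".encode("utf-8")))   # [72] == 0b01001000, the bit pattern mentioned above
```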
{ "source": [ "https://softwareengineering.stackexchange.com/questions/349647", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/262023/" ] }
349,660
I'm working with an SQL database right now, and this has always made me curious, but Google searches don't turn much up: Why the strict data types? I understand why you'd have a few different data types, for example like how differentiating between binary and plain text data is important . Rather than storing the 1s and 0s of binary data as plaintext, I now understand that it's more efficient to store the binary data as its own format. But what I don't understand is what the benefit is of having so many different data types: Why mediumtext , longtext , and text ? Why decimal , float , and int ? etc. What is the benefit of telling the database "There'll only be 256 bytes of plain text data in entries to this column." or "This column can have text entries of up to 16,777,215 bytes"? Is it a performance benefit? If so, why does knowing the size of the entry before hand help performance? Or rather is it something else altogether?
SQL is a statically-typed language. This means you have to know what type a variable (or field, in this case) is before you can use it. This is the opposite of dynamically-typed languages, where that is not necessarily the case. At its core, SQL is designed to define data ( DDL ) and access data ( DML ) in a relational database engine. Static typing presents several benefits over dynamic typing to this type of system. Indexes , used for quickly accessing specific records, work really well when the size is fixed. Consider a query that utilizes an index, possibly with multiple fields: if the data types and sizes are known ahead of time, I can very quickly compare my predicate (WHERE clause or JOIN criteria) against values in the index and find the desired records faster. Consider two integer values. In a dynamic type system, they may be of variable size (think Java BigInteger , or Python's built-in arbitrary-precision integers). If I want to compare the integers, I need to know their bit length first. This is an aspect of integer comparison that is largely hidden by modern languages, but is very real at the CPU level. If the sizes are fixed and known ahead of time, an entire step is removed from the process. Again, databases are supposed to be able to process zillions of transactions as quickly as possible. Speed is king. SQL was designed back in the 1970s. In the earlier days of microcomputing, memory was at a premium. Limiting data helped keep storage requirements in check. If an integer never grows past one byte, why allocate more storage for it? That is wasted space in the era of limited memory. Even in modern times, those extra wasted bytes can add up and kill the performance of a CPU's cache. Remember, these are database engines that may be servicing hundreds of transactions per second, not just your little development environment. Along the lines of limited storage, it is helpful to be able to fit a single record in a single page in memory. Once you go over one page, there are more page misses and more slow memory access. Newer engines have optimizations to make this less of an issue, but it is still there. By sizing data appropriately, you can mitigate this risk. Moreso in modern times, SQL is used to plug in to other languages via ORM or ODBC or some other layer. Some of these languages have rules about requiring strong, static types. It is best to conform to the more strict requirements, as dynamically-typed languages can deal with static types easier than the other way around. SQL supports static typing because database engines need it for performance, as shown above. It is interesting to note that there are implementations of SQL that are not strongly-typed. SQLite is probably the most popular example of such a relational database engine. Then again, it is designed for single-threaded use on a single system, so the performance concerns may not be as pronounced as in e.g. an enterprise Oracle database servicing millions of requests per minute.
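As a rough illustration of why fixed sizes help lookups, here is a Python sketch using fixed-width records (the record layout is hypothetical):

```python
import struct

# Fixed-size records: (id: int32, balance: int32) -> every record is exactly 8 bytes.
RECORD = struct.Struct(">ii")
table = b"".join(RECORD.pack(i, i * 10) for i in range(1000))

def read_row(i: int) -> tuple[int, int]:
    # Because the size is known up front, row i lives at a computable offset.
    return RECORD.unpack_from(table, i * RECORD.size)

print(read_row(42))   # (42, 420) -- no scanning, just offset arithmetic

# Variable-length text rows such as b"42,420\n" would force a scan to find row i,
# which is one reason a database wants to know column types and sizes ahead of time.
```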
{ "source": [ "https://softwareengineering.stackexchange.com/questions/349660", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/262023/" ] }
349,740
At one point or another you might come over functions with a lot of arguments. Sometimes it makes sense to combine some of the arguments into super-arguments. I've often done this with dicts, but now I'm looking at better ways of doing it. I'd like to turn ... def do_something(ax, ay, az, bu, bv, c): # Do something ... into ... def do_something(a, b, c): # Do something ... where a and b contain their subvariations. One way to do this is to do: A = namedtuple('A', 'x, y, z') a = A(ax, ay, az) B = namedtuple('B', 'u, v') b = B(bu, bv) However, this seems simpler: a = SimpleNamespace(x=ax, y=ay, z=az) b = SimpleNamespace(u=bu, v=bv) What is the drawback? The fact that a and b aren't well typed? They aren't A and B objects? (Btw, don't worry about the variable names. I don't normally use as short variable names.)
SimpleNamespace is basically just a nice facade on top of a dictionary. It allows you to use properties instead of index keys. This is nice as it is super flexible and easy to manipulate. The downside of that flexibility is that it doesn't provide any structure. There is nothing to stop someone from calling SimpleNamespace(x=ax, y=ay) (and del a.z at some point later). If this instance gets passed to your function, an exception occurs when you try to access the field. In contrast, namedtuple lets you create a structured type. The type will have a name and it will know what fields it is supposed to have. You won't be able to make an instance without each of those fields and they can't be removed later. Additionally, the instance is immutable, so you will know that the value in a.x will always be the same. It's up to you to decide if you need the flexibility that SimpleNamespace gives you, or if you prefer to have the structure and guarantees provided by namedtuple .
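A short Python demonstration of the difference described above (variable names are arbitrary):

```python
from collections import namedtuple
from types import SimpleNamespace

A = namedtuple("A", "x y z")

a1 = A(1, 2, 3)            # must supply every field
# A(1, 2)                  # would raise TypeError: missing required argument 'z'
# a1.x = 9                 # would raise AttributeError: can't set attribute

a2 = SimpleNamespace(x=1, y=2)   # nothing forces a z to exist
a2.x = 9                         # freely mutable
del a2.y                         # fields can even disappear later

def use(point):
    return point.x + point.z     # works for a1; fails for a2

print(use(a1))     # 4
# print(use(a2))   # AttributeError: 'types.SimpleNamespace' object has no attribute 'z'
```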
{ "source": [ "https://softwareengineering.stackexchange.com/questions/349740", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/93760/" ] }
349,808
Background I'm working in a team that's looking to implement zero-downtime deployments. We're planning on using a blue/green deployment strategy in order to achieve this. One of the things I'm realising in doing the research is how complicated it becomes to make database changes. A simple operation like renaming a column can take 3 full release cycles until it's completed! It seems to me that having the full rollout of a change take multiple release cycles introduces a lot of potential for human error. In the linked article it shows that code changes are necessary for 2 releases and a database migration is needed for 3 releases. What I'm looking for Currently, if we want to remember to do something, we can create a ticket in our issue management system, which creates clutter and also might get moved to a later sprint or the backlog by management; or we can create a TODO comment, which will probably be forgotten about completely. What I'm looking for is a way that a TODO comment can have a deadline against it, and our Continuous Integration system (current undecided which we'll use) would reject the build if this deadline was expired. For example, if we rename a column we could create the initial migration for it, and then two TODO comments to ensure that the remaining two migrations are created: // TODO by v55: Create migration to move constraints to new column, remove references to old column in app // TODO by v56: Create migration to drop old column This seems fairly simple to implement, but I'm wondering if something like this already exists, because I don't want to re-invent the wheel. Additional thoughts I feel like I might be suffering from XY problem here, given that rolling deployments and blue/green deployments are considered a best-practice it seems strange that I can't find a solution for making database updates less painful. If you think I'm looking into the wrong thing entirely, please let me know in a comment! That said, the database example I gave is just one example, and I think TODO comments with due dates would be useful in other situations too, so even if I'm approaching this specific situation all wrong I'd really like to answers to my actual question too. Thanks! EDIT: I just thought of another situation where this could be helpful. If you use Feature Toggles to turn on parts of your app when they are ready, you have to be careful to clean them up, otherwise you may end up with Toggle Debt . Comments with deadlines could be a good way of remembering this.
This question is really two questions in one. Todo comments Of all the ways to track action items, this is the worst. TODO comments are good during active work or as a way of suggestion to a maintainer, "here is something that could maybe be improved on in the future". But if you rely on TODO comments for getting work done, you're doomed to fail. What to do about it TODO comments are basically technical debt, so they should be handled like any other technical debt. Either tackle them right away, if you have time, or put them in the backlog so they can be tracked and prioritized. Generally speaking, and this is totally opinionated and open for debate, TODO comments could be considered a code smell. If a TODO comment makes it as far as being checked into version control, you have to ask yourself, are you actually going to follow through on it right now? If not, that's ok. Just be honest with yourself and put it in the backlog. How you manage this backlog comes down to business process, company politics, and perhaps some personal autonomy. But you still need a tracked and prioritized backlog to make sure it happens. Database changes Yes, database changes are tricky with a zero-downtime policy. Some tricks to help make it less painful: Post-deploy process Create a post-deploy process that runs as part of the same release. However you want it to work. On the last system I worked on, I designed a 4-phase deployment: preapp database scripts web apps postapp database scripts maintenance window database scripts The idea was that wherever possible, we would put as much of the database changes into preapp as possible. Postapp was reserved for the unusual cases where we needed to make incompatible schema changes. In those cases, preapp would make enough of a change to make the new application code compatible (maybe creating a temporary view for compatibility), and postapp would cleanup any such temporary artifacts. Maintenance window phase was reserved for changes which truly required downtime or where the risk or cost of a live deployment was not worth it. For example, scripts that change massive amounts of data may need to lock an entire table. Deploy frequently If you deploy new releases frequently enough, you can reach a point where carrying a change across 2 or 3 releases is trivial. Long release cycles amplify the cost of database changes.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/349808", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/82736/" ] }
349,809
I am tasked with implementing a web-based gambling system that's operated on site. The requirements given to me by the company co-founder and designer for user login give red flags in regards to security, which surprises me because all of the other roles are required to be secure (cashier, site owner, admin, etc. all have usernames and passwords). Here is how users are supposed to log into this system: User goes to cashier, registers, and pays money for credits. Cashier creates user, gives their account credits, and gives them their user ID. The user ID is only seven characters. The user ID is only digits. The user ID is shown in plain text on the screen to the cashier. The user ID is printed on a receipt given to the user. There is no password. The user ID alone is used to log in. User goes to a terminal at a site, logs in with user ID, and plays games. The user ID is made to appear as a password because, when the player goes to a terminal to log in, they are presented on screen with a number pad and the digits they enter are masked. What are all of the ways this system can be compromised? I suggested to him that 7 digits was insecure as it could be guessed, but he said that users don't want to have to enter long numbers to log in and that there could be max 2,000 users per site.
This question is really two questions in one. Todo comments Of all the ways to track action items, this is the worst. TODO comments are good during active work or as a way of suggestion to a maintainer, "here is something that could maybe be improved on in the future". But if you rely on TODO comments for getting work done, you're doomed to fail. What to do about it TODO comments are basically technical debt, so they should be handled like any other technical debt. Either tackle them right away, if you have time, or put them in the backlog so they can be tracked and prioritized. Generally speaking, and this is totally opinionated and open for debate, TODO comments could be considered a code smell. If a TODO comment makes it as far as being checked into version control, you have to ask yourself, are you actually going to follow through on it right now? If not, that's ok. Just be honest with yourself and put it in the backlog. How you manage this backlog comes down to business process, company politics, and perhaps some personal autonomy. But you still need a tracked and prioritized backlog to make sure it happens. Database changes Yes, database changes are tricky with a zero-downtime policy. Some tricks to help make it less painful: Post-deploy process Create a post-deploy process that runs as part of the same release. However you want it to work. On the last system I worked on, I designed a 4-phase deployment: preapp database scripts web apps postapp database scripts maintenance window database scripts The idea was that wherever possible, we would put as much of the database changes into preapp as possible. Postapp was reserved for the unusual cases where we needed to make incompatible schema changes. In those cases, preapp would make enough of a change to make the new application code compatible (maybe creating a temporary view for compatibility), and postapp would cleanup any such temporary artifacts. Maintenance window phase was reserved for changes which truly required downtime or where the risk or cost of a live deployment was not worth it. For example, scripts that change massive amounts of data may need to lock an entire table. Deploy frequently If you deploy new releases frequently enough, you can reach a point where carrying a change across 2 or 3 releases is trivial. Long release cycles amplify the cost of database changes.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/349809", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/272432/" ] }
349,942
For example, I have seen some code that creates a fragment like this: Fragment myFragment=new MyFragment(); which declares a variable as Fragment instead of MyFragment , where MyFragment is a child class of Fragment. I'm not satisfied with this line of code because I think this code should be: MyFragment myFragment=new MyFragment(); which is more specific, is that true? Or, to generalize the question, is it bad practice to use: Parent x=new Child(); instead of Child x=new Child(); if we can change the former into the latter without a compile error?
It depends on the context, but I would argue you should declare the most abstract type possible. That way your code will be as general as possible and not depend on irrelevant details. An example would be having a LinkedList and ArrayList which both descend from List . If the code would work equally well with any kind of list then there is no reason to arbitrarily restrict it to one of the subclasses.
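The same principle can be sketched in Python with type hints (a loose analogue of the Java advice above, function names hypothetical):

```python
from typing import Sequence

def total_concrete(xs: list[int]) -> int:
    # Tied to one concrete type: a type checker flags calls that pass a tuple or a range.
    return sum(xs)

def total_general(xs: Sequence[int]) -> int:
    # Depends only on the abstraction it actually needs.
    return sum(xs)

print(total_general([1, 2, 3]))   # list
print(total_general((1, 2, 3)))   # tuple
print(total_general(range(4)))    # range
```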
{ "source": [ "https://softwareengineering.stackexchange.com/questions/349942", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/196142/" ] }
349,972
I am trying to understand, at a high-level, how single threads run across multiple cores. Below is my best understanding. I do not believe it is correct though. Based on my reading of Hyper-threading , it seems the OS organizes the instructions of all threads in such a way that they are not waiting on each other. Then the front-end of the CPU further organizes those instructions by distributing one thread to each core, and distributes independent instructions from each thread among any open cycles. So if there is only a single thread, then the OS will not do any optimization. However, the front-end of the CPU will distribute independent instruction sets among each core. According to https://stackoverflow.com/a/15936270 , a specific programming language may create more or less threads, but it is irrelevant when determining what to do with those threads. The OS and CPU handle this, so this happens regardless of the programming language used. Just to clarify, I am asking about a single thread run across multiple cores, not about running multiple threads on a single core. What is wrong with my summary? Where and how is a thread's instructions split up among multiple cores? Does the programming language matter? I know this is a broad subject; I am hoping for a high-level understanding of it.
The operating system offers time slices of CPU to threads that are eligible to run. If there is only one core, then the operating system schedules the most eligible thread to run on that core for a time slice. After a time slice is completed, or when the running thread blocks on IO, or when the processor is interrupted by external events, the operating system reevaluates what thread to run next (and it could choose the same thread again or a different one). Eligibility to run consists of variations on fairness and priority and readiness, and by this method various threads get time slices, some more than others. If there are multiple cores, N, then the operating system schedules the most eligible N threads to run on the cores. Processor Affinity is an efficiency consideration. Each time a CPU runs a different thread than before, it tends to slow down a bit because its cache is warm for the previous thread, but cold to the new one. Thus, running the same thread on the same processor over numerous time slices is an efficiency advantage. However, the operating system is free to offer one thread time-slices on different CPUs, and it could rotate through all the CPUs on different time slices. It cannot, however, as @gnasher729 says , run one thread on multiple CPUs simultaneously. Hyperthreading is a method in hardware by which a single enhanced CPU core can support execution of two or more different threads simultaneously. (Such a CPU can offer additional threads at lower cost in silicon real-estate than additional full cores.) This enhanced CPU core needs to support additional state for the other threads, such as CPU register values, and also has coordination state & behavior that enables sharing of functional units within that CPU without conflating the threads. While hyperthreading is technically challenging from a hardware perspective, from the programmer's perspective the execution model is merely that of additional CPU cores rather than anything more complex. So, the operating system sees additional CPU cores, though there are some new processor affinity issues as several hyperthreaded threads are sharing one CPU core's cache architecture. We might naively think that two threads running on a hyperthreaded core each run half as fast as they would each with their own full core. But this is not necessarily the case, since a single thread's execution is full of slack cycles, and some amount of them can be used by the other hyperthreaded thread. Further, even during non-slack cycles, one thread may be using different functional units than the other so simultaneous execution can occur. The enhanced CPU for hyperthreading may have a few more of certain heavily used functional units specifically to support that.
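A deliberately oversimplified Python toy model of "the N most eligible threads get the N cores each time slice" (thread names and the eligibility formula are made up for illustration; real schedulers are far more involved):

```python
def schedule(threads, cores, slices):
    """Each time slice, the N most eligible runnable threads get the N cores.
    A single thread is only ever on one core per slice; it never runs on two at once."""
    for t in range(slices):
        runnable = [th for th in threads if th["ready"]]
        # Eligibility blends priority with how long the thread has waited (a crude fairness boost).
        runnable.sort(key=lambda th: -(th["priority"] + th["waited"]))
        chosen = runnable[:cores]
        for core, th in enumerate(chosen):
            print(f"slice {t}: core {core} runs {th['name']}")
        for th in threads:
            th["waited"] = 0 if th in chosen else th["waited"] + 1

threads = [
    {"name": "T1", "priority": 3, "ready": True, "waited": 0},
    {"name": "T2", "priority": 2, "ready": True, "waited": 0},
    {"name": "T3", "priority": 1, "ready": True, "waited": 0},
]
schedule(threads, cores=2, slices=3)   # T3 eventually gets a core once it has waited long enough
```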
{ "source": [ "https://softwareengineering.stackexchange.com/questions/349972", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/112298/" ] }
350,006
I've been running into code (new code) that uses what I call 'Parallel Arrays' or Lists. Meaning there are 2 arrays that contain related data and are linked by their position (index) in the array. I consider this confusing and prone to all sorts of errors. The solution I normally propose is to create an object called Company with the fields CompanyId and CompanyName. An very real example: List<string> companyNames; List<int> companyIds; //...They get populated somewhere and we then process for(var i=0; i<companyNames.Count; i++) { UpdateCompanyName(companyIds[i],companyNames[i]); } Are these parallel arrays considered bad practice ?
Here are some reasons why someone might use parallel arrays: In a language that does not support classes or structs To avoid thread locking when individual threads are only modifying one of the columns When the persistence method forces these things to be stored separately and you are reconstituting them. They can consume less memory if the structures are padded. (not applicable for these data types in C#) When parts of the data need to be kept close together to make efficient use of the CPU cache (would not be of help in the above code). Use of Single Instruction Multiple Data (SIMD) op codes. (not applicable for this code, or strings at all) I do not see any compelling reason to do this in this case... and there are likely better options for all of the above, or they are not so useful in a high-level language.
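For comparison, here is a small Python sketch of the refactoring usually preferred over parallel arrays (a Python analogue of the question's C# example, with hypothetical data):

```python
from dataclasses import dataclass

# Parallel lists linked only by index: easy to let them drift out of sync.
company_ids = [10, 11, 12]
company_names = ["Acme", "Globex", "Initech"]
for i in range(len(company_ids)):
    print(company_ids[i], company_names[i])

# One object per row keeps the related fields together.
@dataclass
class Company:
    company_id: int
    name: str

companies = [Company(i, n) for i, n in zip(company_ids, company_names)]
for c in companies:
    print(c.company_id, c.name)
```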
{ "source": [ "https://softwareengineering.stackexchange.com/questions/350006", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/147597/" ] }
350,092
I have read about authentications and become confusing about types classification. Let's start from Cookie-based authentication, If I understand it right, the key point is that all data, needed for user authentication, is stored in cookies. And this is my first confusion: in cookies we may store session id and so it becomes a Session-based authentication? claims, and so should it be called as a Claims-based authentication? I have found that some people even store JWT token in cookies, but this seems like a custom implementation of own auth flow... Now let's switch to Claims-based authentication. The main element is the claim and the collection of claims could use as container cookies (as discussed above) token (JWT as the example). From the other side, when we are talking about the token, it may contain any kind of information... Session Id for example... So what have I missed? Why don't people define something like Cookie-Session-based or Token-Claims-based authentications when talking about authentication types?
I agree that the naming of the different concepts is confusing. When talking about authentication in a web context, there are several aspects to consider. What information does the client send when authenticating? A session id . This means that the server has a session storage which contains the active sessions. Sessions are stateful on the server side. A set of claims . Claims contain information on what operations the client may perform. The server does not keep track of each authenticated client, but trusts the claims. Claims are typically stateless on the server side. How does the client send the authentication information? Cookies . Browsers send cookies automatically with each request, after the cookie has been set. Cookies are vulnerable to XSRF. Other headers . Typically, the Authorization header is used for this. These headers are not sent by the browser automatically, but have to be set by the client. This is vulnerable to XSS. Request Url . The authentication information is included in the URL. This is not commonly used. What is the format of the authentication information? Plain, unsigned text . This can be used for session ids. A session id is generally not guessable by the client, so the server can trust that the client has not forged it. Json Web Token . JWTs are cryptographically signed and contain expiry information. The client can usually decode the token, but cannot alter it without the server noticing. Any other signed format . Same as JWTs. The important thing is the cryptographic signature, which prevents the client from altering the data. Bonus: How does the client store the information locally? Cookies . This is of course the case when using cookies to transmit the information. But Cookies can also be used as just a client side storage mechanism. This requires the cookie to be readable from scripts to be useful. For example, a client could read the cookie with JavaScript and send the information with an Authorization-Header. Local Storage . This is often the only possible method, if cookies are unavailable. Requires management with JavaScript. What do people mean when they say... "Cookie based authentication" . I find that this usually means "Session id, sent by cookie, possibly as plain text." "Token based authentication" . Usually this means "Claims, sent using the authentication header, encoded as a Json Web Token." "Claims based authentication" . Could be anything but a session id.
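A minimal Python sketch contrasting a stateful session id with stateless signed claims (this is a stripped-down, JWT-like signing scheme built only for illustration, not a real JWT implementation; names and the secret are hypothetical):

```python
import base64, hashlib, hmac, json, secrets

# 1) Session id: a plain, unguessable token; the server keeps the state.
sessions = {}                                   # server-side session storage
def start_session(user):
    sid = secrets.token_hex(16)
    sessions[sid] = {"user": user}
    return sid

# 2) Claims: the data itself travels to the client, protected by a signature.
SECRET = b"server-secret"
def issue_claims(claims: dict) -> str:
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def verify_claims(token: str) -> dict:
    body, sig = token.split(".")
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("tampered token")
    return json.loads(base64.urlsafe_b64decode(body))

sid = start_session("alice")
print(sessions[sid])                                    # server lookup -> stateful
token = issue_claims({"sub": "alice", "role": "admin"})
print(verify_claims(token))                             # no lookup needed -> stateless
```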
{ "source": [ "https://softwareengineering.stackexchange.com/questions/350092", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/225156/" ] }
350,162
What is the major benefit of having pure POCO models? I get that Models should be clean and simple, but I tend to like to keep the maintenance of child objects within the model classes. For example if I have a ClassA and ClassB defined as follows: public class ClassA { public string MyProp { get; set; } public IEnumerable<ClassB> Children { get; } public void AddChild(ClassB newChild) { /*... */ } public void RemoveChild(ClassB child) { /* ... */ } } public class ClassB { public string SomeProp { get; set; } } Is there something inherently wrong with having the add and remove methods? Should I instead just expose the list and allow client code to add whatever passing the responsibility of simple data validations like not null, and not duplicate on to another class? Any help is appreciated.Thanks.
Your two questions are unrelated. What is the benefit to having pure POCO models? A pure POCO is not dependent on some enterprisy framework, convention, [] thingy, or intimately connected to some object that is similarly dependent. In other words, the world can completely change around a POCO and it just keeps on doing what it does without caring. You can't break it by updating a framework, by moving it into a new system, or looking at it funny. It just keeps working. The only dependencies are the things it explicitly asks for. POCO is nothing more than the POJO idea. The J was changed to a C because people look at you funny when you explain a concept that has Java in its name if those people are using C#. The idea is not dependent on language. They could have simply called it Plain Old Objects. But who wants to brag about using POO? I'll let Fowler explain the origins: The term was coined while Rebecca Parsons, Josh MacKenzie and I were preparing for a talk at a conference in September 2000. In the talk we were pointing out the many benefits of encoding business logic into regular java objects rather than using Entity Beans. We wondered why people were so against using regular objects in their systems and concluded that it was because simple objects lacked a fancy name. So we gave them one, and it's caught on very nicely. martinfowler.com : POJO As for your other question: Is there something inherently wrong with having the add and remove methods? I don't like A directly knowing about B . But that's a DIP thing not a POJO thing.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/350162", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/271075/" ] }
350,183
I'm making a physics engine and it's becoming quite hard to keep track of the whole thing. Often when I get back to my code after a break I just don't remember why that's not working. Most of the issues aren't simple programming mistakes but design flaws in my physics engine. That's why I should just finish designing it before programming it. However, I need a way to write on paper the whole design of my physics engine. Else, I will just forget it tomorrow and be lost again. A UML class diagram is not appropriate at all for the design of a physics engine. I don't really care about the classes but the process. I do not see the Business process diagram as really useful because modelling a single step (frame) of my process won't help me understand the final behavior of my engine over many steps. So, what kind of diagram should I use to help me keep track of the process? What kind of diagram do professionals use to make a physics engine?
TO DO lists are wonderful things. I'm not talking about // #TODO: blah blah comments. I mean get an honest to God notebook. You never know when you'll remember something important to do. A notebook will quietly sit there and let you think without complaining about how your handwriting wont compile. Some of my best ideas happen in the bathroom (yes I do own a water proof notebook but you don't have to go that far). You can get pocket sized ones that are sewn (not glued) so they don't fall apart in your pocket. Didn't manage to get a fancy one with a built in book mark? Tape, scissors, ribbon and no one will ever know. When an idea hits just jot it down. Draw little boxes next to each idea and you can easily mark it as done. Put a box at the top of the page and you know when the page is done. What sequential access isn't good enough for you? Yeah they make pocket binders as well. This all might seem like a bit much but it's better than drowning in post it notes or trying to capture everything in Jira. Don't leave things half implemented Keep your improvements small and achievable. Don't start anything that can't be finished in one sitting. If it's to big for that then break it down into smaller steps. Always leave code that compiles and passes it's tests. Oh and don't leave passing tests you've never seen fail. Making a test both pass and fail is how you test the test. Stop thinking you need the whole design on paper What you need to do is capture your evolving plan. You don't know how things are going to look when you're done so stop pretending you do. Capture what you have figured out as well as you can. Use a napkin and crayon if you have to. Few people understand 90% of UML anyway. Use whatever way you can to show what you need to show. I focus on showing my interfaces and what knows about what. Write notes when you stop coding The moment you take your fingers off the keys is the last time you will understand what you've done (and what you have planned) as well as you do now. Capture that understanding as best you can in some notes. If all you have is comments then you're still tied to the computer and likely to leave a puddle in the chair. Again, having a notebook is an awesome thing. This way you can land your brain gracefully, save your bladder, and take off again later without resorting to caffeine and teeth gritting.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/350183", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/241858/" ] }
350,187
Generally I use private methods to encapsulate functionality that is reused in multiple places in the class. But sometimes I have a large public method that could be broken up into smaller steps, each in its own private method. This would make the public method shorter, but I'm worried that forcing anyone who reads the method to jump around to different private methods will damage readability. Is there a consensus on this? Is it better to have long public methods, or break them up into smaller pieces even if each piece is not reusable?
No, this is not a bad style. In fact it is a very good style. Private functions need not exist simply because of reusability. That is certainly one good reason to create them, but there is another: decomposition. Consider a function that does too much. It is a hundred lines long, and impossible to reason about. If you split this function into smaller pieces, it still "does" as much work as before, but in smaller pieces. It calls other functions which should have descriptive names. The main function reads almost like a book: do A, then do B, then do C, etc. The functions it calls may only be called in one place, but now they are smaller. Any particular function is necessarily sandboxed from the other functions: they have different scopes. When you decompose a large problem into smaller problems, even if those smaller problems (functions) are only used/solved once, you gain several benefits: Readability. Nobody can read a monolithic function and understand what it does completely. You can either keep lying to yourself, or split it up into bite-sized chunks that make sense. Locality of reference. It now becomes impossible to declare and use a variable, then have it stick around, and used again 100 lines later. These functions have different scopes. Testing. While it is only necessary to unit test the public members of a class, it may be desirable to test certain private members as well. If there is a critical section of a long function that might benefit from testing, it is impossible to test it independently without extracting it to a separate function. Modularity. Now that you have private functions, you may find one or more of them that could be extracted into a separate class, whether it is used only here or is reusable. To the previous point, this separate class is likely to be easier to test as well, since it will need a public interface. The idea of splitting big code into smaller pieces that are easier to understand and test is a key point of Uncle Bob's book Clean Code . At the time of writing this answer the book is nine years old, but is just as relevant today as it was back then.
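A tiny Python sketch of such a decomposition, where the public function reads like a table of contents and the helpers are used only once (the order-processing example is hypothetical):

```python
# The public entry point describes the steps; the helpers are "private" by
# convention (leading underscore) and each does one small, testable thing.
def process_order(order):
    _validate(order)
    total = _price(order)
    _log(order, total)
    return total

def _validate(order):
    if not order["items"]:
        raise ValueError("empty order")

def _price(order):
    return sum(item["qty"] * item["unit_price"] for item in order["items"])

def _log(order, total):
    print(f"order {order['id']}: total {total}")

print(process_order({"id": 7, "items": [{"qty": 2, "unit_price": 5}]}))  # logs, then prints 10
```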
{ "source": [ "https://softwareengineering.stackexchange.com/questions/350187", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/274400/" ] }
350,213
I'm planning to design and set up a database to store dictionary entries (usually single words) and their meaning in another language. So, for example, the table Glossary must have entry and definition and each table record has a reference to the id of a record stored in Tag (Each entry must have a tag or category). Since my data has a structure, I thought using a SQL database (like MySQL) is not a bad idea; but people say MongoDB is much better for performance. At the client side, the application must be able to provide a search box with autocomplete which consumes a REST API provided by the backend. Is it safe to go with MySQL in such a scenario? or should I use MongoDB or ElasticSearch of any other solution for this? Hundred thousands of records are supposed to be stored and accessed in this way.
I can't tell you why it's a bad idea. I can tell you a bunch of reasons why a relational database is a good idea though. Remember that not everyone consults a dictionary for a definition. More times than not, a dictionary is used to find the correct spelling. This means you're not just finding a needle in a haystack , you are searching the haystack for needles that are similar to the one described by the user (if I may use an idiom). You won't just be doing primary key look-ups. You'll be doing keyword searches Words can be related, either in meaning or spelling ( read, read , red and reed ) Whenever you see the word "related" think "Relational Database" If you need speed, you need caching on top of your relational database, not a broken relational data model A properly normalized database speeds up primary key look-ups and searches since there is simply fewer bits to sift through. The people who say normalized databases are slower are referring to the 0.1% of cases where this is true. In the other 99.9% of cases they haven't actually worked with a truly normalized database to see the performance first hand, so ignore them. I have worked with a normalized database. Love it. Don't want to go back. And I'm not a database guy. I'm a C#/JavaScript/HTML/Ruby guy. Words have an origin. In fact, many words in the same language can have the same origin, which is another word in a different language. For instance, résumé (the thing we upload to recruiters websites so we can get incessant phone calls and e-mails for the next 7 years) is a French word. A dictionary also defines what kind of word it is (noun, verb, adjective ect). This isn't just a piece of text: "noun" it has meaning as well. Plus with a relational database you can say things like "give me all the nouns for the English language" and since a normalized database will be utilizing foreign keys, and foreign keys have (or should have) indexes, the lookup will be a snap. Think of how words are pronounced. In English especially, lots of words have the same pronunciation (see my example above with read and reed, or read and red). The pronunciation of a word is, itself, another word. A relational database would allow you to use foreign keys to any pronunciations. That information won't be duplicated in a relational database. It gets duplicated like crazy in a no-SQL database. And now let's talk about plural and singular versions of words. :) Think "boat" and "boats". Or the very fact that a word is "singular" or "plural". Oh! And now let's talk about past tense, present tense, future tense and present participle (to be honest, I don't know what the crap "present participle" is. I think it has something to do with words ending in "ing" in English or something). Look up "run" and you should see the other tenses: ran, runs, running In fact, "tense" is another relationship itself. English doesn't do this so much, but gender is another thing that defines a word. Languages like Spanish have suffixes the define whether the subject of the noun is male or female. If you need to fill in the blanks for a sentence, gender is extremely important in many languages. Since you can't always rely on language conventions to determine gender (in Spanish, words ending in "o" are masculine/male, but that's not true for all words), you need an identifying value: Male or Female. This is another relationship that a normalized database handles gracefully even at millions of records. 
With all the twisted rules and relationships between words, and even different languages, it's hard for me to imagine this data store as a "document store" like a no-SQL solution provides. There are so many and such a large variety of relationships between words and their components that a relational database is the only sensible solution.
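As a sketch of what such a normalized schema could look like, here is a runnable Python example using the standard library's sqlite3 (all table and column names are illustrative assumptions, not a prescribed design):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE language (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE word_class (id INTEGER PRIMARY KEY, name TEXT NOT NULL);   -- noun, verb, adjective...
    CREATE TABLE entry (
        id INTEGER PRIMARY KEY,
        headword TEXT NOT NULL,
        language_id INTEGER NOT NULL REFERENCES language(id),
        word_class_id INTEGER REFERENCES word_class(id),
        origin_entry_id INTEGER REFERENCES entry(id)   -- etymology: a word's origin is another word
    );
    CREATE TABLE definition (
        id INTEGER PRIMARY KEY,
        entry_id INTEGER NOT NULL REFERENCES entry(id),
        meaning TEXT NOT NULL
    );
    CREATE INDEX idx_entry_headword ON entry(headword);   -- fast keyword lookups
""")

db.execute("INSERT INTO language VALUES (1, 'English'), (2, 'French')")
db.execute("INSERT INTO word_class VALUES (1, 'noun')")
db.execute("INSERT INTO entry VALUES (1, 'résumé', 2, 1, NULL)")
db.execute("INSERT INTO entry VALUES (2, 'resume', 1, 1, 1)")  # English entry whose origin is the French one

rows = db.execute("""
    SELECT e.headword, l.name
    FROM entry e JOIN language l ON l.id = e.language_id
    WHERE e.headword LIKE 'r%'
""").fetchall()
print(rows)
```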
{ "source": [ "https://softwareengineering.stackexchange.com/questions/350213", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/274429/" ] }
350,230
I'm a Sr. front-end dev, coding in Babel ES6. Part of our app makes an API call, and based on the data model we get back from the API call, certain forms need to be filled out. Those forms are stored in a doubly-linked list (if the back-end says some of the data is invalid, we can quickly get the user back to the one page they messed up and then get them back on target, simply by modifying the list.) Anyway, there's a bunch of functions used to add pages, and I'm wondering if I'm being too clever. This is just a basic overview - the actual algorithm is much more complex, with tons of different pages and page types, but this'll give you an example. This is how, I think, a novice programmer would handle it. export const addPages = (apiData) => { let pagesList = new PagesList(); if(apiData.pages.foo){ pagesList.add('foo', apiData.pages.foo){ } if (apiData.pages.arrayOfBars){ let bars = apiData.pages.arrayOfBars; bars.forEach((bar) => { pagesList.add(bar.name, bar.data); }) } if (apiData.pages.customBazes) { let bazes = apiData.pages.customBazes; bazes.forEach((baz) => { pagesList.add(customBazParser(baz)); }) } return pagesList; } Now, in order to be more testable, I've taken all those if statements and made them separate, stand alone functions, and then I map over them. Now, testable is one thing, but so is readable and I wonder if I'm making things less readable here. // file: '../util/functor.js' export const Identity = (x) => ({ value: x, map: (f) => Identity(f(x)), }) // file 'addPages.js' import { Identity } from '../util/functor'; export const parseFoo = (data) => (list) => { list.add('foo', data); } export const parseBar = (data) => (list) => { data.forEach((bar) => { list.add(bar.name, bar.data) }); return list; } export const parseBaz = (data) => (list) => { data.forEach((baz) => { list.add(customBazParser(baz)); }) return list; } export const addPages = (apiData) => { let pagesList = new PagesList(); let { foo, arrayOfBars: bars, customBazes: bazes } = apiData.pages; let pages = Identity(pagesList); return pages.map(foo ? parseFoo(foo) : x => x) .map(bars ? parseBar(bars) : x => x) .map(bazes ? parseBaz(bazes) : x => x) .value } Here's my concern. To me the bottom is more organized. The code itself is broken into smaller chunks that are testable in isolation. BUT I'm thinking: If I had to read that as a junior developer, unused to such concepts as using Identity functors, currying, or ternary statements, would I be able to even understand what the latter solution is doing? Is it better to do things the "wrong, easier" way sometimes?
In your code, you have made multiple changes: destructuring assignment to access fields in the pages is a good change. extracting the parseFoo() functions etc. is a possibly good change. introducing a functor is … very confusing. One of the most confusing parts here is how you are mixing functional and imperative programming. With your functor you aren't really transforming data, you are using it to pass a mutable list through various functions. That doesn't seem like a very useful abstraction, we already have variables for that. The thing that should possibly have been abstracted – only parsing that item if it exists – is still there in your code explicitly, but now we have to think around the corner. For example, it's somewhat non-obvious that parseFoo(foo) will return a function. JavaScript doesn't have a static type system to notify you whether this is legal, so such code is really error prone without a better name ( makeFooParser(foo) ?). I don't see any benefit in this obfuscation. What I'd expect to see instead: if (foo) parseFoo(pages, foo); if (bars) parseBar(pages, bars); if (bazes) parseBaz(pages, bazes); return pages; But that's not ideal either, because it is not clear from the call site that the items will be added to the pages list. If instead the parsing functions are pure and return a (possibly empty) list that we can explicitly add to the pages, that might be better: pages.addAll(parseFoo(foo)); pages.addAll(parseBar(bars)); pages.addAll(parseBaz(bazes)); return pages; Added benefit: the logic about what to do when the item is empty has now been moved into the individual parsing functions. If this is not appropriate, you can still introduce conditionals. The mutability of the pages list is now pulled together into a single function, instead of spreading it across multiple calls. Avoiding non-local mutations is a far bigger part of functional programming than abstractions with funny names like Monad . So yes, your code was too clever. Please apply your cleverness not to write clever code, but to find clever ways to avoid the need for blatant cleverness. The best designs don't look fancy, but look obvious to anyone who sees them. And good abstractions are there to simplify programming, not to add extra layers that I have to untangle in my mind first (here, figuring out that the functor is equivalent to a variable, and can effectively be elided). Please: if in doubt, keep your code simple and stupid (KISS principle).
{ "source": [ "https://softwareengineering.stackexchange.com/questions/350230", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/274462/" ] }
350,268
I have a question about team management. Right now I'm dealing with a junior developer who's working remotely from a coding factory. The guy is open to criticism and willing to learn, but I have some doubts about how much I should push. Right now, when something is a straightforward and obvious violation of good practices, like a violation of SRP, God objects, or non-meaningful names for methods or variables, I point out what he has to fix and try to explain why it is wrong. My question is: when do I stop? Right now, if there are some minor violations of the coding style, like variable names in the wrong language (the previous team mixed Spanish and English and I'm trying to fix that), or some minor structural issues, I let them go and fix them if I have any spare time or happen to need to modify the problematic class. I feel this is good for team morale, so I'm not constantly pushing back code over what to a novice might seem like minor details, which can be quite frustrating, but I'm also worried that being too 'soft' might prevent the guy from learning how to do some things. How do I walk the line between teaching the guy and not burning him out with constant criticism? For a junior it can be frustrating if you tell him to redo stuff that, to his eyes, is working.
If you think the code should be fixed before merging, make comments. Preferably with "why" so the dev can learn. Keep in mind code is read far more often than written. So things which seem "minor" can actually be really important (variable names for example). However, if you find yourself making comments which seem tedious, perhaps consider: Should your CI process catch these? Do you have a clear "developer guide" to reference (or is everything documented in your head)? Do these comments actually contribute to code quality? A lot of people sacrifice productivity at the altar of process or perfection. Be careful you don't do this. Try to visit your colleague in person if possible. Or use video calls. Building a relationship makes criticism (even code reviews) easier to manage. If you find that a piece of code has too many back/forth on issues, request the review over smaller pieces of code. Incremental changes are more likely to avoid some of the more significant design problems, because they are by definition smaller. But absolutely do not merge stuff and then go back and fix it. This is passive aggressive and if the developer finds you doing this, you will kill their morale.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/350268", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/225184/" ] }
350,403
Git by default does not set the file modification time accordingly when files are synced with the origin. It just ignores the file-time of the pushed files. Doesn't it make sense for a file's modification date to be set to the time of its last commit (remote or local), rather than leaving it the same as the date it was fetched from the server? Git stores the last modification time for each file, based on its commit history. Why doesn't Git touch each file to its last commit time when the files are pulled from the remote repository? I know it's possible to modify the config for Git to achieve something like this, but what I'm asking is why Git doesn't set the file time to the time recorded in the commit history by default. If there is a particular reason why Git doesn't do this by default (other than it being a feature that nobody thought would be useful), I'm interested to know about the decision against implementing it.
It's because it would break every build system like make , maven , gradle , etc. that depends on file modification times to know what needs to be rebuilt. If a git checkout or a git pull pulls in commits that are older than the last executable you built, it would give those files an older timestamp. make therefore won't detect them as an updated dependency, and won't include those in a new build without doing a make clean first. This is super annoying. There is git log for finding the last time a file was modified in version control and ls for finding the last time it was modified on your local disk, and it turns out there's good reasons for keeping those separate.
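As a concrete illustration (the file names and the compiler command below are placeholders), this is the mechanism such tools rely on:

    # Minimal Makefile sketch: 'app' is rebuilt only when a prerequisite has a
    # modification time newer than the existing 'app' binary.
    # (In a real Makefile the recipe line must start with a tab.)
    app: main.c util.c
        cc -o app main.c util.c

If a pull restored main.c with its (older) commit date as the modification time, it could easily be older than the app you built yesterday, so make would report that app is up to date and quietly skip the rebuild. Using checkout time as the modification time keeps that dependency graph honest.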
{ "source": [ "https://softwareengineering.stackexchange.com/questions/350403", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/188871/" ] }
350,442
My understanding of test-driven development so far is that you are only allowed to write production code when you have a failing (red) unit test. Based on this, my question is whether the test-driven approach can also be applied to other forms of tests.
The red green refactor cycle is built on one very sound principle: Only trust tests that you have seen both pass and fail. Yes that works with automated integration tests as well. Also manual tests. Heck, it works on car battery testers. This is how you test the test. Some think of unit tests as covering the smallest thing that can be tested. Some think of anything that's fast to test. TDD is more than just the red green refactor cycle but that part has a very specific set of tests: It's not the tests that you will ideally run once before submitting a collection of changes. It's the tests that you will run every time you make any change. To me, those are your unit tests.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/350442", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/265938/" ] }
350,470
One of the common arguments for using microservices is better scalability. But I wonder whether this argument is really valid. Let's say we had an application consisting of 10 microservices, with 9 of them each having two instances (for redundancy) and one of them having 4 instances to handle the load (scalability). The pro-microservice argument is then that you are able to scale this microservice independently from the other services. However, let's say all 10 microservices were modules in a single monolith and that several (e.g. 22, like the sum from above) instances of this monolith were deployed. The system should be able to handle the load for the one critical part, because there are enough instances to do so. If instances contain program logic that isn't needed, the only downside would be that the binary and the amount of RAM needed would be slightly larger. But then again, the difference shouldn't be too big in most cases - at least not compared to the rest of the stack (think of Spring Boot). The upside of a scaled monolith would be a simpler system without (most of) the fallacies of a distributed system. Am I missing something?
The point of microservices is not to reduce processor load. In fact, because of the overhead of communication and repetition of functions that used to be global utility code, it usually increases processor load somewhat. The point of abolishing a monolith is much more to be able to maintain, deploy and run a complex system of functionality at all . Once your system reaches a certain size, compiling, testing, deploying etc. a monolith becomes just too expensive to be feasible while maintaining a decent uptime. With microservices, you can upgrade, restart or roll back a system piecemeal. Make no mistake, we don't write microservices because it's inherently a better solution to couple things loosely over remote interfaces. In fact, the loss of strong type and consistency checking that a monolith could provide is often a major drawback. We do it because we have to because complexity has gotten the better of us, and are making the best of a suboptimal situation.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/350470", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/63946/" ] }
350,472
I have an acquaintance, a more seasoned developer than me. We were talking about programming practices and I was taken aback by his approach on 'if' statements. He insists on some practices regarding if statements that I find rather strange. Firstly , an if statement should be followed by an else statement, whether there is something to put into it or not. Which leads to code looking like this: if(condition) { doStuff(); return whatever; } else { } Secondly , it's better to test for true values rather than false. That means that it's better to test a 'doorClosed' variable instead of a '!doorOpened' variable His argument is that it makes clearer what the code is doing. Which confuses me quite a bit, as a combination of those two rules can lead him to write this kind of code if he wants to do something when the condition isn't met. if(condition) { } else { doStuff(); return whatever; } My feeling about this is that it's indeed very ugly and/or that the quality improvement, if there is any, is negligible. But as a junior, I am prone to doubt my instinct. So my questions are: Is it a good/bad/"doesn't matter" practice? Is it common practice?
Explicit else block The first rule just pollutes the code and makes it neither more readable, nor less error-prone. The goal of your colleague — I would suppose — is to be explicit, by showing that the developer was fully aware that the condition may evaluate to false . While it is a good thing to be explicit, such explicitness shouldn't come at a cost of three extra lines of code . I don't even mention the fact that an if statement isn't necessarily followed by either an else or nothing: It could be followed by one or more elif s too. The presence of the return statement makes things worse. Even if you actually had code to execute within the else block, it would be more readable to do it like this: if (something) { doStuff(); return whatever; } doOtherThings(); return somethingElse; This makes the code take two lines less, and unindents the else block. Guard clauses are all about that. Notice, however, that your colleague's technique could partially solve a very nasty pattern of stacked conditional blocks with no spaces: if (something) { } if (other) { } else { } In the previous code, the lack of a sane line break after the first if block makes it very easy to misinterpret the code. However, while your colleague's rule would make it more difficult to misread the code, an easier solution would be to simply add a newline. Test for true , not for false The second rule might make some sense, but not in its current form. It is not false that testing for a closed door is more intuitive than testing for a non-opened door . Negations, and especially nested negations, are usually difficult to understand: if (!this.IsMaster || (!this.Ready && !this.CanWrite)) To solve that, instead of adding empty blocks, create additional properties, when relevant, or local variables . The condition above could be made readable rather easily: if (this.IsSlave || (this.Synchronizing && this.ReadOnly))
{ "source": [ "https://softwareengineering.stackexchange.com/questions/350472", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/274794/" ] }
350,719
Thanks to the evolution of .NET, today we have a lot of different frameworks and technologies, and I'm very confused about the differences between them. What is the difference between the following?
- .NET Framework
- ASP.NET
- .NET Core
- ASP.NET Core
- .NET Standard
.NET Framework is a VM, a JIT compiler, an object memory system consisting of a memory allocator and a garbage collector, a loader, a linker, and a runtime system (collectively called the Common Language Runtime (CLR) ) which executes and supports a language called Microsoft Intermediate Language (MSIL) . It is also a class library called the Base Class Library (BCL) containing fundamental data structures (strings, arrays, linked lists, hash dictionaries, …) and abstractions thereof (lists, dictionaries, …) as well as other fundamental types (tasks, functions, abstractions for equality) and algorithms. It also comes with a wider range of libraries called the Framework Class Library (FCL) which has support for developing desktop applications (e.g. WinForms and WPF), data manipulation (LINQ) and lots of other things. It is strictly a Windows-only, PC-only implementation. It is not Free Software or Open Source, although at least part of the source code is available under certain non-free non-open restrictive licenses; also a subset of the FCL is available as Reference Source under the MIT License. It is largely monolithic, based on assemblies. The Framework must be installed at the target. ASP.NET is a web framework built on the .NET Framework, and thus is also strictly Windows-only; it is also deeply tied to Internet Information Server (IIS) . It is also largely monolithic and based on assemblies. The Framework must be installed at the target. .NET Core is a VM, a JIT compiler, an object memory system consisting of a memory allocator and a garbage collector, a loader, a linker, and a runtime system (collectively called the CoreCLR ) which executes and supports a language called Microsoft Intermediate Language (MSIL) . It is also a class library called the Core Library containing fundamental data structures (strings, arrays, linked lists, hash dictionaries, …) and abstractions thereof (lists, dictionaries, …) as well as other fundamental types (tasks, functions, abstractions for equality) and algorithms. It also comes with a wider range of libraries called CoreFX which has support for developing desktop applications (UWP), data manipulation (LINQ) and lots of other things. It is designed to be highly portable, and Microsoft itself develops, releases, maintains and supports fully equal ports (with fully equal support, fully equal functionality and simultaneous releases) for Windows, macOS, and Linux on AMD64, x86, and ARM. It is highly modular, consisting of small NuGet packages that allow a "pay-as-you-go" style of development, paying (in the sense of memory, disk space, and maintenance) only for functionality that you actually use. The framework (or rather the parts of it that are used) is shipped with the app. .NET Core is functionality-wise a subset of the .NET Framework, but not API-wise: even for functionality implemented in .NET Core, the API is not necessarily the same as the one in .NET Framework. This is most obvious in cases where .NET Framework is tightly tied to Windows and .NET Core instead has either an OS-independent alternative or multiple OS-native ones. Another obvious example is reflection, which is closely tied to the underlying runtime, which was completely rewritten for .NET Core. So, a working .NET Core application does not automatically also work on .NET Framework, however since .NET Core is functionality-wise mostly a subset, there should be alternative APIs available that have the same functionality. 
The converse is not true: a working .NET Framework app can not necessarily be ported directly to .NET Core. .NET Core is the future of .NET at Microsoft. It is going to replace all the different slightly incompatible independent implementations of .NET inside Microsoft. ASP.NET Core is a web framework for .NET Core, and thus is highly portable. It is also not tied to any specific web server. It is, in fact, not even tied to a specific .NET implementation, it also works on .NET Framework. It is functionality-wise a subset of ASP.NET, but not necessarily API-compatible. It is highly modular, consisting of small NuGet packages that allow a "pay-as-you-go" style of development, paying (in the sense of memory, disk space, and maintenance) only for functionality that you actually use. The framework (or rather the parts of it that are used) is shipped with the app. .NET Standard is a common API specification for a common subset of APIs across .NET Framework, .NET Core, and Xamarin / Mono. Implementations supporting a specific version of .NET Standard can run all apps targeting that version (or a lower one), apps targeting a specific version of .NET Standard can run on all platforms implementing that version (or a higher one). ISO CLI There is also another (set of) standard(s) covering (parts of) the .NET ecosystem. Early on in the history of .NET, Microsoft submitted a set of standards to ECMA, and then on to ISO which cover a subset of .NET 2.0. These standards are called the *Common Language Infrastructure ( CLI )*, and consist of the Common Intermediate Language (CIL) (a high-level bytecode language), the Virtual Execution System (VES) (an abstract machine for executing CIL), the Common Type System (CTS) and the Common Language Specification (CLS) (specifications covering types and APIs that both CLI implementations and language implementations on top of the CLI must provide, and restrictions they must obey in order for languages to be able to interoperate seamlessly), a language-independent specification of program Metadata (including e.g. debug data, again for language interop), and a Standard Library .
{ "source": [ "https://softwareengineering.stackexchange.com/questions/350719", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/233528/" ] }
350,840
After browsing several answers an Stack Overflow, it is clear that some natively compiled languages have garbage collection . But it is unclear to me how exactly this would work. I understand how garbage collection could work with an interpreted language. The garbage collector would simply run alongside the interpreter and delete unused and unreachable objects from the program's memory. They are both running together. How would this work with compiled languages though? My understanding is that once the compiler has compiled the source code to the target code - specifically native machine code - it is done. Its job is finished. So how could the compiled program be garbage collected as well? Does the compiler work with the CPU in some way while the program is executed to delete "garbage" objects? Or does the compiler include some minimal garbage collector in the compiled program's executable. I believe my latter statement would have more validity than the former due to this excerpt from this answer on Stack Overflow : One such programming language is Eiffel. Most Eiffel compilers generate C code for portability reasons. This C code is used to produce machine code by a standard C compiler. Eiffel implementations provide GC (and sometimes even accurate GC) for this compiled code, and there is no need for VM. In particular, VisualEiffel compiler generated native x86 machine code directly with full GC support . The last statement seems to imply that the compiler includes some program in the final executable which acts as a garbage collector while the program is running. The page on the D language's website about garbage collection - which is natively compiled and has an optional garbage collector - also seems to hint that some background program runs alongside the original executable program to implement garbage collection. D is a systems programming language with support for garbage collection. Usually it is not necessary to free memory explicitly. Just allocate as needed, and the garbage collector will periodically return all unused memory to the pool of available memory. If the method mentioned above is used, how exactly would it work? Does the compiler store a copy of some garbage collection program and pastes it into each executable it generates? Or am I flawed in my thinking? If so, what methods are used for implementing garbage collection for compiled languages and how exactly would they work?
Does the compiler store a copy of some garbage collection program and paste it into each executable it generates? It sounds unelegant and weird, but yes. The compiler has an entire utility library, containing a whole lot more than just garbage collection code, and calls to this library will be inserted into each executable it creates. This is called the runtime library , and you'd be surprised how many different tasks it typically serves.
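To make "runtime library" concrete, here is a hedged sketch, not taken from Eiffel or D, of what compiler-generated allocation amounts to. It is written by hand in C against the Boehm-Demers-Weiser collector (a real library that some compiled languages link in as their GC runtime); a compiler with GC support essentially emits the equivalent of the GC_MALLOC call below wherever the source program allocates:

    /* Hand-written stand-in for compiler-generated code.
       Build against libgc, e.g.: cc demo.c -lgc
       (the header may be <gc/gc.h> depending on how libgc is installed) */
    #include <gc.h>
    #include <stdio.h>

    typedef struct Node { int value; struct Node *next; } Node;

    int main(void) {
        GC_INIT();                              /* start the collector runtime   */
        for (int i = 0; i < 10000000; i++) {
            Node *n = GC_MALLOC(sizeof(Node));  /* what 'new Node' compiles to   */
            n->value = i;                       /* no free(): unreachable nodes  */
        }                                       /* are reclaimed by the library  */
        printf("heap size: %lu\n", (unsigned long) GC_get_heap_size());
        return 0;
    }

Everything else (heap layout, scanning, reclamation) lives in the linked-in library, which is why the compiler's own job really does end at code generation.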
{ "source": [ "https://softwareengineering.stackexchange.com/questions/350840", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/242544/" ] }
350,883
I wanted to know if the way I deal with source files that need to be deleted from version control could be regarded as bad practice. I want to explain it to you based on that example: I recently got very angry because I had to tediously sort out Java classes in a programme that were basically dead code however it was nowhere documented and also not commented in those Java classes. Of course they needed to be deleted but before I delete such redundant stuff I have a - some may say strange - habit: I do not delete such redundant files immediately via SVN->Delete (replace with delete command of your version control system of choice) but instead put comments in those files (I refer both at the head and at the footer) that they are going to be deleted + my name + the date and also - more importantly - WHY THEY ARE DELETED (in my case, because they were dead, confusing code). Then I save and commit them to version control. Next time when I have to commit/check in something in the project to version control, I press SVN->Delete and then they are eventually deleted in Version Control - still of course restorable through revisions though and this is why I adopted that habit. Why doing this instead of deleting them right away? My reason is, that I want to have explicit markers at least in the last revision in which those redundant files existed, why they deserved to be deleted. If I delete them right away, they are deleted but is nowhere documented why they were deleted. I want to avoid a typical scenario like this: "Hmm... why were those files deleted? I did work fine before." (Presses 'revert' -> guy who reverted then is gone forever or not available in the next weeks and the next assignee has to find out tediously like me what those files are about) But don't you note why those files were deleted in the commit messages? Of course I do but a commit message is sometimes not read by colleagues. It is not a typical situation that when you try to understand the (in my case dead) code that you first check the Version control log with all the associated commit messages. Instead of crawling through the log, a colleague can see right away that this file is useless. It saves her/his time and she/he knows that this file was probably was restored for bad (or at least it raises a question.
The problem with adding a comment to a file that it should be deleted, instead of deleting it in source control and putting the explanation there, is the assumption that if developers do not read commit messages that they will surely read comments in source code. From an outsider's perspective, this methodology seems to be rooted in a very conservative view of source control. "What if I delete this unused file and then somebody needs it?" someone might ask. You are using source control. Revert the change, or better yet talk to the person who deleted the file (communicate). "What if I delete the dead file, then somebody starts using it again and they make changes?" someone else might ask. Again, you are using source control. You'll get a merge conflict that a person must resolve. The answer here, as with the last question, is to communicate with your teammates. If you really doubt a file should be removed, communicate before deleting it from source control. Maybe it only recently stopped being used, but an upcoming feature might require it. You don't know that, but one of the other developers might. If it should be removed, remove it. Cut the fat out of the code base. If you made an "oopsie" and you actually need the file, remember that you are using source control so you can recover the file. Vincent Savard, in a comment on the question, said: ... If your colleagues don't read the commit messages and they resurrect a file that was rightfully deleted and it passes code reviewing, there's definitely something wrong in your team, and it's a great opportunity to teach them better. This is sound advice. Code reviews should be catching this kind of thing. Developers need to be consulting commit messages when an unexpected change is made to a file, or a file is removed or renamed. If the commit messages don't tell the story, then developers also need to be writing better commit messages. Being afraid to delete code or delete files is indicative of a deeper, systemic problem with the process: Lack of accurate code reviews Lack of understanding about how source control works Lack of team communication Poor commit messages on the part of developers These are the problems to address, so you don't feel like you are throwing rocks in a glass house when you delete code or files.
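If it helps to make the safety net tangible: recovering a deleted file later is a short, well-trodden exercise (the path below is a placeholder):

    # find the commit that deleted the file (and the explanation in its message)
    git log --diff-filter=D -- path/to/dead_file.java

    # restore the file as it was just before that commit
    git checkout <deleting-commit>^ -- path/to/dead_file.java

The deleting commit found by the first command is also exactly where the "why it was removed" explanation lives, which is the point of writing good commit messages.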
{ "source": [ "https://softwareengineering.stackexchange.com/questions/350883", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/196048/" ] }
351,110
I'm reading "Learning Python" and have come across the following: User-defined exceptions can also signal nonerror conditions. For instance, a search routine can be coded to raise an exception when a match is found instead of returning a status flag for the caller to interpret. In the following, the try/except/else exception handler does the work of an if/else return-value tester: class Found(Exception): pass def searcher(): if ...success...: raise Found() # Raise exceptions instead of returning flags else: return Because Python is dynamically typed and polymorphic to the core, exceptions, rather than sentinel return values, are the generally preferred way to signal such conditions. I've seen this sort of thing discussed multiple times on various forums, and references to Python using StopIteration to end loops, but I can't find much in the official style guides (PEP 8 has one offhand reference to exceptions for flow control) or statements from developers. Is there anything official that states this is best practice for Python? This ( Are exceptions as control flow considered a serious antipattern? If so, Why? ) also has several commenters state that this style is Pythonic. What is this based on? TIA
The general consensus “don't use exceptions!” mostly comes from other languages and even there is sometimes outdated. In C++, throwing an exception is very costly due to “stack unwinding”. Every local variable declaration is like a with statement in Python, and the object in that variable may run destructors. These destructors are executed when an exception is thrown, but also when returning from a function. This “RAII idiom” is an integral language feature and is super important to write robust, correct code – so RAII versus cheap exceptions was a tradeoff that C++ decided towards RAII. In early C++, a lot of code was not written in an exception-safe manner: unless you actually use RAII, it is easy to leak memory and other resources. So throwing exceptions would render that code incorrect. This is no longer reasonable since even the C++ standard library uses exceptions: you can't pretend exceptions don't exist. However, exceptions are still an issue when combining C code with C++. In Java, every exception has an associated stack trace. The stack trace is very valuable when debugging errors, but is wasted effort when the exception is never printed, e.g. because it was only used for control flow. So in those languages exceptions are “too expensive” to be used as control flow. In Python this is less of an issue and exceptions are a lot cheaper. Additionally, the Python language already suffers from some overhead that makes the cost of exceptions unnoticeable compared to other control flow constructs: e.g. checking if a dict entry exists with an explicit membership test if key in the_dict: ... is generally exactly as fast as simply accessing the entry the_dict[key]; ... and checking if you get a KeyError. Some integral language features (e.g. generators) are designed in terms of exceptions. So while there is no technical reason to specifically avoid exceptions in Python, there is still the question whether you should use them instead of return values. The design-level problems with exceptions are: they are not at all obvious. You can't easily look at a function and see which exceptions it may throw, so you don't always know what to catch. The return value tends to be more well-defined. exceptions are non-local control flow which complicates your code. When you throw an exception, you don't know where the control flow will resume. For errors that can't be immediately handled this is probably a good idea, when notifying your caller of a condition this is entirely unnecessary. Python culture is generally slanted in favour of exceptions, but it's easy to go overboard. Imagine a list_contains(the_list, item) function that checks whether the list contains an item equal to that item. If the result is communicated via exceptions that is absolutely annoying, because we have to call it like this: try: list_contains(invited_guests, person_at_door) except Found: print("Oh, hello {}!".format(person_at_door)) except NotFound: print("Who are you?") Returning a bool would be much clearer: if list_contains(invited_guests, person_at_door): print("Oh, hello {}!".format(person_at_door)) else: print("Who are you?") If the function is already supposed to return a value, then returning a special value for special conditions is rather error-prone, because people will forget to check this value (that's probably the cause of 1/3 of the problems in C). An exception is usually more correct. 
A good example is a pos = find_string(haystack, needle) function that searches for the first occurrence of the needle string in the `haystack string, and returns the start position. But what if they haystack-string does not contain the needle-string? The solution by C and mimicked by Python is to return a special value. In C this is a null pointer, in Python this is -1 . This will lead to surprising results when the position is used as a string index without checking, especially as -1 is a valid index in Python. In C, your NULL pointer will at least give you a segfault. In PHP, a special value of a different type is returned: the boolean FALSE instead of an integer. As it turns out this isn't actually any better due to the implicit conversion rules of the language (but note that in Python as well booleans can be used as ints!). Functions that do not return a consistent type are generally considered very confusing. A more robust variant would have been to throw an exception when the string can't be found, which makes sure that during normal control flow it is impossible to accidentally use the special value in place of an ordinary value: try: pos = find_string(haystack, needle) do_something_with(pos) except NotFound: ... Alternatively, always returning a type that can't be used directly but must first be unwrapped can be used, e.g. a result-bool tuple where the boolean indicates whether an exception occurred or if the result is usable. Then: pos, ok = find_string(haystack, needle) if not ok: ... do_something_with(pos) This forces you to handle problems immediately, but it gets annoying very quickly. It also prevents you from chaining function easily. Every function call now needs three lines of code. Golang is a language that thinks this nuisance is worth the safety. So to summarize, exceptions are not entirely without problems and can definitively be overused, especially when they replace a “normal” return value. But when used to signal special conditions (not necessarily just errors), then exceptions can help you to develop APIs that are clean, intuitive, easy to use, and difficult to misuse.
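Python's own string API shows both conventions side by side: str.find returns the -1 sentinel, while str.index raises ValueError for exactly the same situation. A small sketch:

    haystack = "hello world"

    # Sentinel style: easy to forget the check, and -1 is a valid index!
    pos = haystack.find("xyz")      # -> -1
    print(haystack[pos])            # silently prints "d" instead of failing

    # Exception style: the same misuse fails loudly at the call site
    try:
        pos = haystack.index("xyz")
        print(haystack[pos])
    except ValueError:
        print("not found")

The sentinel variant turns the mistake into a silently wrong answer; the exception variant turns the same mistake into an error you cannot miss.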
{ "source": [ "https://softwareengineering.stackexchange.com/questions/351110", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/275667/" ] }
351,126
I've always used JSON files for configuration of my applications. I started using them from when I coded a lot of Java, and now I'm working mainly on server-side and data science Python development and am not sure if JSON is the right way to go any more. I've seen Celery use actual Python files for configuration. Initially I was skeptical about it. But the idea of using simple Python data structures for configuration is starting to grow on me. Some pros: The data structures will be the same as I'm normally coding in. So, I don't need to change frame of mind. My IDE (PyCharm) understands the connection between configuration and code. Ctrl + B makes it possible to jump between configuration and code easily. I don't need to work with IMO unnecessary strict JSON . I'm looking at you double quotes, no trailing commas and no comments. I can write testing configurations in the application I'm working on, then easily port them to a configuration file without having to do any conversion and JSON parsing. It is possible to do very simple scripting in the configuration file if really necessary. (Although this should be very, very limited.) So, my question is: If I switch, how am I shooting myself in the foot? No unskilled end user will be using the configuration files. Any changes to the configuration files are currently committed to Git and are rolled out to our servers as part of continuous deployment. There are no manual configuration changes, unless there is an emergency or it is in development. (I've considered YAML , but something about it irks me. So, for now it is off the table.)
Using a scripting language in place of a config file looks great at first glance: you have the full power of that language available and can simply eval() or import it. In practice, there are a few gotchas: it is a programming language, which needs to be learnt. To edit the config, you need to know this language sufficiently well. Configuration files typically have a simpler format that is more difficult to get wrong. it is a programming language, which means that the config can get difficult to debug. With a normal config file you look at it and see what values are provided for each property. With a script, you potentially need to execute it first to see the values. it is a programming language, which makes it difficult to maintain a clear separation between the configuration and the actual program. Sometimes you do want this kind of extensibility, but at that point you are probably rather looking for a real plugin system. it is a programming language, which means that the config can do anything that the programming language can do. So either you are using a sandbox solution which negates much of the flexibility of the language, or you are placing high trust in the config author. So using a script for configuration is likely OK if the audience of your tool is developers, e.g. Sphinx config or the setup.py in Python projects. Other programs with executable configuration are shells like Bash, and editors like Vim. Using a programming language for configuration is necessary if the config contains many conditional sections, or if it provides callbacks/plugins. Using a script directly instead of eval()-ing some config field tends to be more debuggable (think of the stack traces and line numbers!). Directly using a programming language may also be a good idea if your config is so repetitive that you are writing scripts to autogenerate the config. But perhaps a better data model for the config could remove the need for such explicit configuration? For example, it may be helpful if the config file can contain placeholders that you later expand. Another feature sometimes seen is multiple config files with different precedence that can override each other, though that introduces some problems of its own. In the majority of cases, INI files, Java property files, or YAML documents are much better suited for configuration. For complex data models, XML may also be applicable. As you've noted, JSON has some aspects that make it unsuitable as a human-editable configuration file, although it is a fine data exchange format.
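As a small sketch of how cheap the "boring" option is, here is an INI file read with Python's standard-library configparser (the file name, section and keys are invented for the example):

    # settings.ini
    # [server]
    # host = 127.0.0.1
    # port = 8080
    # debug = false

    import configparser

    config = configparser.ConfigParser()
    config.read("settings.ini")

    host = config["server"]["host"]
    port = config["server"].getint("port")        # typed accessors do the parsing
    debug = config["server"].getboolean("debug")

The values stay inert data: nothing in settings.ini can execute code, and the whole file can be inspected at a glance.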
{ "source": [ "https://softwareengineering.stackexchange.com/questions/351126", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/93760/" ] }
351,185
There's a unix shell command ( udevadm info -q path -n /dev/ttyUSB2 ) that I want to call from a C program. With probably about a week of struggle, I could re-implement it myself, but I don't want to do that. Is it widely accepted good practice for me to just call popen("my_command", "r"); , or will that introduce unacceptable security problems and forwards compatibility issues? It feels wrong to me to do something like this, but I can't put my finger on why it would be bad.
It's not particularly bad, but there are some caveats. how portable will your solution be? Will your chosen binary operate the same everywhere, output the results in the same format etc.? Will it output differently on settings of LANG etc.? how much extra load does this add on your process? Forking a binary results in a lot more load and requires more resources than executing library calls (generally speaking). Is this acceptable in your scenario? Are there security issues? Can someone substitute your chosen binary with another, and perform nefarious deeds thereafter? Do you use user-supplied args for your binary, and could they provide ;rm -rf / (for example) (note that some APIs will allow you to specify args more securely than just providing them on the command line) I'm generally happy executing binaries when I'm in a known environment that I can predict, when the binary output is easy to parse (if required - you may just need an exit code) and I don't need to do it too often. As you've noted, the other issue is how much work is it to replicate what the binary does? Does it use a library you can also leverage off?
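A minimal sketch of the popen approach for the command in the question (error handling kept short; in real code you would also want to inspect the exit status from pclose, and avoid shell interpolation if any part of the command ever came from user input):

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        char path[512] = "";
        FILE *p = popen("udevadm info -q path -n /dev/ttyUSB2", "r");
        if (p == NULL) {
            perror("popen");
            return 1;
        }
        if (fgets(path, sizeof path, p) != NULL) {
            path[strcspn(path, "\n")] = '\0';   /* strip trailing newline */
            printf("device path: %s\n", path);
        }
        int status = pclose(p);                 /* 0 means udevadm exited cleanly */
        return status == 0 ? 0 : 1;
    }

This mirrors the caveats above: the command string is fixed (no user-supplied arguments), and the pclose status is where you notice a missing binary or a failing udevadm.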
{ "source": [ "https://softwareengineering.stackexchange.com/questions/351185", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/275787/" ] }
351,244
I have heard a lot of times when other developers use that phrase to "advertise" some patterns or developing best practices. Most of the time this phrase is used when you are talking about benefits of functional programming. The phrase "Easy to reason about" has been used as it is, without any explanation or code sample. So for me it becomes like the next "buzz"-word, which more "experienced" developers use in their talks. Question: Can you provide some examples of "Not easy to reason about", so it can be compared with "Easy to reason about" examples?
To my mind, the phrase "easy to reason about" refers to code that is easy to "execute in your head". When looking at a piece of code, if it is short, clearly written, with good names and minimal mutation of values, then mentally working through what the code does is a (relatively) easy task. A long piece of code with poor names, variables that constantly change value and convoluted branching will normally require, e.g., a pen and a piece of paper to help keep track of the current state. Such code therefore cannot be easily worked through just in your head, so it isn't easy to reason about.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/351244", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/102569/" ] }
351,360
I've inherited some awful code that I've included a short sample of below. Is there a name for this particular anti-pattern? What are some recommendations for refactoring this? // 0=Need to log in / present username and password // 2=Already logged in // 3=Inactive User found // 4=Valid User found-establish their session // 5=Valid User found with password change needed-establish their session // 6=Invalid User based on app login // 7=Invalid User based on network login // 8=User is from an non-approved remote address // 9=User account is locked // 10=Next failed login, the user account will be locked public int processLogin(HttpServletRequest request, HttpServletResponse response, int pwChangeDays, ServletContext ServContext) { }
The code is bad not only because the magic numbers , but because it coalesces several meanings in the return code, hiding inside of its meaning an error, a warning, a permission to create a session or a combination of the three, which makes it a bad input for decision making. I would suggest the following refactoring: returning an enum with the possible results (as suggested in other answers), but adding to the enum an attribute indicating whether it is a denial, a waiver (I'll let you pass this last time) or if it is OK (PASS): public LoginResult processLogin(HttpServletRequest request, HttpServletResponse response, int pwChangeDays, ServletContext ServContext) { } ==> LoginResult.java <== public enum LoginResult { NOT_LOGGED_IN(Severity.DENIAL), ALREADY_LOGGED_IN(Severity.PASS), INACTIVE_USER(Severity.DENIAL), VALID_USER(Severity.PASS), NEEDS_PASSWORD_CHANGE(Severity.WAIVER), INVALID_APP_USER(Severity.DENIAL), INVALID_NETWORK_USER(Severity.DENIAL), NON_APPROVED_ADDRESS(Severity.DENIAL), ACCOUNT_LOCKED(Severity.DENIAL), ACCOUNT_WILL_BE_LOCKED(Severity.WAIVER); private Severity severity; private LoginResult(Severity severity) { this.severity = severity; } public Severity getSeverity() { return this.severity; } } ==> Severity.java <== public enum Severity { PASS, WAIVER, DENIAL; } ==> Test.java <== public class Test { public static void main(String[] args) { for (LoginResult r: LoginResult.values()){ System.out.println(r + " " +r.getSeverity()); } } } Output for Test.java showing the severity for each LoginResult: NOT_LOGGED_IN : DENIAL ALREADY_LOGGED_IN : PASS INACTIVE_USER : DENIAL VALID_USER : PASS NEEDS_PASSWORD_CHANGE : WAIVER INVALID_APP_USER : DENIAL INVALID_NETWORK_USER : DENIAL NON_APPROVED_ADDRESS : DENIAL ACCOUNT_LOCKED : DENIAL ACCOUNT_WILL_BE_LOCKED : WAIVER Based on both the enum value and its severity, you can decide whether creation of session proceeds or not. EDIT: As a response to @T.Sar's comment, I changed the severity's possible values to PASS,WAIVER and DENIAL instead of (OK,WARNING and ERROR). That way it is clear that a DENIAL (previously ERROR) is not an error per se and shouldn't necessarily translate into throwing an exception. The caller examines the object and decides whether or not to throw an exception, but DENIAL is a valid result status resulting from calling processLogin(...) . PASS: go ahead, create a session if one doesn't already exist WAIVER: go ahead this time, but next time user you may not be allowed to pass DENIAL: sorry, user cannot pass, don't create a session
{ "source": [ "https://softwareengineering.stackexchange.com/questions/351360", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/237692/" ] }
351,408
How can I tell my software has too much abstraction and too many design patterns, or, the other way round, how do I know if it should have more of them? The developers I work with approach these points differently. Some abstract every little function, use design patterns wherever possible and avoid redundancy at any cost. The others, including me, try to be more pragmatic and write code that doesn't fit every design pattern perfectly, but is much faster to understand because less abstraction is applied. I know this is a trade-off. How can I tell when enough abstraction has been put into the project, and how do I know when it needs more? For example, when a generic caching layer is written using Memcache: do we really need Memcache, MemcacheAdapter, MemcacheInterface, AbstractCache, CacheFactory, CacheConnector, ... or is it easier to maintain and still good code when using only half of those classes? Found this on Twitter: ( https://twitter.com/rawkode/status/875318003306565633 )
How many ingredients are necessary for a meal? How many parts do you need to build a vehicle? You know that you have too little abstraction when a little implementation change leads to a cascade of changes all over your code. Proper abstractions would help isolate the part of the code which needs to be changed. You know that you have too much abstraction when a little interface change leads to a cascade of changes all over your code, at different levels. Instead of changing the interface between two classes, you find yourself modifying dozens of classes and interfaces just to add a property or change the type of a method argument. Aside from that, there is really no way to answer the question by giving a number. The number of abstractions won't be the same from project to project, from one language to another, or even from one developer to another.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/351408", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/126971/" ] }
351,419
I'm studying up on clean and as a result am quite dramatically rethinking a great deal of how I design and write software. I've thing I'm still wrestling with however, is for business rules like "on save updates to some item, first load All the list of items I have permission to view/edit etc, confirm that this item is in the list, and that the item category is not currently locked from use, (and other rules etc etc)".. because that is a (complex but not atypical) business rule, and so should be handled in the application domain rather than push business logic into the db/persistence layer. However it seems to me that to efficiently check​ these conditions it is often going to be best handled with a nicely crafted db query, rather than loading all data into the application domain... Without prematurely optimization, what's a recommended approach or some uncle Bob articles dealing with this question? Or would he say "validate in the domain until it becomes a problem"?? I am really struggling to find any good examples / samples for anything other than the most basic of use cases. Update: Hi all, thanks for the replies. I should have been clearer, I've been writing (mostly web app) software for a long time, and have definitely already experienced and agree with all the topics you collectively describe (validate by backend, don't trust client data, generally speaking chase raw efficiency only when required, however acknowledge strengths of the db tools when available, etc etc) and have gone through the developer learning lifecycle of "throw it all together" to "build a giant fat controller with N-tiers applications" code trends, and now really liking and investigating the clean / single responsibility style etc, basically as the result of a few projects recently that evolved into quite clunky and widely-distributed business rules as the projects evolved and further client requirements came to light. In particular, I'm looking at Clean style architecture in the context of building REST apis for client-facing as well as internal-usage functionality, where many of the business rules might be much more complex than basically every example you see on the net (even by the Clean / Hex architecture guys themselves). So I guess I was really asking (and failed to state clearly) about how Clean and a REST api would sit together, where most MVC stuff you see these days has incoming request validators (e.g FluentValidation library in .NET), but where many of my "validation" rules are not so much "is this a string of less than 50 characters" but more "can this user calling this usercase/interactor perform this operation on this collection of data given that some related object is currently locked by Team X until later in the month etc etc"... those kind of deeply involved validations where LOTS of business domain objects and domain rules are applicable. Should I spin those rules out into a specific kind of Validator-object type to accompany each usecase-interactor (inspired by the FluentValidator project but with more business logic and data access involved), should I treat the validation somewhat like a Gateway, should i put those validations IN a gateway (which i think is wrong), etc etc. For reference, I am going off several articles like this , but Mattia doesn't discuss validation much. But I guess the short answer to my question is much like the answer that I have accepted: "It's never easy, and it depends".
Validation of data entry is one of those things where everyone starts out trying to make it pure and clean and (if they're smart about it) eventually gives up, because there are so many competing concerns. The UI layer must do some forms of validation right there on the client page/form in order to provide realtime feedback to the user. Otherwise the user spends a lot of time waiting for feedback while a transaction posts across the network. Because the client often runs on an untrusted machine (e.g. in nearly all web applications), these validation routines must be executed again server side where the code is trusted. Some forms of validation are implicit due to input constraints; for example, a textbox may allow only numeric entry. This means that you might not have a "is it numeric?" validator on the page, but you will still need one on the back end, somewhere, since UI constraints could be bypassed (e.g. by disabling Javascript). The UI layer must do some forms of validation at the service perimeter (e.g. server-side code in a web application) in order to insulate the system against injection attacks or other malicious forms of data entry. Sometimes this validation isn't even in your code base, e.g. ASP.NET request validation . The UI layer must do some forms of validation just to convert user-entered data into a format that the business layer can understand; for example, it must turn the string "6/26/2017" into a DateTime object in the appropriate time zone. The business layer should do most forms of validation because, hey, they belong in the business layer, in theory. Some forms of validation are more efficient at the database layer, especially when referential integrity checks are needed (e.g. to ensure that a state code is in the list of 50 valid states). Some forms of validation must occur in the context of a database transaction due to concurrency concerns, e.g. reserving a unique user name has to be atomic so some other user doesn't grab it while you are processing. Some forms of validation can only be performed by third party services, e.g. when validating that a postal code and a city name go together. Throughout the system, null checks and data conversion checks may occur at multiple layers, to ensure reasonable failure modes in the presence of code flaws. I have seen some developers try to codify all the validation rules in the business layer, and then have the other layers call it to extract the business rules and reconstruct the validation at a different layer. In theory this would be great because you end up with a single source of truth. But I have never, ever seen this approach do anything other than needlessly complicate the solution, and it often ends very badly. So if you're killing yourself trying to figure out where your validation code goes, be advised-- in a practical solution to even a moderately complex problem, validation code will end up going in several places.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/351419", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/238598/" ] }
351,607
Context My team of 8 engineers is currently transitioning to Git (from Subversion) for our next big thing. We have a handful of 'more experienced' engineers that are finding it quite difficult to pick up Git. I get asked the same trivial questions despite having provided user manuals, training activities and whiteboard sessions. We had two Junior consultants who picked everything up in a few days and it really shone a light on the issue. This isn't a pattern that is limited to Git but it has become visible as a result. Question I don't feel particularly favourably to engineers who can't/won't learn - especially staff with the levels of seniority we have here. However, I do want the team to succeed and build a great product. We are using a centralised Git Flow model and I feel like all the new terminology is baffling them. Is there anything I can do to help these employees to learn Git? Sourcetree is the client that is being used by the whole team.
Give them a toy. Git is hard. Especially if you've been doing source control in a different paradigm. I broke the build the first time I tried to work with git. It made me so paranoid that I didn't want to check in until everything was done. I was hiding versions in folders. Then I finally figured out what I needed to get past it: I needed a safe place to play. Once I had that, I was intentionally causing problems so I could learn how to fix them—all in my safe place. I developed a pattern I could use even if interrupted and still get back into a good state. Before long, people were coming to me for help with git. All because I took the time to play with a toy. If you just toss them into the deep end, you'll be lucky if they manage to float.
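If it helps, a sandbox can be as small as this (the names are arbitrary, and your default branch may be 'main' rather than 'master'):

    git init git-playground && cd git-playground
    echo "v1" > notes.txt
    git add notes.txt && git commit -m "first commit"

    # break things on purpose, then practice getting back out
    git checkout -b experiment
    echo "v2" > notes.txt && git commit -am "change on a branch"
    git checkout master
    echo "v3" > notes.txt && git commit -am "conflicting change"
    git merge experiment      # guaranteed conflict to practice resolving
    git merge --abort         # ...or a safe way to back out of it

Nothing in there can hurt a real project, which is the whole point.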
{ "source": [ "https://softwareengineering.stackexchange.com/questions/351607", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/105242/" ] }
351,727
How does git help deal with the scenario below: I have a task broken down into 2 parts: backend task and frontend task. I make a pull request to merge the backend changes and wait for it to be merged (and address feedback). While waiting, I can't really work on the frontend changes because it depends on backend changes and those are not available on the master branch yet. What is the best way to pull in changes to frontend changes branch from the backend changes branch while it is still being reviewed?
Hold on, skip merging For this approach, you do not want to merge your feature_a into feature_b repeatedly. Rebasing has been mentioned in other answers, but only for rebasing things onto master . What you want to do in your case is: Start your feature_b from feature_a , i.e.: git checkout feature_a git checkout -b feature_b Whenever feature_a changes while it is waiting to get merged into master , you rebase feature_b on it: ... commit something onto feature_a ... git checkout feature_b git rebase feature_a Finally, as soon as feature_a has been merged into master , you simply get the new master and rebase feature_b onto it one last time: git checkout master git pull origin master git checkout feature_b git rebase --onto master feature_a feature_b This final rebase will graft all commits that are dangling from the feature_a commit (which is now irrelevant as it has been merged into master ) right onto master . Your feature_b is now a simple, standard branch going right from master . EDIT: inspired by the comments, a little heads up: if you need to make some change which affects both features, then be sure to make it in feature_a (and then rebase as shown). Do not make it in two different commits in both branches, even if it may be tempting; as feature_a is part of the history of feature_b , having the single change in two different commits will be semantically wrong and possibly lead to conflicts or "resurrections" of unwanted code, later.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/351727", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/55793/" ] }
351,735
Is a segfault (array index out of bounds) always the programmer's mistake or could it be misuse from the user?
Unless part of the specification is "In such-and-such a circumstance, invoke undefined behaviour" (C/C++) or "trigger an IndexOutOfBoundsException ", it is always the programmer's fault. The task of a program is to react adequately to all inputs, and that includes faulty, incomplete or even actively subversive input. In general, if the user provides unusable input, a program should give a determinate response, such as an error message or a repeat prompt, and not an implementation-defined reaction by the runtime system; such behaviour is usually not useful for the user and may cause security vulnerabilities.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/351735", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/12893/" ] }
352,112
Note: When I used "complex" in the title, I mean that the expression has many operators and operands. Not that the expression itself is complex. I've recently been working on a simple compiler to x86-64 assembly. I've finished the compiler's main front end - the lexer and parser - and am now able to generate an Abstract Syntax Tree representation of my program. And since my language will be statically typed, I am now doing the next phase: type checking the source code. However, I've come to a problem and have not been able to reasonably solve it myself. Consider the following example: My compiler's parser has read this line of code: int a = 1 + 2 - 3 * 4 - 5 And converted it to the following AST: = / \ a(int) \ - / \ - 5 / \ + * / \ / \ 1 2 3 4 Now it must type check the AST. it starts by first type checking the = operator. It first checks the left hand side of the operator. It sees that the variable a is declared as an integer. So it must now verify that the right hand side expression evaluates to an integer. I understand how this could be done if the expression was just a single value, such as 1 or 'a' . But how would this be done for expressions with multiple values and operands - a complex expression - such as the one above? To correctly determine the value of the expression, it seems like the type checker would actual have to execute the expression itself and record the result. But this obviously seems to defeat the purpose of separating the compilation and execution phases. The only other way I imagine this could be done is to recursively check the leaf of each subexpression in the AST and verify all of the leaf's types match the expected operator type. So starting with the = operator, the type checker would then scan all of the left hand side's AST and verify that the leafs are all integers. It would then repeat this for each operator in the subexpression. I've tried researching the topic in my copy of "The Dragon Book" , but it doesn't seem go in to much detail, and simply reiterates what I already know. What is the usual method used when a compiler is type checking expressions with many operators and operands? Are any of the methods I mentioned above used? If not, what are the methods and how exactly would they work?
What is the usual method used when a compiler is type checking expressions with many operators and operands? Read wikipages on type system and type inference and on Hindley-Milner type system , which uses unification . Read also about denotational semantics and operational semantics . Type checking can be simpler if: all your variables like a are explicitly declared with a type. This is like C or Pascal or C++98, but not like C++11 which has some type inference with auto . all literal values like 1 , 2 , or 'c' have an inherent type: an int literal always has type int , a character literal always has type char , …. functions and operators are not overloaded, e.g. the + operator always has type (int, int) -> int . C has overloading for operators ( + works for signed and unsigned integer types and for doubles) but no overloading of functions. Under these constraints, a bottom-up recursive AST type decoration algorithm could be enough (this only cares about types , not about concrete values, so it is a compile-time approach): For each scope, you keep a table for the types of all visible variables (called the environment). After a declaration int a , you would add the entry a: int to the table. Typing of leaves is the trivial recursion base case: the type of literals like 1 is already known, and the type of variables like a can be looked up in the environment. To type an expression with some operator and operands according to the previously computed types of the (nested sub-expressions) operands, we use recursion on the operands (so we type these sub-expressions first) and follow the typing rules related to the operator. So in your example, 4 * 3 and 1 + 2 are typed int because 4 & 3 and 1 & 2 have been previously typed int and your typing rules say that the sum or product of two int -s is an int , and so on for (4 * 3) - (1 + 2) . Then read Pierce's Types and Programming Languages book. I recommend learning a tiny bit of OCaml and λ-calculus. For more dynamically typed languages (Lisp-like), read also Queinnec's Lisp In Small Pieces . Read also Scott's Programming Languages Pragmatics book. BTW, you can't have language-agnostic typing code, because the type system is an essential part of the language's semantics .
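To make the bottom-up decoration concrete, here is a minimal sketch in Python (purely illustrative; the node classes, the environment dictionary and the type_of function are invented for this answer, not taken from any real compiler):

    # Minimal bottom-up type checker for an expression AST.
    # Types are plain strings ("int"); env maps variable names to types.
    class Num:                       # literal, e.g. 1
        def __init__(self, value): self.value = value

    class Var:                       # variable reference, e.g. a
        def __init__(self, name): self.name = name

    class BinOp:                     # binary operator node, e.g. "+", "-", "*"
        def __init__(self, op, left, right):
            self.op, self.left, self.right = op, left, right

    def type_of(node, env):
        if isinstance(node, Num):
            return "int"                     # literals carry an inherent type
        if isinstance(node, Var):
            return env[node.name]            # look the variable up in the environment
        if isinstance(node, BinOp):
            lt = type_of(node.left, env)     # recurse on the operands first
            rt = type_of(node.right, env)
            if lt == "int" and rt == "int":  # typing rule for arithmetic operators
                return "int"
            raise TypeError(f"{node.op} expects int operands, got {lt} and {rt}")
        raise TypeError("unknown AST node")

    # int a = 1 + 2 - 3 * 4 - 5: type the right-hand side, then compare with a's type
    env = {"a": "int"}
    rhs = BinOp("-", BinOp("-", BinOp("+", Num(1), Num(2)),
                                BinOp("*", Num(3), Num(4))), Num(5))
    assert type_of(rhs, env) == env["a"]

Note that only types flow up the tree; no operand values are ever computed, which is why the checker never has to "execute" the expression.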
{ "source": [ "https://softwareengineering.stackexchange.com/questions/352112", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/242544/" ] }
352,157
What's the difference between "to" and "as" method name prefixes like toList(), asList(), etc... When to use which when designing a method?
A toXYZ() function is expected to do a conversion, and to return a new independent object (though immutability allows for optimization, java.lang.String.toString() just returns the object). As an example, in C++ we have std::bitset::to_ulong() which can easily fail, and a whole plethora of to_string() , all doing a (more or less) complex conversion and allocating memory. An asXYZ() function on the other hand is expected to return a (potentially different) view of the source, doing minimal work. As an example, in C++ we have std::as_const() which just returns a constant reference, and the more involved std::forward_as_tuple which also refers to its arguments by reference.
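In Python terms, the same naming contract looks roughly like this (a generic illustration, not tied to any particular library):

    data = bytearray(b"hello world")

    as_view = memoryview(data)   # "as": a view over the same buffer, minimal work, no copy
    to_copy = bytes(data)        # "to": a conversion producing a new, independent object

    data[0] = ord("H")
    assert as_view[0] == ord("H")   # the view reflects the change
    assert to_copy[0] == ord("h")   # the independent copy does not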
{ "source": [ "https://softwareengineering.stackexchange.com/questions/352157", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/210500/" ] }
352,322
In many years of OO programming I've understood what discriminated unions are, but I never really missed them. I've recently been doing some functional programming in C# and now I find I keep wishing I had them. This is baffling me because on the face of it, the concept of discriminated unions seems quite independent of the functional/OO dichotomy. Is there something inherent in functional programming that makes discriminated unions more useful than they would be in OO, or is it that by forcing myself to analyse the problem in a "better" way, I've simply upped my standards and now demand a better model?
Discriminated unions really shine in conjunction with pattern-matching, where you select different behavior depending on the cases. But this pattern is fundamentally antithetical to pure OO principles. In pure OO, differences in behavior should be defined by the types (objects) themselves and encapsulated. So the equivalent of pattern matching would be to call a single method on the object itself, which is then overridden by the sub-types in question to define different behavior. Inspecting the type of an object from the outside (which is what pattern matching does) is considered an antipattern. The fundamental difference is that data and behavior are separate in functional programming, while data and behavior are encapsulated together in OO. This is the historical reason. A language like C# is developing from a classic OO language to a multi-paradigm language by incorporating more and more functional features.
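A small sketch of the two styles (Python 3.10+ used only for illustration; the Shape types are invented for this example):

    from dataclasses import dataclass

    @dataclass
    class Circle:
        radius: float

    @dataclass
    class Rectangle:
        width: float
        height: float

    # Functional style: a union of cases plus pattern matching;
    # behavior lives outside the data and inspects the case from the outside.
    def area(shape):
        match shape:
            case Circle(radius=r):
                return 3.14159 * r * r
            case Rectangle(width=w, height=h):
                return w * h

    # OO style: each variant encapsulates its own behavior,
    # so callers never look at the concrete type.
    class CircleShape:
        def __init__(self, radius): self.radius = radius
        def area(self): return 3.14159 * self.radius ** 2

    class RectangleShape:
        def __init__(self, width, height): self.width, self.height = width, height
        def area(self): return self.width * self.height

    assert area(Circle(2.0)) == CircleShape(2.0).area()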
{ "source": [ "https://softwareengineering.stackexchange.com/questions/352322", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/187486/" ] }
352,507
I'm going through an introductory programming book and it lists a simple example in pseudocode: Start input myNumber set myAnswer = myNumber * 2 output myAnswer Stop Why can't we omit creating another variable called myAnswer and just put the operation into the output command, like this: Start input myNumber output myNumber * 2 Stop Why is the former correct and latter not?
You could, but the other form is written so you can see what is going on and so you can use myAnswer later in the program. If you use your second one, you cannot reuse myAnswer . So later down in the program you might want: myAnswer + 5 myAnswer + 1 etc. You might have different operations you want to use it for. Consider swapping numbers: Start input myNumber set myAnswerA = myNumber * 2 output myAnswerA set myAnswerB = myNumber * 3 output myAnswerB set temp = myAnswerA set myAnswerA = myAnswerB set myAnswerB = temp output myAnswerA output myAnswerB Stop That would be difficult without variables. Computer books start real basic, and most programming is easy until you see complexity. Most everything is trivial in tutorials, and it is only with complexity that you see where things do or do not make sense.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/352507", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/277569/" ] }
352,516
Before I began developing a RESTful API I used a query similar to this: $query = "INSERT INTO availability (user_id, date, status) " . "VALUES ('".$id."', '".$date."', '".$status."') " . "ON DUPLICATE KEY " . "UPDATE status='".$status."'"; Yes, I know it's subject to SQL injection. Anyway, I'm having trouble deciding if this should be a POST or PUT request since it can insert or update. I got to thinking: maybe it's better to have both POST and PUT methods in the API and then the client determines which one to call. Is this usually how RESTful APIs handle this scenario?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/352516", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/274856/" ] }
352,526
Background: I am new to testing in general, and have been studying it in context of JavaScript, specifically React.js, front-end development (actually new to this as well). For the question, I have these 2 similar cases: Case 1: My code and a library I'm using UI library and aside from its decent design, I want to leverage its form validation utilities. I want to test this validation, but not sure how to do it. Option 1: Just test for the behaviors I expect It would be testing if my field (which is a sub-module of the library itself) receives the library-specific classes for fields with errors as expected, and if these errors prevent from calling the submit handler I passed to the form (also sub-module of the library). Problem A with Option 1 I am very likely to be duplicating tests that are already covered by the library itself. (e.g. I am testing if the field receives the error classes when the input is invalid when the library itself has tested if it does so, given the right configuration.) Problem B with Option 1 It slightly couples the test code to the library, i.e. I have to use library-specific classes and markup to evaluate my code. Okay, this is actually a side-question, is this a bad thing for a test? Is this making the test brittle, or is what they call 'contract' that is actually necessary for unit tests? Option 2: Test which sub-module is used and what configuration is passed Problem A with Option 2 This is much more brittle, I think, It's almost like repeating the implementation (i.e. repeating the type sub-module used, and the configuration passed to it.) Problem B with Option 2 Same as Option 1's Problem A -- coupling test to the library's API, specifically its configuration API. (Again, not sure if this is just test being brittle, or contract being written) Case 2: My code and its sub-component I have my good 'ol to do app . Its to do list component has addTodo() method and it passes to its sub-component to do field . I want to test its feature to add todo item, but not sure on how to do this either. Option 1: Again, just test for the behaviors I expect Test `to do app` that if input and submit with sub-component `to do field`, another `to do item` is added. (This is implemented by passing `addTodo()` method from `to do app` to `to do field` as onSubmit handler) Problem A with Option 1 If I'll do this, is there still a point in unit testing the to do field alone? If I already unit-tested to do field that it calls any onSubmit handler upon submitting, wouldn't this to do app test case indirectly repeating to do field 's test case?? Option 2: Test if correct sub-component is used and what configuration is passed (Similar to Case 1's option 2) Its specs would look like this: Unit test to do app to assert... that to do app 's addTodo() method adds a todo item properly that to do app renders to do field sub-component and that to do app passes addTodo() to as onSubmit handler to to do field then also unit test to do field to assert that... that to do field calls the onSubmit, which is received from parent component, during submit The Question Which of these options are better, especially if I favor BDD over TDD? Also, please correct me if you noticed I misunderstood anything. Thank you.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/352526", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/272980/" ] }
352,578
I'm working at a company on a project for their Sales department. It's my first professional programming job, but I've been coding by myself and learning for years. Part of the project involves taking some data and combining it with input to produce and graph. Then save the data...so on and so forth. So I wrote the code for this in a little under a day. The next day I showed my project supervisor, and he liked it, but "what if we had this", and wanted me to add something to the graph. This was not a huge change to the look or function of the program, but it drastically changed how I needed to be storing data, processing it, etc. Again, it took me about a day to re-structure the database table, and rewrite the code basically from scratch to support this new request. I took it back to him again, and the exact same thing happened. He requested something else which drastically changed how I needed to process the data. So, I had to rewrite it again. Finally he signed off on it, and hopefully, I won't have to rewrite it again. Just be clear, I'm not bashing my manager or anything like that. He's a great guy and the things he was requesting were not out of this world, they just were incompatible with what I had previously done. I'm just wondering if there's anything I can do in the future to avoid complete rewrites. I understand making flexible code and was trying to do so, but I would just like to know of any practices or things I could have done differently to make this easier, so, in the future, I don't spend 3 days on something that should've taken 1.
As I commented, I have a strong feeling that the requirements were not clear the first time or probably you missed some important details. Not everything can be addressed with better code, best practices, design patterns or OOP principles. None of them will prevent you from redoing the whole application if the implementation is based on false assumptions or wrong premises. Don't rush into coding the solution. Before typing down a single LOC, spend some time on clarifying the requirements. The deeper you delve into the requirements, the more what if questions appear. Don't wait for the Manager to surprise you with next what-if . Anticipate things yourself. This little exercise can reduce significantly the surprise factor . Don't be afraid to ask as many times as you need. Sometimes the trees (details) don't let us see the forest (the overall picture). And it's the forest that we need to see first. When requirements are clear, it's easier to make better decisions during the design phase. Finally, remember that the overall picture is a goal. The route to this goal is neither plain nor straightforward. Changes will continue to happen, so be agile.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/352578", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/275152/" ] }
352,596
Recently I started a project which didn't seem too hard to make, the concept was a fairly simple application that had to accept input every now and then (maybe 10x a day), and try to perform some operations on them and collect all results at the end. This application would then get a front-end web portal that customers could use to view the results, not exactly rocket science. For this I initially made smart use of Python's built-in concurrency libraries ( ThreadPoolExecutor ) and use an easy-to-use library for the front-end (I chose Flask as it's easy for beginners and is relatively easy to maintain and test). Once we were halfway the project, the PM stated we had to use third party message queue capabilities instead of threads and had to implement load balancing, what eventually ended up happening was that we eventually started working with Celery, Redis, RabbitMQ, Nginx, uWSGI and a bunch of other large third party services which nobody had any real experience with. In the end this lead to a bunch of spaghetti code, untestable tasks (because of the complexity of third party libraries, patching the code didn't even work) and a bunch of headaches because nobody even knew what the added value of these services were. Before you say "Yes you should use those services", keep in mind nobody knows how to use these or even knows what they do besides introduce race-condition plagued code. What should I do about this? At this point it would simply be too costly to revert back to what we had and the PM is dead-set on using these services, even though the end-product is now worse off than it was in the beginning. Is there even any use in discussing this with him? Do I ask for more time? Or the harsh answer, am I just too stupid for my job?
Once we were halfway the project, the PM stated we had to use third party message queue capabilities instead of threads and had to implement load balancing This isn't an appropriate thing for a PM to "state" unilaterally. Two reasons: Design decisions should be made by a technical resource and only in response to NFRs . So politely ask your PM if there is a new NFR and if you could please have details. If an NFR is being introduced halfway through the project, it should probably be done via a change control . The change control is very important from a governance perspective; it would not only be an input to your requirements, but also is an important input to QA's test cases, operations' deployment and support handbook, and (here is the really important part) the PM's schedule . If the new requirement introduces more work, the development team ought to have an opportunity to communicate new development estimates, and the PM will have to decide whether they can live with the new date, add more resources, or push back on the stakeholder who introduced the NFR. Now if there really is a bona fide NFR, and there is no getting around it, it may also be appropriate to request new or different resources who have familiarity with the technologies that are being introduced, or request a training budget for some of your existing resources. So there is a cost aspect as well. If you speak the PM's language – schedule and cost – I think you will get more traction than speaking about how developers feel about the resulting design. Those things have real impact. A PM ought to know better than to introduce stuff like this on the fly with no governance, no controls, and no consensus. If they just don't get it, you may need to escalate to product management or program management, as (s)he is putting quality and schedule at risk unnecessarily.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/352596", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/277682/" ] }
352,672
I read the first chapters of Clean Code by Robert C. Martin, and it seems to me it's pretty good, but I have a doubt, in one part it is mentioned that it is good (cognitively) that the functions should have as few parameters as possible, it even suggests that 3 or more parameters is too much for a function (which I find very exaggerated and idealistic), so I started to wonder... Both the practices of using global variables and passing many arguments on the functions would be bad programming practices, but the use of global variables can greatly reduce the number of parameters in the functions... So I wanted to hear what you think about it, is it worth using global variables to reduce the number of parameters of the functions or not? In what cases would it be? What I think is that it depends on several factors: Source code size. Number of parameters in average of the functions. Number of functions. Frequency in which the same variables are used. In my opinion if the source code size is relatively small (like less than 600 lines of code), there are many functions, the same variables are passed as parameters and the functions have many parameters, then using global variables would be worth, but I would like to know... Do you share my opinion? What do you think of other cases where the source code is bigger, etc.? P.S . I saw this post , the titles are very similar, but he doesn't ask what I want to know.
I don't share your opinion. In my opinion using global variables is a worse practice than more parameters irrespective of the qualities you described. My reasoning is that more parameters may make a method more difficult to understand, but global variables can cause many problems for the code including poor testability, concurrency bugs, and tight coupling. No matter how many parameters a function has, it won't inherently have the same problems as global variables. ...the same variables are passed as parameters It may be a design smell. If you have the same parameters being passed to most functions in your system, there may be a cross-cutting concern that should be handled by introducing a new component. I don't think passing the same variable to many functions to be sound reasoning to introduce global variables. In one edit of your question you indicated that introducing global variables might improve the readability of code. I disagree. Usage of global variables is hidden in the implementation code whereas function parameters are declared in the signature. Functions should ideally be pure. They should only operate on their parameters and should not have any side-effects. If you have a pure function, you can reason about the function by looking just at one function. If your function is not pure, you must consider the state of other components, and it becomes much more difficult to reason about.
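A tiny illustration of the testability point (hypothetical functions, in Python for brevity):

    DISCOUNT = 10  # global state, in percent

    def price_with_global(price):
        # Hidden dependency: the result changes if anything, anywhere,
        # mutates DISCOUNT, and a test has to patch module state first.
        return price - price * DISCOUNT // 100

    def price_with_param(price, discount):
        # Pure: everything it depends on is in the signature, so the call
        # below holds no matter what other code has run before it.
        return price - price * discount // 100

    assert price_with_param(200, 10) == 180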
{ "source": [ "https://softwareengineering.stackexchange.com/questions/352672", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/158648/" ] }
352,702
The problem: Since long time, I am worried about the exceptions mechanism, because I feel it does not really resolve what it should. CLAIM: There are long debates outside about this topic, and most of them struggle at comparing exceptions vs returning an error code. This is definitively not the topic here. Trying to define an error, I would agree with CppCoreGuidelines, from Bjarne Stroustrup & Herb Sutter An error means that the function cannot achieve its advertised purpose CLAIM: The exception mechanism is a language semantic for handling errors. CLAIM: To me, there is "no excuse" to a function for not achieving a task: Either we wrongly defined pre/post conditions so the function cannot ensure results, or some specific exceptional case is not considered important enough for spending time in developing a solution. Considering that, IMO, the difference between normal code and error code handling is (before implementation) a very subjective line. CLAIM: Using exceptions to indicate when a pre or post condition is not keep is another purpose of the exception mechanism, mainly for debugging purpose. I do not target this usage of exceptions here. In many books, tutorials and other sources, they tend to show error handling as a quite objective science, that is solved with exceptions and you just need to catch them for having a robust software, able to recover from any situation. But my several years as a developer make me to see the problem from a different approach: Programmers tends to simplify their task by throwing exceptions when the specific case seem too rare to be implemented carefully. Typical cases of this are: out of memory issues, disk full issues, corrupted file issues, etc. This might be sufficient, but is not always decided from an architectural level. Programmers tends not reading carefully documentation about exceptions in libraries, and are usually not aware of which and when a function throws. Furthermore, even when they know, they don't really manage them. Programmers tends not catching exceptions early enough, and when they do, it is mostly to log and throw further. (refer to first point). This has two consequences: Errors happening frequently are detected early in development and debugged (which is good). Rare exceptions are not managed and make the system to crash (with a nice log message) at the user home. Some times the error is reported, or not even. Considering that, IMO the main purpose of an error mechanism should be: Make visible in code where some specific case is not managed. Communicate the issue runtime to related code (at least the caller) when this situation happens. Provides recovery mechanisms The main flaw of the exception semantic as an error handling mechanism is IMO: it is easy to see where a throw is in the source code, but absolutely not evident to know if a specific function could throw by looking at the declaration. This bring all the problem that I introduced above. The language do not enforce and check the error code as strictly as it make for other aspects of the language (e.g. strong types of variables) A try for solution In the intention of improving this, I developed a very simple error handling system, which tries to put the error handling at the same level of importance than the normal code. The idea is: Each (relevant) function receive a reference to a success very light object, and may set it to an error status in case. The object is very light until a error with text is saved. 
A function is encouraged to skip its task if the object provided contain already an error. An error must never be override. The full design obviously consider thoroughly each aspect (about 10 pages), also how to apply it to OOP. Example of the Success class: class Success { public: enum SuccessStatus { ok = 0, // All is fine error = 1, // Any error has been reached uninitialized = 2, // Initialization is required finished = 3, // This object already performed its task and is not useful anymore unimplemented = 4, // This feature is not implemented already }; Success(){} Success( const Success& v); virtual ~Success() = default; virtual Success& operator= (const Success& v); // Comparators virtual bool operator==( const Success& s)const { return (this->status==s.status && this->stateStr==s.stateStr);} virtual bool operator!=( const Success& s)const { return (this->status!=s.status || this->stateStr==s.stateStr);} // Retrieve if the status is not "ok" virtual bool operator!() const { return status!=ok;} // Retrieve if the status is "ok" operator bool() const { return status==ok;} // Set a new status virtual Success& set( SuccessStatus status, std::string msg=""); virtual void reset(); virtual std::string toString() const{ return stateStr;} virtual SuccessStatus getStatus() const { return status; } virtual operator SuccessStatus() const { return status; } private: std::string stateStr; SuccessStatus status = Success::ok; }; Usage: double mySqrt( Success& s, double v) { double result = 0.0; if (!s) ; // do nothing else if (v<0.0) s.set(Error, "Square root require non-negative input."); else result = std::sqrt(v); return result; } Success s; mySqrt(s, 144.0); otherStuff(s); saveStuff(s); if (s) /*All is good*/; else cout << s << endl; I used that in many of my (own) code and it force the programmer (me) to think further about possible exceptional cases and how to solve them (good). However, it has a learning curve and don't integrate well with code that do now use it. The question I would like to understand better the implications of using such a paradigm in a project: Is the premise to the problem correct? or Did I missed something relevant? Is the solution a good architectural idea? or the price is too high? EDIT: Comparison between methods: //Exceptions: // Incorrect File f = open("text.txt"); // Could throw but nothing tell it! Will crash save(f); // Correct File f; try { f = open("text.txt"); save(f); } catch( ... ) { // do something } //Error code (mixed): // Incorrect File f = open("text.txt"); //Nothing tell you it may fail! Will crash save(f); // Correct File f = open("text.txt"); if (f) save(f); //Error code (pure); // Incorrect File f; open(f, "text.txt"); //Easy to forget the return value! will crash save(f); //Correct File f; Error er = open(f, "text.txt"); if (!er) save(f); //Success mechanism: Success s; File f; open(s, "text.txt"); save(s, f); //s cannot be avoided, will never crash. if (s) ... //optional. If you created s, you probably don't forget it.
CLAIM: The exception mechanism is a language semantic for handling errors exceptions are a control-flow mechanism. The motivation for this control-flow mechanism, was specifically separating error handling from non-error handling code, in the common case that error handling is very repetitive and bears little relevance to the main part of the logic. CLAIM: To me, there is "no excuse" to a function for not achieving a task: Either we wrongly defined pre/post conditions so the function cannot ensure results, or some specific exceptional case is not considered important enough for spending time in developing a solution Consider: I try to create a file. The storage device is full. Now, this isn't a failure to define my preconditions: you can't use "there must be enough storage" as a precondition in general, because shared storage is subject to race conditions that make this impossible to satisfy. So, should my program somehow free some space and then proceed successfully, otherwise I'm just too lazy to "develop a solution"? This seems frankly nonsensical. The "solution" to managing shared storage is outside the scope of my program , and allowing my program to fail gracefully, and be re-run once the user has either released some space, or added some more storage, is fine . What your success class does is interleave error-handling very explicitly with your program logic. Every single function needs to check, before running, whether some error already occurred which means it shouldn't do anything. Every library function needs to be wrapped in another function, with one more argument (and hopefully perfect forwarding), which does exactly the same thing. Note also that your mySqrt function needs to return a value even if it failed (or a prior function had failed). So, you're either returning a magic value (like NaN ), or injecting an indeterminate value into your program and hoping nothing uses that without checking the success state you've threaded through your execution. For correctness - and performance - it's much better to pass control back out of scope once you can't make any progress. Exceptions and C-style explicit error checking with early return both accomplish this. For comparison, an example of your idea which really does work is the Error monad in Haskell. The advantage over your system is that you write the bulk of your logic normally, and then wrap it in the monad which takes care of halting evaluation when one step fails. This way the only code touching the error-handling system directly is the code that might fail (throw an error) and the code that needs to cope with the failure (catch an exception). I'm not sure that monad style and lazy evaluation translate well to C++ though.
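To illustrate the contrast (a rough Python sketch, not the Success class from the question), compare threading a status object through every call with simply leaving the scope on failure:

    import math

    # Threaded-status style: every function checks the flag and must still
    # return *something* even when it refuses to do any work.
    def sqrt_threaded(status, v):
        if not status["ok"]:
            return 0.0                       # indeterminate filler value
        if v < 0.0:
            status["ok"] = False
            status["msg"] = "square root requires non-negative input"
            return 0.0                       # another filler value
        return math.sqrt(v)

    # Early-return / exception style: once progress is impossible, control
    # leaves the scope and no filler value ever enters the program.
    def sqrt_checked(v):
        if v < 0.0:
            raise ValueError("square root requires non-negative input")
        return math.sqrt(v)

    try:
        print(sqrt_checked(-144.0))
    except ValueError as err:
        print(err)                           # handle the failure where it matters

The second shape is also roughly what an Error/Either monad gives you: the happy path is written normally, and evaluation stops at the first failure.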
{ "source": [ "https://softwareengineering.stackexchange.com/questions/352702", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/101713/" ] }
352,783
I'm designing a new web application which is powered by a REST backend and HTML+JS frontend. There's one POST method on it to change one entity (let's call Config), that has several side effects in the state of many elements of the application. Let's suppose the POST is performed this way: POST /api/config BODY {config: ....} Because of this, I would like to show a preview before those changes are made, for the end user to be able to notice what's going to change. The thing I first thought about is to make a GET endpoint for the preview, sending the body of the new state of the entity. This way: GET /api/preview/items BODY {config: ....} Might show the new state for the items with the new configuration. GET /api/preview/sales BODY {config: ....} Might show the new state for the sales with the new configuration. It seems a good idea to use the GET verb as I'm not altering the state of the application. However, the use of a request body with GET requests seems to be discouraged . Is there any good practice about this? Other choice might be to store the config as a draft with one method and display the results with others, but it would require an additional step and having to manage the drafts in the server: POST /api/preview/config BODY {config: ....} GET /api/preview/items?idPreviewConfig=1
This is too domain-specific to have native support in HTTP. Instead, you may do one of the following: Have a POST /api/config/preview . On the server side, the application will know that it shouldn't modify the actual configuration, but combine the actual one with the one you posted, and return the result indicating what was changed. Later, if the user is satisfied with the result, she will perform a POST /api/config containing the same payload as in the previous request. This will effectively overwrite the configuration. The benefit of this approach is that you're not making any breaking changes to the current API. Clients who don't need the preview feature would still be able to update the entries as they did before. The drawback is that when the body is large, it would need to be sent to the server twice. If this is your case, you may use the next approach. Have a POST /api/config/prepare which remembers what was sent in a temporary record and returns two things: the ID of the temporary record (for instance 12345 ) and the preview of the changes. If the user is satisfied with the result, she will perform a POST /api/config/commit/12345 to definitively store the changes. If not, the temporary record may be kept for some time, and then discarded by a cron job. The benefit is that, here again, you may keep the original POST /api/config intact, and the clients which don't need a preview will not break. The drawbacks are that (1) handling the removal of temporary records can be tricky (what makes you think that one hour is enough? What if ten minutes later, you run out of memory? How will clients handle an HTTP 404 when doing a commit of a record which expired?) and that (2) two-step submission of a record may be more complicated than it needs to be. Move the preview logic on client side.
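For the first option, the server-side step is essentially "merge and diff, but do not persist". A rough sketch (plain Python, hypothetical configuration fields, no framework specifics):

    def preview_config(current, proposed):
        # Combine the stored configuration with the posted one and report
        # what would change, without writing anything.
        merged = {**current, **proposed}
        changes = {
            key: {"from": current.get(key), "to": merged[key]}
            for key in merged
            if current.get(key) != merged[key]
        }
        return {"preview": merged, "changes": changes}

    current = {"items_per_page": 20, "currency": "EUR"}
    proposed = {"items_per_page": 50}
    print(preview_config(current, proposed))
    # {'preview': {'items_per_page': 50, 'currency': 'EUR'},
    #  'changes': {'items_per_page': {'from': 20, 'to': 50}}}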
{ "source": [ "https://softwareengineering.stackexchange.com/questions/352783", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/97167/" ] }
352,896
I see and work with a lot of software, written by a fairly large group of people. LOTS of times, I see integer type declarations as wrong. Two examples I see most often: creating a regular signed integer when there can be no negative numbers. The second is that often the size of the integer is declared as a full 32 bit word when much smaller would do the trick. I wonder if the second has to do with compiler word alignment lining up to the nearest 32 bits but I'm not sure if this is true in most cases. When you create a number, do you usually create it with the size in mind, or just create whatever is the default "int"? edit - Voted to reopen, as I don't think the answers adequately deal with languages that aren't C/C++, and the "duplicates" are all C/C++ base. They fail to address strongly typed languages such as Ada, where there cannot be bugs due to mismatched types...it will either not compile, or if it can't be caught at compile time, will throw an exception. I purposely left out naming C/C++ specifically, because other languages treat different integers much differently, even though most of the answers seem to be based around how C/C++ compilers act.
Do you see the same thing? Yes, the overwhelming majority of declared whole numbers are int . Why? Native ints are the size your processor does math with*. Making them smaller doesn't gain you any performance (in the general case). Making them larger means they maybe (depending on your processor) can't be worked on atomically, leading to potential concurrency bugs. 2 billion and change is big enough to ignore overflow issues for most scenarios. Smaller types mean more work to address them, and lots more work if you guess wrong and you need to refactor to a bigger type. It's a pain to deal with conversion when you've got all kinds of numeric types. Libraries use ints. Clients use ints. Servers use ints. Interoperability becomes more challenging, because serialization often assumes ints - if your contracts are mismatched, suddenly there are subtle bugs that crop up when they serialize an int and you deserialize a uint . In short, there's not a lot to gain, and some non-trivial downsides. And frankly, I'd rather spend my time thinking about the real problems when I'm coding - not what type of number to use. *- these days, most personal computers are 64 bit capable, but mobile devices are dicier.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/352896", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/7959/" ] }
352,911
I had a friend who said: Docker is amazing. You can use it to replicate production and all its quirks on your local machine. Then you can deploy that instance straight through all the staging workflows super- quick . Now this would be true if the developers were writing Ruby, PHP or Go - where there was a direction binary link to the operating system. But when using Java - there is already a virtual layer between the operating system and the language, making consistency of operation regardless of the underlying operating system. Arguably, in this case, the benefits of running Docker for developers locally to replicate the production environment are negated . (Compared to Ruby, PHP or Go). I'm open to discussion on this and am keen to hear a dissenting point of view (with evidence). Are the development benefits of using Docker negated when using Java compared to other languages closer to Unix binaries?
Not at all. Imagine you're running the version 1.8.0 of Java on both your development machine and the server. By the way, you're working simultaneously on two projects, both using Java. One day, a bug is found in JVM, and the servers which run the first project you're working on are migrated to 1.8.1. By the way, the servers running the second project aren't affected by the bug, and are managed by a different team of system administrators, who may not be willing to update to 1.8.1. Now, at least for one of the projects, you're running a different version of Java. This may not bother you too much (until one server migrates to 1.9, while the other one keeps the old version), but this would mean that you're not replicating production environment any longer on your local machine, which makes it possible for tiny bugs to creep in. If you imagine that your file system, your dependencies, your security settings, your local configuration and your version of Linux itself differ from production, you are putting yourself at risk of writing code which will fail in production. Instead of taking this risk, you could be using virtualization or Docker, with minor to no productivity loss.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/352911", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/13382/" ] }
353,073
I want to add error handling to: var firstVariable = 1; var secondVariable = firstVariable; The below won't compile: try { var firstVariable = 1; } catch {} try { var secondVariable = firstVariable; } catch {} Why is it necessary for a try catch block to affect the scope of variables as other code blocks do? Consistency-sake aside, wouldn't it make sense for us to be able to wrap our code with error handling without the need to refactor?
What if your code was: try { MethodThatMightThrow(); var firstVariable = 1; } catch {} try { var secondVariable = firstVariable; } catch {} Now you'd be trying to use an undeclared variable ( firstVariable ) if your method call throws. Note : The above example specifically answers the original question, which states "consistency-sake aside". This demonstrates that there are reasons other than consistency. But as Peter's answer shows, there is also a powerful argument from consistency, which would for sure have been a very important factor in the decision.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/353073", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/53014/" ] }
353,078
I'm in my first two months as a software engineer and just wanted to get advice on if this can be improved upon. I've created a class that represents data from RFID in the form of a message: class RFIDMessage { string _a; string _b; string _c; string _d; string _e; string _f; string _g; string _h; public RFIDMessage(string a, string b, string c, string d, string e, string f, string g, string h) { _a = a; _b = b; _c = c; _d = d; _e = e; _f = f; _g = g; _h = h; } } I've been reading about the Single Responsibility Principle and can't work out if this constructor can be improved upon by providing it a new data type. This message is parsed from the data read from an RFID tag and having eight parameters seems a little too many. The parameters won't change in this instance but there are other message types - this is just the default one. My questions are: 1) Is there a way to reduce the number of parameters that makes the code more maintainable? 2) How can I change or improve the design of this class to accommodate more types of messages?
You are right, 8 parameters (all strings) make the constructor a good candidate for a code review. Consider some of the following points. Essential attributes first Look at the message model and figure out which attributes are necessary to initialize an instance in a consistent state. Reduce the number of arguments to the essential. Add setters or functions for the rest. If all 8 attributes are required and read-only, there's not too much to do. Encapsulation Consider encapsulating correlated parameters. For example, A, B and C might be placed together into a new class. Find out which parameters change together, at the same time, due to the same reasons. It reduces the number of arguments at the cost of one more class (complexity). Use creational patterns Instead of initializing messages directly from any place in the source code, do it from factories or builders . Arrays If none of the above works, try an array of parameters. Given the lack of meaningful parameter names, it's probably the simplest solution. Regarding arrays, I posted a question where I ask about their suitability. I was reluctant to trust in such a solution. However, the answers helped me to be less reluctant, so check it out if you also dislike this solution. It might change your mind about this. Inheritance Eventually, you will realise that messages are good candidates for inheritance. Segmenting messages by attribute generates a little overhead because sooner or later you end up asking for the type if(message.getType() == ...) all over the code.
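As an illustration of the encapsulation idea (the grouped field names below are hypothetical, since the real meaning of the eight RFID fields is not known here):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class TagIdentity:            # parameters that change together, for the same reasons
        tag_id: str
        antenna: str
        reader: str

    @dataclass(frozen=True)
    class ReadContext:
        timestamp: str
        signal_strength: str

    @dataclass(frozen=True)
    class RfidMessage:
        identity: TagIdentity
        context: ReadContext
        payload: str

    message = RfidMessage(
        identity=TagIdentity("E200-1234", "ANT-1", "DOCK-3"),
        context=ReadContext("2017-08-01T10:15:00Z", "-54dBm"),
        payload="raw tag data",
    )

The message constructor now takes three arguments instead of eight, and each group can be validated, documented and reused on its own.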
{ "source": [ "https://softwareengineering.stackexchange.com/questions/353078", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/277243/" ] }
353,086
Following REST principles, I would want to create a GET method for my API that make a search using some criteria and return the results to the client. The problem is that the criteria can have up to 14 parameters, one of them is a list of complex objects, so... I don't even know if it possible to encode/decode these complex objects to/from url parameters. I didn't calculate how long the url could get but I'm sure it will be large enough and maybe reach the url length limit? Also, the search should show the results in "real time", I mean, every time the user changes something from the search form he should be able to see the new results without pressing any "search" button. Could you clarify me these points and what would be your advice to create a restful search method with a lot of parameters? update: now that I have more experience, I've realized that having to do a search function with A LOT of parameters for a web app is actually something that designates bad design and bad requirements, and @Neil answer has the point. So my advice is do what you can within the scope. Nevertheless, the @Laiv answer is still good for less extreme cases, so take it into account too.
Before you read my answer, I would like to say that I agreed with @Neil. We have to pick our battles. We usually want to do our best, but sometimes there's too little room for discussion and we have to make decisions against our will. Anyways, in Neil's answer, I miss one more thing. Documentation . Just to ensure that developers know that POST requests to /search are safe. That said. 1. Give GET a chance Consider the GET option first. Check out this question URL's max length . Evaluate whether your longest query string is longer than 2000 characters. If it isn't, and you don't expect it to be, go with GET . It might seem ugly but it has all the advantages derived from the method's semantics (idempotence, safety and caching). And bookmarking. 1.1 Try encoding the query string For example, in base 64. Even javascript supports base 64 encodings . This is how it works: Build the JSON with all the filters and normalise it. Parse it to a string. Encode it. Send the encoded JSON as a request param ( /search?q=SGVsbG8gV29ybGQh.... ). On the server side, decode q . Deserialize the JSON string. Beforehand, build the longest possible JSON string, encode it and take its length. Evaluate if the encoded string fits in the URL. I have implemented the following snippet on Fiddle.js for you to test. (I hope it still works) 1 Base 64 encoding is deterministic and reversible, so there's no chance for collisions. With encoded queries, we could also save searches in the DB, bookmark the URL too, share links, etc. And, of course, we don't have to escape/unescape the string (something I dislike). 1.2 Try with aliases Reading this blog about how to design REST APIs, I remembered one more alternative. Aliases for common queries . I find these to be interesting for the following reasons: Shorten the query string length. It makes the API cleaner and user-friendly: GET /tickets/?status=closed&closedAt=xxx vs GET /tickets/recently-closed/ Combinable with more aliases or more request parameters: GET /tickets/?status=closed&closedAt=xxx&within=30min vs GET /tickets/recently-closed/?within=30min We can combine aliases with encoded query strings: GET /tickets/?status=closed&closedAt=xxx&within=30min vs GET /tickets/recently-closed/?q=SGVsbG8g... 1: I have used JSON, but we could use other formats as long as we can deserialize them on the server side.
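Independently of the Fiddle snippet mentioned above, the encode/decode round trip of section 1.1 is small; here is a sketch using only the Python standard library (the filter fields are made up for the example):

    import base64, json

    filters = {"status": "closed", "closedAt": "2017-08-01", "tags": ["billing", "vip"]}

    # Client side: normalise, serialise, encode, and send as ?q=...
    # urlsafe_b64encode avoids '+' and '/', which would otherwise need URL escaping.
    encoded = base64.urlsafe_b64encode(
        json.dumps(filters, sort_keys=True).encode("utf-8")
    ).decode("ascii")
    url = "/search?q=" + encoded

    # Server side: decode q and get the same structure back.
    decoded = json.loads(base64.urlsafe_b64decode(encoded).decode("utf-8"))
    assert decoded == filters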
{ "source": [ "https://softwareengineering.stackexchange.com/questions/353086", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/262178/" ] }
354,384
All the examples I've read and seen on training videos have simplistic examples. But what I don't see if how I do the "real" code after I get green. Is this the "Refactor" part? If I have a fairly complex object with a complex method, and I write my test and the bare minimum to make it pass (after it first fails, Red). When do I go back and write the real code? And how much real code do I write before I retest? I'm guessing that last one is more intuition. Edit: Thanks to all who answered. All your answers helped me immensely. There seems to be different ideas on what I was asking or confused about, and maybe there is, but what I was asking was, say I have an application for building a school. In my design, I have an architecture I want to start with, User Stories, so on and so forth. From here, I take those User Stories, and I create a test to test the User Story. The User says, We have people enroll for school and pay registration fees. So, I think of a way to make that fail. In doing so I design a test Class for class X (maybe Student), which will fail. I then create the class "Student." Maybe "School" I do not know. But, in any case, the TD Design is forcing me to think through the story. If I can make a test fail, I know why it fails, but this presupposes I can make it pass. It is about the designing. I liken this to thinking about Recursion. Recursion is not a hard concept. It may be harder to actually keep track of it in your head, but in reality, the hardest part is knowing, when the recursion "breaks," when to stop (my opinion, of course.) So I have to think about what stops the Recursion first. It is only an imperfect analogy, and it assumes that each recursive iteration is a "pass." Again, just an opinion. In implementation, The school is harder to see. Numerical and banking ledgers are "easy" in the sense you can use simple arithmetic. I can see a+b and return 0, etc. In the case of a system of people, I have to think harder on how to implement that. I have the concept of the fail, pass, refactor (mostly because of study and this question.) What I do not know is based upon lack of experience, in my opinion. I do not know how to fail signing up a new student. I do not know how to fail someone typing in a last name and it being saved to a database. I know how to make a+1 for simple math, but with entities like a person, I don't know if I'm only testing to see if I get back a database unique ID or something else when someone enters a name in a database or both or neither. Or, maybe this shows I am still confused.
If I have a fairly complex object with a complex method, and I write my test and the bare minimum to make it pass (after it first fails, Red). When do I go back and write the real code? And how much real code do I write before I retest? I'm guessing that last one is more intuition. You don't "go back" and write "real code". It's all real code. What you do is go back and add another test that forces you to change your code in order to make the new test pass. As for how much code do you write before you retest? None. You write zero code without a failing test that forces you to write more code. Notice the pattern? Let's walk through (another) simple example in hopes that it helps. Assert.Equal("1", FizzBuzz(1)); Easy peazy. public String FizzBuzz(int n) { return 1.ToString(); } Not what you would call real code, right? Let's add a test that forces a change. Assert.Equal("2", FizzBuzz(2)); We could do something silly like if n == 1 , but we'll skip to the sane solution. public String FizzBuzz(int n) { return n.ToString(); } Cool. This will work for all non-FizzBuzz numbers. What's the next input that will force the production code to change? Assert.Equal("Fizz", FizzBuzz(3)); public String FizzBuzz(int n) { if (n == 3) return "Fizz"; return n.ToString(); } And again. Write a test that won't pass yet. Assert.Equal("Fizz", FizzBuzz(6)); public String FizzBuzz(int n) { if (n % 3 == 0) return "Fizz"; return n.ToString(); } And we now have covered all multiples of three (that aren't also multiples of five, we'll note it and come back). We've not written a test for "Buzz" yet, so let's write that. Assert.Equal("Buzz", FizzBuzz(5)); public String FizzBuzz(int n) { if (n % 3 == 0) return "Fizz"; if (n == 5) return "Buzz" return n.ToString(); } And again, we know there's another case we need to handle. Assert.Equal("Buzz", FizzBuzz(10)); public String FizzBuzz(int n) { if (n % 3 == 0) return "Fizz"; if (n % 5 == 0) return "Buzz" return n.ToString(); } And now we can handle all multiples of 5 that aren't also multiples of 3. Up until this point, we've been ignoring the refactoring step, but I see some duplication. Let's clean that up now by introducing a helper function. private bool isDivisibleBy(int divisor, int input) { return (input % divisor == 0); } public String FizzBuzz(int n) { if (isDivisibleBy(3, n)) return "Fizz"; if (isDivisibleBy(5, n)) return "Buzz" return n.ToString(); } Cool. Now we've removed the duplication and created a well named function. What's the next test we can write that will force us to change the code? Well, we've been avoiding the case where the number is divisible by both 3 and 5. Let's write it now. Assert.Equal("FizzBuzz", FizzBuzz(15)); public String FizzBuzz(int n) { if (isDivisibleBy(3, n) && isDivisibleBy(5, n)) return "FizzBuzz"; if (isDivisibleBy(3, n)) return "Fizz"; if (isDivisibleBy(5, n)) return "Buzz" return n.ToString(); } The tests pass, but we have more duplication. We have options, but I'm going to apply "Extract Local Variable" a few times so that we're refactoring instead of rewriting. public String FizzBuzz(int n) { var isDivisibleBy3 = isDivisibleBy(3, n); var isDivisibleBy5 = isDivisibleBy(5, n); if ( isDivisibleBy3 && isDivisibleBy5 ) return "FizzBuzz"; if ( isDivisibleBy3 ) return "Fizz"; if ( isDivisibleBy5 ) return "Buzz" return n.ToString(); } And we've covered every reasonable input, but what about unreasonable input? What happens if we pass 0 or a negative? Write those test cases. 
public String FizzBuzz(int n) { if (n < 1) throw new InvalidArgException("n must be >= 1"); var isDivisibleBy3 = isDivisibleBy(3, n); var isDivisibleBy5 = isDivisibleBy(5, n); if ( isDivisibleBy3 && isDivisibleBy5 ) return "FizzBuzz"; if ( isDivisibleBy3 ) return "Fizz"; if ( isDivisibleBy5 ) return "Buzz" return n.ToString(); } Is this starting to look like "real code" yet? More importantly, at what point did it stop being "unreal code" and transition to being "real"? That's something to ponder on... So, I was able to do this simply by looking for a test that I knew wouldn't pass at each step, but I've had a lot of practice. When I'm at work, things aren't ever this simple and I may not always know what test will force a change. Sometimes I'll write a test and be surprised to see it already passes! I highly recommend that you get in the habit of creating a "Test List" before you get started. This test list should contain all the "interesting" inputs you can think of. You might not use them all and you'll likely add cases as you go, but this list serves as a roadmap. My test list for FizzBuzz would look something like this. Negative Zero One Two Three Four Five Six (non trivial multiple of 3) Nine (3 squared) Ten (non trivial multiple of 5) 15 (multiple of 3 & 5) 30 (non trivial multiple of 3 & 5)
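For what it's worth, once the behaviour above is pinned down, the test list can be kept executable as a parameterized test. Here is a minimal sketch in Java with JUnit 5 (the walkthrough above is C#-flavoured, but the idea carries over); `FizzBuzz.of` is a hypothetical static method standing in for the function built above.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;
import org.junit.jupiter.params.provider.ValueSource;

class FizzBuzzTest {

    // Each row is one entry from the test list above: input, expected output.
    @ParameterizedTest
    @CsvSource({
            "1, 1",
            "2, 2",
            "3, Fizz",
            "4, 4",
            "5, Buzz",
            "6, Fizz",
            "9, Fizz",
            "10, Buzz",
            "15, FizzBuzz",
            "30, FizzBuzz"
    })
    void returnsExpectedWord(int input, String expected) {
        // FizzBuzz.of is assumed to be the method developed in the walkthrough.
        assertEquals(expected, FizzBuzz.of(input));
    }

    // Unreasonable inputs from the bottom of the test list.
    @ParameterizedTest
    @ValueSource(ints = {0, -1})
    void rejectsNonPositiveInput(int input) {
        assertThrows(IllegalArgumentException.class, () -> FizzBuzz.of(input));
    }
}
```

The test list stays visible in the source, and adding a new "interesting" input is a one-line change.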
{ "source": [ "https://softwareengineering.stackexchange.com/questions/354384", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/8802/" ] }
354,425
I've been programming for a few years now and have recently started using ReSharper more. One thing ReSharper always suggests to me is to "invert 'if' statement to reduce nesting". Say I have this code: foreach (var someObject in someObjectList) { if (someObject != null) { someOtherObject = someObject.SomeProperty; } } ReSharper will suggest I do this instead: foreach (var someObject in someObjectList) { if (someObject == null) continue; someOtherObject = someObject.SomeProperty; } It seems that ReSharper will ALWAYS suggest that I invert ifs, no matter how much nesting is going on. The problem is that I actually like nesting in at least SOME situations. To me it seems easier to read and to figure out what is going on in certain places. That is not always the case, but I feel more comfortable nesting sometimes. My question is: other than personal preference or readability, is there a reason to reduce nesting? Is there a performance difference or something else I may not be aware of?
It depends. In your example having the non-inverted if statement is not a problem, because the body of the if is very short. However you usually want to invert the statements when the bodies grow. We are only humans and keeping multiple things at a time in our heads is difficult. When the body of a statement is large, inverting the statement not only reduces nesting but it also tells the developer they can forget about everything above that statement and focus only on the rest. You are very likely to use the inverted statements to get rid of wrong values as soon as possible. Once you know those are out of the game the only thing that's left are values that can actually be processed.
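For illustration, here is a small Java sketch of that trade-off, with made-up types: the nested form is fine while the body stays short, and the guard clause starts paying off as the body grows, because the reader can discard the null case after one line.

```java
import java.math.BigDecimal;
import java.util.List;

class OrderTotals {

    // Nested form: perfectly readable while the body is a single line.
    static BigDecimal totalNested(List<Order> orders) {
        BigDecimal total = BigDecimal.ZERO;
        for (Order order : orders) {
            if (order != null) {
                total = total.add(order.amount());
            }
        }
        return total;
    }

    // Inverted (guard clause) form: worth it once the body grows,
    // because the null case is dealt with and forgotten up front.
    static BigDecimal totalWithGuard(List<Order> orders) {
        BigDecimal total = BigDecimal.ZERO;
        for (Order order : orders) {
            if (order == null) {
                continue;
            }
            BigDecimal amount = order.amount();
            BigDecimal discounted = amount.subtract(amount.multiply(order.discountRate()));
            total = total.add(discounted);
        }
        return total;
    }

    // Hypothetical record, only so the sketch is self-contained.
    record Order(BigDecimal amount, BigDecimal discountRate) {}
}
```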
{ "source": [ "https://softwareengineering.stackexchange.com/questions/354425", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/40499/" ] }
354,885
Which arguments should someone consider when designing a new system and having to decide whether to store the name of a person as one field or separately as first/last name? Pros for a single field: simpler UI; no ambiguity when entering the name of a person who has a very long name (it is often not evident which part is the last name and which the first); less complexity when handling titles (e.g. no need for a separate field for "M.D." or "Dr."). Pros for split fields: personalised communication is possible ("Dear Mr X" or "Dear Julie"); if a consumed web service needs the first/last name separately, it can be provided easily; a better choice for any industry with strict identification requirements (e.g. medical, government, etc.); the safer choice, as you can always go back to the single-field alternative. Do you see any additional argument that is not listed above? Update: the question is what additional arguments (i.e. not listed in the question) can be given for each solution. I think giving opinions instead of possible pros and cons drives the discussion the wrong way. Each developer has to make his/her own decision about this problem; the aim of this question is to assemble a list of non-trivial arguments that can be evaluated when needed.
First name and last name are not useful concepts. Names work differently in different countries. In most Asian countries, the family name is written first, but it is still used for sorting—so you may put it in first name, and sorting will be wrong, or in last name, and display will be. And then there are countries like Iceland where they don't use family names at all, but instead father's given name. So they simply sort by given name. The terms “given name” and “surname” (or “family name”) are better in this regard, but I would still avoid them unless absolutely necessary (i.e. official documents like passports do have them, so then you need them), because they just make things more complicated. Personalised communication is possible "Dear Mr X" or "Dear Julie" Except you have no idea whether to call given person by their first name, or last name or what. And don't get me started on the languages that have vocative—you can't derive vocative from nominative in general. No, it's better if you simply ask the user what to call them. If a consumed web service needs the first / last name separately, it can be provided easily. If . If you depend on another service, you are locked to their bad choices. It is no advantage for your own designs. Better choice for any industry with strict identification requirements (e.g. medical, government, etc.) No, it is a wrong choice for these. Official documents generally use the terms “given name” and “surname” (or “family name”), which are less ambiguous. Safer choice, as you can always go back to the single field alternative Actually, due to the ambiguity with Asian names, it is not so clear you can.
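A hedged sketch of the kind of model this argues for, in Java with invented field names: store the full name exactly as entered, an explicit "what to call them" field, and a separate sort key, adding given/family name fields only where an official document actually demands them.

```java
class PersonName {
    private final String fullName;      // exactly as the person typed it
    private final String preferredName; // what they asked to be called, e.g. "Julie" or "Mr X"
    private final String sortKey;       // for collation; may simply equal the full name

    PersonName(String fullName, String preferredName, String sortKey) {
        this.fullName = fullName;
        this.preferredName = preferredName;
        this.sortKey = (sortKey == null || sortKey.isBlank()) ? fullName : sortKey;
    }

    // Personalised communication without guessing which part is the "first" name.
    String greeting() {
        return "Dear " + preferredName + ",";
    }

    String fullName() { return fullName; }

    String sortKey() { return sortKey; }
}
```

This is only one possible shape; the point is that the salutation and the sort order are asked for or derived explicitly rather than guessed from a first/last split.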
{ "source": [ "https://softwareengineering.stackexchange.com/questions/354885", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/279445/" ] }
354,901
Situation: My app exists as a standard and a branded version; the branded version has some small additional features, but most of it is the same. Right now I have a basic library with popups and visual elements that I can use in every app I make. Then there are several modules (libraries), i.e. features of the app, "things one can do", and two final app projects. One module, in the branded version, can use auxiliary hardware (AH). So my idea was to split that module into a library base-module, create an additional library for the AH, and then make two modules, one with and one without AH. That way any basic change to the module would be incorporated into all apps, but the branded app would choose the module with AH and the standard app would take the one without. There is a chance that a second branded version will come soon, with some modules different and maybe different hardware used by some modules. I would have to split up even more, and I fear chaos. The only other way I see is to create omni-modules that can handle every possible variation and are instantiated with a selection of which features to contain. But it is clear that "brand A" will never use the hardware of "brand B" and vice versa, so the selection will really just be "standard, A or B", yet the module is built to handle them all, which might bloat it. So how would I split up my libraries and/or modules to handle this without much chaos? Edit: My main concern is long-term maintenance. After two years someone wants to add a feature to branded version XYZ, and I have to figure out in which library, and at what level, to implement it so as not to touch the other versions. Having many libraries may make it hard to figure out the relations/dependencies between them, but that can be compensated for by proper documentation. Having it all in one library/module means that I need to follow the variables that enable/disable features to find out where they change things. But how far do I split things up?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/354901", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/147886/" ] }
354,911
Assume we have a function that updates a user's password. Once the 'Update Password' button is clicked, an UpdatePasswordEvent is sent to a topic to which three other services are subscribed: a service that actually updates the user's password, a service that updates the user's password history, and a service that sends out an e-mail informing the user that their password has been changed. Based on what I've understood about eventual consistency, all these services (consumers) will receive the event at the same time and process it separately, which, in a good scenario, will lead to the data being consistent. However, what if a service fails to process the event (e.g. sudden disconnect, database error, etc.)? What is a good pattern/practice for handling these transaction failures? I was thinking of creating a RollbackTopic: if any event fails to be processed, a RollbackEvent is published to a topic where "rollback services" do their job and revert the data.
Based on what I've understood about eventual consistency, all these services (consumers) will receive the event at the same time and process them separately which, in a good scenario, will lead to data being consistent. Yes, but as I commented, we can't undo an email notification so we still need a sort of "sequence". Event-driven data management is not exempt of some sort of orchestration 1 . For instance, the email should not be sent unless the previous transactions finish successfully and the email service gets proof of it. 3 However, what if a service fails to process the event? e.g. sudden disconnect, database error, etc... What is a good pattern/practice to handle these transaction failures? Say Hello! to the fallacies of the distributed computing . They are what make things complicated and, as usual, there're no silver bullets to deal with them. Before starting our particular search of the Lost Ark, we have to consider asking the organization first. Often, the solution is in how the organization deal with these problems in the real world . What do we (the company) do when certain data is missing or incomplete? We'll come to realise that different departments have different ways to handle the situation. These ways guide the final solution. Here some practices that could help. Eventual consistency Instead of ensuring that the system is in a consistent state all the time, we can accept that the system will be at some point in the future. This approach is especially useful for long-living business operations. The way for the system to reach consistency varies from system to system. It might involve automated processes or some kind of human intervention. For instance, the typical trying It again later or the contact with Customer Service . Abort all the operations Put the system back into a consistent state via compensating transactions . However, we have to take into account that, these transactions can fail too, which could lead us to a point where the inconsistency is even harder to get solved. And, again, we can not undo a sent email. For a low number of transactions, this approach is feasible, because the number of compensating transactions is low too. If there were several business transactions involved in the IPC, handling one compensating transaction for each of them would be challenging. If we go for compensating transactions , we'll find circuit breaker design pattern to be very useful. Distributed transactions The idea is to span multiple transactions within a single transaction, through an overall governing process known as Transaction Manager . A common algorithm for handling distributed transactions is Two-phase commit . The main concern is that transactions here rely on locking resources during the transaction lifetime, and as we know, things can go wrong for the Transaction Manager too. If the Transaction Manager gets compromised, we could end up with several locks all across the different bounded contexts, resulting in unexpected behaviours across the whole system. 2 Decomposing operations. Why? If you're decomposing an existing system, and find a collection of concepts that really want to be within a single transaction boundary, perhaps leave them till last. Sam Newman In the line with the above arguments, Sam -in his book Building Microservices - states that, if we really, really can not afford the eventual consistency, we should avoid splitting the operation now. 
If we can not afford to split certain operations into two or more transactions, it might come to say that -probably- these transactions belong to the same bounded context, or -at least- to an emergent and cross-cutting context. For example, in our case, we come to realise that transactions #1 and #2 are tightly related to one another and probably both could belong to the same bounded context Accounts , Users , Register , ... Consider placing both operations within the boundaries of the same transaction. It would make the whole operation easier to handle. Also, weigh the level of criticality of each transaction. Probably, if transaction #2 fails, it should not compromise the whole operation. In case of doubts ask the organization . 1: I'm not talking about ESB's orchestration. I'm talking about making services react to the proper event. It's rather a choreography. 2: You might find interesting Sam Newman's opinions regarding distributed transactions. 3: See Andy's answer regarding this subject.
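To make the ordering point concrete, here is a rough Java sketch with invented service and event names: the irreversible step (the e-mail) only happens after the reversible, state-changing steps succeed, and a compensating action covers the failure in between. A real system would route this through a message broker and an outbox rather than direct calls; this is only an outline of the idea.

```java
class PasswordUpdateProcess {

    private final PasswordService passwordService;
    private final PasswordHistoryService historyService;
    private final Notifications notifications;

    PasswordUpdateProcess(PasswordService passwordService,
                          PasswordHistoryService historyService,
                          Notifications notifications) {
        this.passwordService = passwordService;
        this.historyService = historyService;
        this.notifications = notifications;
    }

    void handle(UpdatePasswordEvent event) {
        passwordService.updatePassword(event.userId(), event.newPasswordHash());
        try {
            historyService.append(event.userId(), event.newPasswordHash());
        } catch (RuntimeException e) {
            // Compensating transaction: put the first service back into a
            // consistent state, then let the caller retry or park the event.
            passwordService.revertPassword(event.userId());
            throw e;
        }
        // The e-mail cannot be undone, so it is only sent once the
        // state-changing steps have succeeded.
        notifications.passwordChanged(event.userId());
    }

    record UpdatePasswordEvent(String userId, String newPasswordHash) {}

    // Hypothetical collaborators, only to make the sketch self-contained.
    interface PasswordService {
        void updatePassword(String userId, String newPasswordHash);
        void revertPassword(String userId);
    }

    interface PasswordHistoryService {
        void append(String userId, String newPasswordHash);
    }

    interface Notifications {
        void passwordChanged(String userId);
    }
}
```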
{ "source": [ "https://softwareengineering.stackexchange.com/questions/354911", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/151231/" ] }
355,103
I know we should have at least 3 different environments while developing a solution: Development : The programmers are free to change and push changes any time in order to quickly test their code and integrate with other changes, without the fear of breaking anything - this is connected to the TEST databases and services; UAT : Should be treated with reverence by the developers, as it should contain a "as good as possible" copy of the production environment regarding hardware, with the difference being that this environment is connected to UAT databases with an editable copy of production data - it's used both by the Q&A team and the users to validate changes that'll go to production Production : The real deal. I've looked into this question on SoftwareEngineering , and this question on ServerFault , and they seem to differ on what's the meaning of the Staging Environment. Also, Wikipedia page about the subject states that: The primary use of a staging environment is to test all installation/configuration/migration scripts and procedures, before they are applied to production environment. This ensures that all major and minor upgrades to the production environment will be completed reliably without errors, in minimum time. For me, Staging equals UAT, where you must test the application and deployment procedures before pushing to the real world. So, we push the package with the changes to UAT the same way we push to production, fully automated and with all the ceremony we should have with the production environment. That being said, what's the proper difference between an UAT environment and a Staging environment ? -- EDIT: Just to be clear, I'm thinking in terms of a Web Application, be it an internet website or an intranet website. No "forms" app or mobile app.
The difference is the data. A UAT environment is set up for "user acceptance" of new functionality. In order to test that functionality, QA or stakeholders may set up user profiles a particular way in order to exercise particular features, or may set up mock products or configurations to check them all out. A staging environment is often set up with a copy of production data, sometimes anonymized. Some corporations regularly "refresh" their staging database from a production snapshot. The primary focus is to ensure that the application will work in production the same way it worked in UAT. Instead of setting up data anew, testers will search the database for profiles and products that match an essential set of test cases. Often the "real" data have quirks in them that give rise to unexpected edge cases that were missed during UAT. Also, any data migration testing would need to take place in the staging environment.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/355103", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/9506/" ] }
355,176
JSON supports the following data structures (Java equivalents): Scalar, Array/List, and Map. A Set is not supported out-of-the-box in JSON. I thought about several ways to represent a set in JSON. [1] As a list. However, a list has its own ordering, so the two lists ["a", "b"] and ["b", "a"] are not equal as lists, but they should be equal as sets. [2] As a map. Use the key-set of the map and ignore the values. But again, using standard comparison, the two are not the same as maps: {"a": "foo", "b": "bar"} vs. {"a": null, "b": null}. [3] As a map with a special value. Take a scalar, say 0 or null, and force it to be the value of every key in the map: {"a": 0, "b": 0}. This way, under standard comparison tools, the objects are equal even if the key ordering is changed. However, this technique pollutes the JSON document with irrelevant data. [4] As an ordered list. Back to the first suggestion, but this time as an ordered (sorted) list. This more or less solves the comparison issue. However, we should also bear in mind the complexity of sorting, and that map notation handles duplicates while a sorted list does not. Example: {"a": 400, "a": 9} is handled as {"a": 9}, but ["g", "g"] would always stay ["g", "g"]. Having said all that, it seems to me that the list notation is clearer, but the map notation is more robust to key duplication, while making it harder to be consistent about the special value (even though null seems like a good choice for that). What do you think? How would you represent a set in JSON? P.S. Note that this question is merely about JSON. I know that other formats, like YAML, are available. Still...
Well, you can't. As you said, you can represent arrays and dictionaries. You have two choices. Represent the set as an array. Advantage: Converting from set to array and back is usually easy. Disadvantage: An array has an implied order, which a set doesn't, so converting identical sets to JSON arrays can create arrays that would be considered different. There is no way to enforce that array elements are unique, so a JSON array might not contain a valid set (obviously you could just ignore the duplicates; that's what is likely to happen anyway). Represent the set as a dictionary, with an arbitrary value per key, for example 0 or null. If you just ignore the values, this is a perfect match. On the other hand, you may have no library support for extracting the keys of a dictionary as a set, or for turning a set into a dictionary. In my programming environment, conversion between set and array is easier (array to set will lose duplicate values, which either shouldn't be there, or would be considered correct), so for that reason I would go with arrays. But that is very much a matter of opinion. BUT: There is a big fat elephant in the room that hasn't been mentioned. The keys in a JSON dictionary can only be strings. If your set isn't a set of strings, then you only have the choice of using an array.
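For illustration, a small Java sketch of both options using Gson (the calls shown are assumed to match a reasonably recent Gson version; any JSON library would do). Sorting before serializing gives a canonical array form, and reading back into a Set collapses any duplicates.

```java
import com.google.gson.Gson;
import com.google.gson.reflect.TypeToken;

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

class JsonSets {
    private static final Gson GSON = new Gson();

    // Option 1: set as a sorted array -> ["a","b"] regardless of insertion order.
    static String toCanonicalArray(Set<String> set) {
        List<String> sorted = new ArrayList<>(new TreeSet<>(set));
        return GSON.toJson(sorted);
    }

    static Set<String> fromArray(String json) {
        List<String> list = GSON.fromJson(json, new TypeToken<List<String>>() {}.getType());
        return new TreeSet<>(list); // duplicates, if any, collapse here
    }

    // Option 2: set as a map with a throwaway value -> {"a":0,"b":0}.
    static String toMap(Set<String> set) {
        Map<String, Integer> map = new LinkedHashMap<>();
        for (String key : set) {
            map.put(key, 0);
        }
        return GSON.toJson(map);
    }
}
```

Note that the map option only works here because the elements are strings, which is exactly the limitation pointed out above.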
{ "source": [ "https://softwareengineering.stackexchange.com/questions/355176", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/16672/" ] }
355,203
Years ago, when I read The Mythical Man-Month, I found a lot of material I already knew from other sources. However, there were also new things in there, despite the book being from 1975. One of them was: The Surgical Team. Mills proposes that each segment of a large job be tackled by a team, but that the team be organized like a surgical team rather than a hog-butchering team. That is, instead of each member cutting away on the problem, one does the cutting and the others give him every support that will enhance his effectiveness and productivity. This is a very interesting pattern for organizing a software development team, but I have never found it described in any other software engineering book, nor even mentioned anywhere else. Why is that? Was the "Surgical Team" unusual even back then? Or has it been tried and found to fail? If so, how did it fail? If not, why don't we see that pattern implemented in today's software projects?
"The Mythical Man-Month" came out the year I started college and was, to use the current vernacular, UUUGE! :-) What you need to understand is the difference in how software was developed THEN vs. NOW. Back In The Day (tm) pretty much all coding was done on paper first, was then keypunched onto (you guessed it) punched cards, then was read in, compiled, linked, executed, results were obtained, and the process repeated. CPU time was an expensive and limited resource and you didn't want to waste it. Ditto and likewise disk space, tape drive time, etc, blah. Wasting perfectly good CPU time on a compile which resulted in (shock and horror!) errors was...well, a waste of perfectly good CPU time. And this was in 1975. At the time that Fred Brooks was developing his ideas, which was the mid-to-late 1960's CPU time was even more expensive, memory/disk/whatever was even MORE limited, etc, etc. The idea behind The Surgical Team was to ensure that the One Super Great Rockstar Developer did not have to waste HIS time on mundane tasks like desk-checking code, keypunching, submitting jobs, waiting around (sometimes for hours) for results. Rockstar Dude Developer Man was to be WRITING CODE. His legion of groupies/clerks/junior developers was supposed to do the mundane stuff. The problem was that within 2 years of Brooks' book being published the basic ideas behind The Surgical Team were breaking down: CRT terminals and disk files began to replace keypunches and card decks. Computer time became less expensive, multiple computers became available, and job turnaround time dropped dramatically. When I got to college (Miami University, Oxford, Ohio, class of '79, thanks for asking) good job turnaround was about an hour. During finals week - four hours, maybe, sometimes six. (We competed for CPU time with a bunch of commercial companies and universities - and the commercial users got first priority). During my senior year, by which point Miami had gotten out of their "shared computer" arrangement, had their own IBM 370/145 installed on campus, and had a nice HP mini I worked on that acted as an RJE station we could turn mainframe jobs around in five minutes or less. It was now worthwhile to bang your code in on the HP, send it from the HP to the mainframe, twiddle your thumbs/smoke a cigarette, and get your output back long before you could finish desk-checking your code. The Surgical Team has as its basic premise the idea that you (or "management", god help us all) can identify The Rockstar Surgical Developer Dude. In fact, I doubt that's possible. There are rockstar developers, everyone knows it - studies have shown differences in productivity between the best and worst developers of as much as 2000% - but identifying that person without having them write code over a long period of time is most likely impossible. The only way to know if someone is a rockstar developer is to have them actually develop code - but if they're NOT the Rockstar Surgical Developer Dude they'll be doing exciting things like desk-checking his code, keypunching it onto cards, and schlepping boxes of punched cards down to the Job Entry department, then standing around waiting for results so they can schlep them back to Mr. Rockstar Surgical Developer Dude instead of learning to code the only way that really works - by writing code, debugging code, and etc. 
Back In The Day (tm) there were no programming contests, there was no Stack Overflow, you didn't have a PC you could go write code on whenever you felt like it, there were no Algorithms For Idiots books - the only way to learn programming was to go to school and major in something where you got to do a bit of programming. But programming per se was not taken seriously, and it was assumed to be something people didn't want to do . In my first college course (SAN151 - Introduction To Systems Analysis, Dr. Tom Schaber - thanks, Tom :-) we were told by the instructor that "...we just had to face the fact that we'd have to spend a couple of years as programmers before we could become systems analysts". "Two years?", I thought. "I ONLY GET TO DO THIS FOR TWO YEARS?!?". I was seriously bummed. Thankfully he was wrong and I've been coding pretty much ever since. :-) The Surgical Team assumes that programmers are a relatively rare resource. It actually took a few more years, but with the advent of PC's in the early 80's programming became something that any geek could get involved in. The price of computers began to fall, the price of development tools began to fall, and it was all hail Turbo Pascal - by today's standards it wasn't much but at the time it was a complete Pascal IDE for about 40 bucks, which was absolutely nuts! Now ANYBODY could get into programming - if you could afford a computer, and when IBM decided to put the PCjr (yep, my first PC was one of IBM's biggest mistakes :-) on sale for about $500 to get rid of those turkeys, cash-strapped geeks everywhere skipped their rent payments for a month ("Yeah, uh, I know, but I, uh...broke my uuvula and had to have surgery and...uh...yeah, next week, no problem, thanks, man...) and sucked 'em up at fire-sale prices. Then spent more than we paid for the computer on add-ons to make it usable. ("Yeah, man, next week, for sure, probably..." :-). What makes me really sad is that even today, if you ask people if they've ever read "The Mythical Man-Month" or understand its principal lesson ("Adding resources to a late project makes it later") they give you a blank stare - and then proceed to make the exact same errors as were made All Those Years Ago during the development of OS/360. Everything old is new again... :-}
{ "source": [ "https://softwareengineering.stackexchange.com/questions/355203", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/69228/" ] }
355,275
I have been tasked with developing a java application that includes retrieving one property from a json response from a web server. Normally when I parse json I go with the standard way of creating a pojo model of the json and then using the google gson library to create the object. However this time I feel that it is overkill to have to create a pojo when I am just retrieving one property from the json, and I want to use a workaround that will just splice and dice the string until I am left with the property that I want. So my question is, would there be any downsides with doing such a workaround instead of going the standard way of using a pojo and gson?
Yes, there is a downside: bugs in your half-baked parser. Sure, JSON is simple, but you can still encounter tricky escaping situations. Just use an existing tried-and-true JSON parsing method and save your brain cells for the harder problems.
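As a hedged example of that route in Java with Gson: even for one property, the library's tree model is enough, and no POJO is required. The field name and JSON shape below are made up, and JsonParser.parseString assumes a recent Gson version (older versions use new JsonParser().parse(...) instead).

```java
import com.google.gson.JsonObject;
import com.google.gson.JsonParser;

class SingleField {

    // Pull one property out of a JSON document without defining a model class.
    static String extractStatus(String json) {
        JsonObject root = JsonParser.parseString(json).getAsJsonObject();
        // Escaping, nesting and quoting are the library's problem, not ours.
        return root.get("status").getAsString();
    }

    public static void main(String[] args) {
        String body = "{\"status\":\"ok\",\"details\":{\"note\":\"contains \\\"quotes\\\" and , commas\"}}";
        System.out.println(extractStatus(body)); // prints: ok
    }
}
```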
{ "source": [ "https://softwareengineering.stackexchange.com/questions/355275", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/258776/" ] }
355,445
I'm working on a software project where we have to build three APIs. One for the home banking channel, one for the agency channel and a third for the mobile channel. The agency API is the most complete one as it has all the functionalities .. then a bit smaller Home API and then mobile API. The architects here made a common layer (cross channel EJB services shared by all APIs). But then the APIs are different. There is no big difference for now between the APIs. The big team started with the agency channel, and we are adapting it now for home channel. We are just enriching objects specifically to our home app. Otherwise, the code is 95% similar between APIs. The APIs is build on top of Spring MVC , and it has (controllers, models & some utilities). Basically the controllers are doing the mapping BO to ChannelObject (it seems to me not the right place to do that), and some extra utilities and serializers. All is duplicate for now. They are saying that the reason for duplication is they want the APIs independent. "If tomorrow we want a different behaviour for home than agency or mobile we won't struggle!!" Is there a case where we should accept duplicate code?
Duplication can be the right thing to do, but not for this reason. Saying "we might want these three places in the code base to behave different even though right now, they're identical" is not a good reason for large-scale duplication. This notion could apply to every system, and it could be used to justify any duplication, which is obviously not reasonable. Duplication should be tolerated only when removing it would be overall more costly now for some other reason (can't think of a good one right now, but be assured there can be one - virtually everything in programming is a trade-off rather than a law). For what you're doing, the right solution could be e.g. extracting the behaviour that is duplicated right now into a Strategy or some other pattern that models behaviour as classes and then use three instances of the same class. That way, when you do want to change the behaviour in one of the three places, you only have to create a new Strategy and instantiate that in one place. That way you only have to add some classes and leave the rest of the code base almost completely untouched.
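A minimal sketch of that Strategy idea in Java, with invented channel and type names: the shared mapping code exists once, and only the genuinely channel-specific behaviour is pluggable, so a future divergence means adding one new strategy rather than forking the API.

```java
interface ChannelEnrichment {
    AccountView enrich(AccountView base);
}

class NoExtraEnrichment implements ChannelEnrichment {
    public AccountView enrich(AccountView base) {
        return base; // agency and mobile currently add nothing
    }
}

class HomeChannelEnrichment implements ChannelEnrichment {
    public AccountView enrich(AccountView base) {
        return base.withMarketingBanner(true); // the one home-specific tweak
    }
}

class AccountController {
    private final ChannelEnrichment enrichment;

    AccountController(ChannelEnrichment enrichment) {
        this.enrichment = enrichment; // each API wires in its own strategy
    }

    AccountView map(BusinessObject bo) {
        AccountView base = new AccountView(bo.id(), bo.label());
        return enrichment.enrich(base); // the mapping logic itself lives in one place
    }
}

// Hypothetical value types, only to make the sketch compile.
record BusinessObject(String id, String label) {}

record AccountView(String id, String label, boolean marketingBanner) {
    AccountView(String id, String label) { this(id, label, false); }
    AccountView withMarketingBanner(boolean b) { return new AccountView(id, label, b); }
}
```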
{ "source": [ "https://softwareengineering.stackexchange.com/questions/355445", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/269928/" ] }
355,580
I have a large calculation to do. While I could utilize all cores, I wondered: is there any reason to leave one core out and not utilize it? (The calculation is CPU-only, no I/O.) Or am I underestimating the OS, in that it will handle things and do proper context switching even if I utilize all cores?
Major operating systems are mature enough to know how to handle processes which use every available core. Other processes may (and often will) be affected, but the computation won't become slower because you used every available core. The choice of the number of cores depends more on your intention of doing something else while the calculation is being performed. If, on a desktop machine, you want to be able to use your web browser or watch a video while the computation is being done, you'll better keep one core free for it. In the same way, if the server is doing two things (such as doing computations and, at the same time, processing and reporting its metrics), keeping a core free for the side task could be a good idea. On the other hand, if your priority is to make the computation as fast as possible, you have to use all the cores.
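For example, in Java the choice boils down to how many worker threads you ask for; availableProcessors() and Executors are standard JDK APIs, and the "minus one" variant is just the "keep the machine responsive" option described above.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class WorkerPool {

    // Throughput is all that matters: use every core.
    static ExecutorService forBatchJob() {
        int cores = Runtime.getRuntime().availableProcessors();
        return Executors.newFixedThreadPool(cores);
    }

    // The machine must stay responsive for something else: leave headroom.
    static ExecutorService forInteractiveMachine() {
        int cores = Runtime.getRuntime().availableProcessors();
        return Executors.newFixedThreadPool(Math.max(1, cores - 1));
    }
}
```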
{ "source": [ "https://softwareengineering.stackexchange.com/questions/355580", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/153865/" ] }
355,686
Backstory: I have been working as part of this team for the past three years and in this time we have had three different Scrum Master who have all run things differently. Because of this change in Scrum Masters and their way of running the show, it has left my team numb to the idea of Scrum because the principles haven't been enforced consistently and one of the Scrum Masters was a person who do not believe in agile development and just kept events and artifacts as a novice to comply with company decisions. Now my team members are annoyed and bored when we do Scrum events and one person in particular is very verbal about this. Present: Two months ago the company appointed me Scrum Master of my team because of my dedication to working agile and its principles. I'm suffering greatly under the atmospheric pressure of my team members unwillingness to do Scrum. As mentioned they are annoyed about the entire process which makes it very difficult for me because they are not engaging in the necessary conversations needed to make Planning, Retrospective, and Daily Scrum effective. To them, Planning is just a waste of time, because we just move overflow into the new Sprint and don't complete the work anyways, so why bother. During Retrospective I can just feel that they want to say "Stop doing Scrum". One person does, but the others are silent and I have to deal with this every time. Daily Scrum is again just a waste of time for them because none of them bothers to talk and plan the day. They just state "I worked on task X yesterday and will work on that again today." And most of the time they just joke around until I get more stern. I have been very large when it comes to how they spent their time during these events. But I'm dying on the inside because I have a passion for this and they don't care anymore. Today the person who's always against me told me to stop saying "They said this is what they committed to for this Sprint" because, in his words, "We never complete a Sprint. We just move in tasks and take in new ones in the next Sprint to fill up a quota. We do KanBan in reality. So stop saying that." I understand why he says this, but he doesn't seem to realize that this is how it is because him and everybody else on the team don't care. They just do work instead of dealing with impediments. They complain about the impediments, but don't do anything about them. And when I try to help they just shrug it off. They used to give a damn, but over the past two years their willingness has declined to more or less rock bottom. How can I make them see that joking around and wasting time during these meetings costs the company a lot of money?
You may have heard a lot of statistics about failed software projects and came to the conclusion that the failure is not of a technical nature. Technological problems can be solved via hundreds of technical solutions, but solving problems in your workplace atmosphere by using Scrum is not going to work. My suggestion here is to completely stop looking at this as a technical issue. It's not about Scrum, it's not about daily standups, sprints, retrospectives or anything else like that. You need to get in touch with your team and find an effective way of working that satisfies them as well as you and your superiors. If they think dailies are a bad idea, you should not tell them to do dailies and try to punch your reasoning into them. Think for yourself what it is that dailies offer to you. Check with your team whether they value those advantages as well. Find out why they do not share your understanding - as in understanding their point of view, not as in convincing them of anything. Then check whether dailies actually help your team, or if you can achieve more with some other mechanism. The funny thing about Scrum masters is that they are servants to their team - you may well serve them best by abolishing Scrum altogether. In summary, stop focusing on Scrum and instead get back to the basics of agility. You may want to start right at the beginning with the agile manifesto : Value individuals and interactions over processes and tools.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/355686", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/-1/" ] }
355,733
Most, if not all, IT people I know believe that it is beneficial to model software with UML or other types of diagrams before coding. (My question is not about UML specifically; it could be any graphical or textual description of the software design.) I am not so sure about that. The main reason is: code doesn't lie. It is checked by the compiler or interpreter. It hopefully has automated tests and needs to pass static code analysis. If a module does not interface correctly with another module, it is usually obvious in code because you get an error message. None of this can be done with diagrams and other documents. Yes, there are tools that check UML, but everything I've seen so far is very limited. Therefore these documents tend to be incomplete, inconsistent or simply false. Even if the diagrams themselves are consistent, you cannot be sure that the code actually implements them. Yes, there are code generators, but they never generate all of the code. I sometimes feel that the obsession with modeling results from the assumption that code inevitably has to be some incomprehensible mess that architects, designers or other well-paid people who get the big picture should not have to deal with, because otherwise it would get way too expensive. Therefore all design decisions should be moved away from code, and code itself should be left to specialists (code monkeys) who are able to write (and maybe read) it but don't have to deal with anything else. This probably made sense when assembler was the only option, but modern languages allow you to code at a very high level of abstraction, so I don't really see the need for modeling any more. What arguments for modeling software systems am I missing? By the way, I do believe that diagrams are a great way to document and communicate certain aspects of software design, but that does not mean we should base software design on them. Clarification: The question has been put on hold as being unclear, so let me add some explanation. I am asking whether it makes sense to use (non-code) documents that model the software as the primary source of truth about the software design. I do not have in mind the case where a significant portion of the code is automatically generated from these documents; if that were the case, I would consider the documents themselves to be source code and not a model. I listed some disadvantages of this approach that make me wonder why so many people (in my experience) consider it the preferable way of doing software design.
The benefit of modeling software systems vs. all in code is: I can fit the model on a whiteboard. I'm a big believer in the magic of communicating on one sheet of paper. If I tried to put code on the whiteboard, when teaching our system to new coders, there simply isn't any code at the needed level of abstraction that fits on a whiteboard. I know the obsession with modeling that you're referring to. People doing things because that's how they've been done before, without thinking about why they're doing it. I've come to call it formalism. I prefer to work informally because it's harder to hide silliness behind tradition. That doesn't mean I won't whip out a UML sketch now and then. But I'll never be the guy demanding you turn in a UML document before you can code. I might require that you take 5 minutes and find SOME way to explain what you're doing because I can't stand the existence of code that only one person understands. Fowler identified different ways people use UML that he called UML modes . The dangerous thing with all of them is that they can be used to hide from doing useful work. If you're doing it to code using the mouse, well I've seen many try. Haven't seen anyone make that really work. If you're doing it to communicate you'd better make sure others understand you. If you're doing it to design you damn well better be finding and fixing problems as you work. If everything is going smoothly and most of your time is spent making the arrows look nice then knock it off and get back to work. Most importantly, don't produce diagrams that you expect to be valid more than a day. If you somehow can, you've failed. Because software is meant to be soft. Do not spend weeks getting the diagrams just right. Just tell me what's going on. If you have to, use a napkin. That said, I prefer coders who know their UML and their design patterns. They're easier to communicate with. So long as they know that producing diagrams is not a full time job.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/355733", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/217956/" ] }
355,823
We're building a web application for a company whose administration has existed only in Excel sheets so far. We're almost done by now, but recently I was assigned the task of importing all their data from those sheets into our new system. The system is built in Java, but as this import is a one-time thing, I decided to write the scripts in Python instead and import the data directly with SQL queries. Here comes the problem. The new data model contains some new attributes that aren't included in their existing data. In most cases this isn't a problem: I just put a null where I can't find the information. But then I ran into a few attributes which are booleans and cannot be NULL by default. First I tried to simply allow null for those fields in our database, but my senior dev told me not to do it, as it would cause issues in our system in the future. Now I'm not quite sure what to do. The obvious solution is to default every unknown boolean value to false, but I think that is wrong too, because I actually don't know whether it is false. Example: Let's say you have an entity Car which has a hasRadio attribute. Now you need to import data into this data model, but the data only has the columns "Model" and "Color", nothing about having or not having a radio. What do you put in the "hasRadio" column if it cannot be null by design? What is the best approach in this situation? Should we just tell the company to manually fill in the missing data? Or default it to false?
This is mainly a requirements analysis problem, and it has nothing to do with the fact the data in stake is "boolean". If you have to initialize tables in a database, or in any other kind of data storage, and you have incomplete input for some columns, you first need to find out what the users of the system or your customer think would be the right default value for those columns, and you need to find this out for every single attribute , there is no generally correct answer. This will typically lead to one of the following cases: there is a good default value for the specific column, users don't mind if the value is initially the same for all records, they can set the correct values easily afterwards when needed there is a rule how to determine the ideal default value from other information, so you can put this rule into code the users or your customer will extend the input data and provide the missing values (maybe manually), before it gets imported into the database there is no good default value for the specific column and/or any record, the data should be imported either, but the users want to know for which of the records the particular value is already initialized and for which not. So they can enter the value afterwards , and track for which records the value is already correctly set and for which not. The last case requires something like NULL to represent the uninitialized or unknown state, even for a boolean value, if your senior likes it or not. If there is some obscure technical reason which forbids the use of a NULL value for a specific column, you need to simulate the "unknown" state in a different way, either by introducing an additional boolean column (like hasRadioIsUnknown ), or by using a 3-valued enumeration instead of a boolean (like HasNoRadio=0 , HasRadio=1 , Unknown=2 ). But speak to your senior again, after you made a thorough requirements analysis, to make sure such a workaround is really necessary.
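A hedged Java sketch of the last two options, with illustrative names: a nullable Boolean at the import boundary, converted into an explicit three-valued type inside the system so the "unknown" state cannot be overlooked.

```java
enum RadioStatus {
    HAS_RADIO,
    NO_RADIO,
    UNKNOWN; // imported rows without the information land here

    // The import boundary may deliver a nullable Boolean; translate it once.
    static RadioStatus fromImport(Boolean value) {
        if (value == null) {
            return UNKNOWN;
        }
        return value ? HAS_RADIO : NO_RADIO;
    }
}

class Car {
    private final String model;
    private final String color;
    private final RadioStatus radio;

    Car(String model, String color, RadioStatus radio) {
        this.model = model;
        this.color = color;
        this.radio = radio;
    }

    // Lets users track which records still need the value filled in.
    boolean needsFollowUp() {
        return radio == RadioStatus.UNKNOWN;
    }
}
```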
{ "source": [ "https://softwareengineering.stackexchange.com/questions/355823", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/280828/" ] }
355,831
Say I have a class like this: public abstract class Product { public Guid id {get; set;} public string description {get; set;} } This class is populated from the database. I am writing a unit test to decide if two products are the same. How do I establish if two entities are equal: 1) Guid only - this is the primary key from the database so is unique. Not even sure that this member should be in my class 2) Description only - this is always unique 3) GUID and Description
{ "source": [ "https://softwareengineering.stackexchange.com/questions/355831", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/65549/" ] }
356,070
To some of my tables I want to add a "second_primary_key", which will be a UUID or some random long key. I need it because for some tables I don't want to expose integers to my web application. That is, on a page "/invoices" I have a list of invoices and a link to "/invoices/:id", where :id is an integer. I don't want a user to know how many invoices there are in my system, so instead of "/invoices/123" I want to use its "second_primary_key", making the URL "/invoices/N_8Zk241vNa". The same goes for other tables where I want to hide the real id. I wonder: is this a common practice? What's the best way to implement it? And what is this technique called, so that I can search for it?
Having an "alternative primary key" is a well know concept in relational database modeling, it is called "alternate key", or sometimes also "secondary key". The set of "potential primary keys" is called "candidate keys". See https://beginnersbook.com/2015/04/alternate-key-in-dbms/ How you implement this is completely up to you, especially if you want to hide the total number of records. There is no "best way", you should check your requirements like allowed or useful character set, maximum length, if you want the IDs to be case-sensitive or not, if you want them to be readable on a printed invoice, if someone must be able to respell them on the phone without errors, and so on.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/356070", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/281184/" ] }
356,347
Sounds pretty basic, I know, but I recently had a colleague tell me that a method called startHttpServer is too complicated to understand because it only starts the server if it's not already running. I find I get into trouble when I respond with, "Seriously? I've been doing this for decades - it's a common pattern in programming." More often than I care to admit he comes back with some documented evidence that shows that the entire programming community is behind his point of view and I end up feeling sheepish. Question : Is there a documented design pattern behind the concept of a method that is a no-op if the action required is already in effect? Or, if not a pattern, does it have a name either? And if not, is there any reason to think it's too complicated to consider writing a method in this manner?
As NickWilliams has already said : the concept the OP describes is called idempotent (noun Idempotency ). It is indeed common practice, especially in high-level APIs. BUT: Rename the function. Instead of startHttpServer call it makeSureHttpServerIsRunning or ensureHttpServerIsRunning . When a function is called startHttpServer , readers expect it to start a HTTP server; when called ten times in a row, I'll have ten servers running. Your function doesn't do that most of the time. Additionally, the name with "start" suggests that if I want only one server running I'll have to keep track of whether the function has already been called or not. When a function is called makeSureHttpServerIsRunning , I assume it will do the necessary things to make sure that a HTTP server is running, most likely by checking if it is already running, and starting it otherwise. I also assume that the function makes sure the server is actually running (starting a server might involve some time where it is not quite running yet).
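A minimal Java sketch of the idempotent version under its clearer name; the server type and its methods are placeholders. Calling it any number of times leaves exactly one server running.

```java
class HttpServerManager {
    private HttpServer server; // placeholder type for whatever server is actually used

    synchronized void ensureHttpServerIsRunning() {
        if (server != null && server.isRunning()) {
            return; // idempotent: nothing to do, nothing changes
        }
        server = new HttpServer();
        server.start();
    }

    // Hypothetical server class, only so the sketch is self-contained.
    static class HttpServer {
        private volatile boolean running;
        void start() { running = true; }
        boolean isRunning() { return running; }
    }
}
```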
{ "source": [ "https://softwareengineering.stackexchange.com/questions/356347", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/281606/" ] }
356,415
Kotlin is known primarily as a drop-in replacement for Java, but it gets rid of a well-known Java construct: the static keyword. Instead, that class-level functionality is offered mainly by companion objects. What is wrong with static methods and fields that companion objects provide a better alternative to? I'm confused about the rationale, and couldn't find any explanation in the documentation.
Scala also replaces class level declarations with a 'Singleton' object. The main advantage of this is that everything is an object. In Java, static members are treated very differently than object members. This means that you can't do things like implementing an interface or putting your class 'instance' into a map or pass it as a parameter to a method that takes Object. Companion objects allow for these things. That's the advantage.
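To illustrate the limitation in Java terms, with invented names: a static factory method cannot satisfy an interface or be put into a map, so Java code ends up hand-rolling the singleton "companion" that Kotlin and Scala provide for free.

```java
import java.util.Map;

interface Parser<T> {
    T parse(String raw);
}

class Invoice {
    final String number;

    private Invoice(String number) { this.number = number; }

    // Static factory: cannot be passed around where a Parser<Invoice> is expected.
    static Invoice parseStatic(String raw) { return new Invoice(raw.trim()); }

    // Hand-rolled equivalent of a companion object: a singleton that implements
    // the interface and can be stored in maps or passed as an argument.
    static final Parser<Invoice> PARSER = raw -> new Invoice(raw.trim());
}

class Demo {
    public static void main(String[] args) {
        Map<String, Parser<?>> parsers = Map.of("invoice", Invoice.PARSER);
        Invoice invoice = Invoice.PARSER.parse(" 2017-001 ");
        System.out.println(invoice.number + " via " + parsers.keySet());
    }
}
```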
{ "source": [ "https://softwareengineering.stackexchange.com/questions/356415", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/1446/" ] }
356,464
If a software/library has some support for the Windows platform they almost always name their directories and variables as win32 . This is most prevalent in C/C++ projects. Even the MinGW project's target triple uses win32 . Is there a reason for this? Why not use a proper name like Windows or Microsoft Windows? Is there a legal snag around the naming choice? This question is not about the API, but the naming convention in use. When a library supports other operating systems, they often use the proper names like linux , freebsd or whatever special support needed. But when it comes to Windows, it's often abbreviated as win32 which seems a bit odd compared to the rest.
Win32 is the customary name for the Windows API. This API specifies how applications can interface with the operating system. It is roughly comparable with the POSIX standard on Unix, but Win32 also covers GUIs and many other features. The Win32 API is not limited to 32-bit Windows installations. From the Windows Dev Center : The Windows application programming interface (API) lets you develop desktop and server applications that run successfully on all versions of Windows while taking advantage of the features and capabilities unique to each version. The Windows API can be used in all Windows-based desktop applications, and the same functions are generally supported on 32-bit and 64-bit Windows. Differences in the implementation of the programming elements depend on the capabilities of the underlying operating system. These differences are noted in the API documentation. Note This was formerly called the Win32 API. The name Windows API more accurately reflects its roots in 16-bit Windows and its support on 64-bit Windows. You do not have to use the Win32 API to develop for Windows. Alternatives are the .NET classes or the Windows RT interface. There technically is a Win64 variant. But it differs from Win32 mostly in the data model (the size of pointers). It is not a distinct set of APIs: The Win64 API environment is almost the same as the Win32 API environment—unlike the major shift from Win16 to Win32. The Win32 and Win64 APIs are now combined and called the Windows API. Using the Windows API, you can compile the same source code to run natively on either 32-bit Windows or 64-bit Windows. To port the application to 64-bit Windows, just recompile the code. The Windows header files are modified so that you can use them for both 32-bit and 64-bit code. ( source ) Because Win64 is not substantially different you will almost never see projects targeting win64 on a source-code level, though newer projects might target winapi instead of the traditional win32 . But for all practical purposes all these names refer to the same API.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/356464", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/281782/" ] }
356,548
I have refactored some code at work lately, and I thought I did a good job. I dropped 980 lines of code to 450 and halved the number of classes. When showing this to my colleagues some did not agree that this was an improvement. They said - "fewer lines of code is not necessarily better" I can see that there could be extreme cases where people write really long lines and/or put everything in a single method to save a few lines, but that is not what I did. The code is in my opinion well structured and simpler to comprehend/maintain due to it being half the size. I'm struggling to see why anyone would want to work with double the code that is required to get a job done, and I'm wondering if anyone feels the same as my colleagues and can make some good cases for having more code over less?
A thin person isn't necessarily healthier than an overweight person. A 980-line children's story is easier to read than a 450-line physics thesis. There are many attributes that determine the quality of your code. Some can be computed directly, like Cyclomatic Complexity and Halstead Complexity. Others are more loosely defined, such as cohesion, readability, understandability, extendability, robustness, correctness, self-documentation, cleanliness, testability and many more. It could be, for example, that while you reduced the overall length of the code, you introduced additional unwarranted complexity and made the code more cryptic. Splitting a long piece of code into tiny methods could be as harmful as it could be beneficial. Ask your colleagues to provide you with specific feedback as to why they think your refactoring efforts produced an undesirable result.
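To give a deliberately contrived illustration of why line count alone says very little, compare these two hypothetical methods, which behave identically:

```csharp
public static class Pricing
{
    // Shorter, but cryptic: magic numbers and no intent revealed.
    public static decimal P(decimal a, int t) => t > 2 ? a * 0.85m : a * 0.95m;

    // Longer, but the names and constants explain themselves.
    public static decimal ApplyLoyaltyDiscount(decimal amount, int yearsAsCustomer)
    {
        const decimal LongTermDiscount = 0.85m;   // 15% off after two years
        const decimal StandardDiscount = 0.95m;   // 5% off otherwise

        return yearsAsCustomer > 2
            ? amount * LongTermDiscount
            : amount * StandardDiscount;
    }
}
```

The second version costs more lines, yet it is the one most teams would rather maintain.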
{ "source": [ "https://softwareengineering.stackexchange.com/questions/356548", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/281938/" ] }
356,709
I am very puzzled by the obsession that many people seem to have with using Microsoft frameworks. I have seen several tutorials and projects (both open and closed source) that seem to utilize all of the additional .NET framework libraries and code when they are not really needed. To be clear, I am not talking about classes in System or anything like that, but rather full blown frameworks like ASP.NET Identity (and former iterations), OWIN, Entity Framework, MVC framework, etc. On stuff that is 100+ pages, I see things like Entity Framework / ASP.NET Identity that make me cringe because of the overhead those libraries introduce into your code and database when it is just so much easier to create your own clean and flexible solutions that don't have bloat in them. I laugh at OWIN actually being used by any serious business application; why would you use a social media login to anything that is supposed to be even remotely secure? These frameworks can only be useful for small (i.e. 3-5 pages) applications that are not really important or critical to businesses. I can hardly think of major software companies or businesses using these frameworks in their enterprise software (probably just the MVC framework because of the Razor/view engine components). What is the point of using these frameworks when they just introduce bloat, complexity, and over engineered code into projects? I could be missing something, but if it takes a few days to write a full database ORM structure and login logic using default classes that is lighter, faster, and more flexible versus using Microsoft frameworks, why not? It seems that things like Entity and Identity would just introduce complexity and over engineering into problems that are very easily solved with some simple analysis, foresight, and design pattern implementation. Do any of you actually use these frameworks in your own projects or enterprise level projects? EDIT: I wanted to slightly clarify myself for any future readers. I am not saying "rewrite all of Entity/OWIN/Identity (or X framework)", but rather asking you to consider whether you really need all of the additional code and dependencies such frameworks introduce. If you have 100+ pages that all interact with X framework (whether it be Entity or whatever), think about how much dependency and coupling you have. If you use a 3rd party library (e.g. HTML/Markup controls, ORM structure, CSS/JS framework, etc.), you will know what I mean when you have to update across a major version change or switch to a different vendor. Even the best decoupling abstraction cannot save you sometimes.
When you start a project and have a particular need, you have a choice: Either you implement your own solution from scratch, Or you use an existent library or framework. When implementing your own solution, you introduce several risks: The needs may evolve, requiring you to constantly write more and more code. Ultimately, the code you've originally written was never expected to be used in a specific way, and needs either a lot of refactoring or a plain rewrite. Popular libraries and frameworks are designed in a way to cover much more needs than you have when you just start a project, which makes them look bloated at the beginning. However, as the requirements change, those libraries show their use by making you adapt their usage with ease, when they are designed well (and many Microsoft's frameworks are designed well). Any new person who joins the project won't be familiar with your implementation. This makes it unnecessarily complex for the newcomers to start working on your code. With popular libraries and frameworks, you don't have this problem. Either the new developer already knows the technology, or she doesn't, in which case, she has at her disposition a lot of in-depth, well-written documentation, a large set of Q&A on StackOverflow, training videos, etc. Developers themselves are usually more inclined to learn a popular technology than to spend a month trying to grasp YourMagnificentORM which, being proprietary, undocumented and poorly written, wouldn't give them any benefit when mentioned on a CV despite all its obvious qualities. When using an existent library or framework, you introduce different risks: The needs may evolve, and the library may not fit the new needs. This is especially important when the library is proprietary or when you're not inclined to contribute to an open source library. The library/framework itself may lose its popularity (example: Silverlight) or the developers of the library may decide to take a course that doesn't fit your needs. Both of those risks can be mitigated by a proper architecture where third-party code is abstracted from your business code. seem to utilize all of the additional .NET framework libraries and code when they are not really needed [...] I am [...] talking about [...] full blown frameworks like ASP.NET Identity (and former iterations), OWIN, Entity Framework, MVC framework, etc. What do you suggest to use instead? If you want to implement OAuth authentication, you might re-implement OAuth protocol, although I'm not sure that spending a few months on that will really benefit your employer. Or you may just use the high quality code Microsoft developers provide you for free, and spend your time implementing actual features. Owin gives you the ability to run your application outside IIS, given that the cost of using Owin in terms of code complexity is close to zero. If you are absolutely sure you will never ever host your application on anything other than IIS, and your team is unfamiliar with Owin and is not willing to learn it, there is indeed no reason to use it. Few teams are in this situation. Entity Framework gives the ability to use a database to developers who are unfamiliar with SQL and databases in general. If you do have a skillful DBA in your team and if every member of the team has no issue writing SQL queries by hand, you don't need Entity Framework. Few teams are in this situation. ASP.NET MVC makes it possible to structure your code using MVC architecture. 
Most developers find it superior to the architecture used by ASP.NET, leading to less code, less coupling, and proper abstractions. You get all those benefits for free (in terms of runtime performance), so the only reason not to do it is when you have a team of developers who absolutely love ASP.NET or when you need to maintain a legacy project which uses ASP.NET. I laugh at OWIN actually being used by any serious business application; why would you use a social media login to anything that is supposed to be even remotely secure? These frameworks can only be useful for small (i.e. 3-5 pages) applications that are not really important or critical to businesses. Owin has nothing to do with OAuth login. Aside from that, the Stack Exchange family of sites uses OAuth authentication, and it also uses ASP.NET MVC. Do you characterize those sites as “small 3-5 pages applications that are not really important or critical to businesses”? What is the point of using these frameworks when they just introduce bloat, complexity, and over engineered code into projects? The goal is exactly that: to reduce bloat, complexity and over-engineered code, by ensuring all the complex stuff is within those tested, reviewed libraries, and not your project. This is the code you don't have to read, maintain, document, review and test. I could be missing something, but if it takes a few days to write a full database ORM structure and login logic using default classes that is lighter, faster, and more flexible versus using Microsoft frameworks, why not? Even properly documenting a basic ORM requires at least a few months. A weekend ORM project may be fun to write at home, but I would hope you won't decide to use one in the projects which are “really important and critical to businesses.” Do any of you actually use these frameworks in your own projects or enterprise level projects? I work for one of the three largest European banks. We use OAuth to consistently provide SSO for every product. We use Owin, because most of the projects are hosted outside IIS. There are a lot of other Microsoft libraries being used, which allow us to keep the code clean and maintainable, and focus on the features, instead of technical stuff such as the internals of WebSockets, REST or WebHooks. By spending a few hours or days learning how to use an existent library, we save a lot of time and money compared to writing all this stuff from scratch and then constantly maintaining it. This is how businesses work. You are, however, absolutely right when you talk about the feeling that simple projects are bloated. It is so easy to add a new package to a project that many small projects tend to include libraries they don't even use, or at least don't need at their scale. Take CSS and JavaScript bundling and minification. For a starter project, there is little need for it, but many tutorials present it as a default choice, transforming the whole system from opt-in to opt-out. While those libraries rarely have a performance impact, they do, however, make the code more complex than it has to be. The same problem, by the way, exists in other communities; for instance, many Node.js apps I've seen tend to have way too many dependencies. In the .NET world, the culprit is not only the ease of adding libraries to a project, but also the fact that Microsoft promotes its ecosystem as something where you get "all the features you need," even if you don't necessarily need them.
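As a footnote to the earlier point about abstracting third-party code away from your business code, here is a minimal sketch of what that can look like in practice. The names are made up, and the EF 6-style DbContext usage is only one possible choice; the point is that business code depends on an interface you own, while the Entity Framework-backed implementation stays at the edge:

```csharp
using System.Data.Entity;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Business code depends only on this interface, never on EF types.
public interface ICustomerRepository
{
    Customer FindById(int id);
    void Add(Customer customer);
}

// The EF-backed implementation lives at the edge of the application and
// can be replaced without touching the code that uses ICustomerRepository.
public class EfCustomerRepository : ICustomerRepository
{
    private readonly DbContext context;

    public EfCustomerRepository(DbContext context)
    {
        this.context = context;
    }

    public Customer FindById(int id)
    {
        return context.Set<Customer>().Find(id);
    }

    public void Add(Customer customer)
    {
        context.Set<Customer>().Add(customer);
    }
}
```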
{ "source": [ "https://softwareengineering.stackexchange.com/questions/356709", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/184167/" ] }
356,772
Most functional languages use linked lists as their primary immutable data structure. Why lists, and not e.g. trees? Trees can also reuse paths, and even model lists.
Because lists are simpler than trees. (You can see this trivially by the fact that a list is a degenerate tree, where every node has only a single child.) The cons list is the simplest possible recursive data structure of arbitrary size. Guy Steele argued during the design of the Fortress programming language that for the massively parallel computations of the future, both our data structures and our control flow should be tree-shaped with multiple branches, not linear as they are now. But for the time being, most of our core data structure libraries were designed with sequential, iterative processing (or tail recursion, it doesn't really matter, they are the same thing) in mind, not parallel processing. Note that e.g. in Clojure, whose data structures were designed specifically for the parallel, distributed, "cloudy" world of today, even arrays (called vectors in Clojure), probably the most "linear" data structure of them all, are actually implemented as trees. So, in short: a cons list is the simplest possible persistent recursive data structure, and there was no need to choose a more complicated "default". Others are of course available as options, e.g. Haskell has arrays, priority queues, maps, heaps, treaps, tries, and everything you could possibly imagine, but the default is the simple cons list.
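The "a list is a degenerate tree" observation is easy to make concrete in a few lines of Haskell (illustrative definitions, not the standard library's):

```haskell
-- The cons list: the simplest recursive structure of arbitrary size.
data List a = Nil | Cons a (List a)

-- A binary tree: the same idea, but each node branches twice.
data Tree a = Leaf | Node (Tree a) a (Tree a)

-- Every list is a tree in which one branch is always empty.
toTree :: List a -> Tree a
toTree Nil         = Leaf
toTree (Cons x xs) = Node Leaf x (toTree xs)
```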
{ "source": [ "https://softwareengineering.stackexchange.com/questions/356772", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/128997/" ] }
356,809
I work on a project that uses (Spring) Dependency Injection for literally everything that is a dependency of a class. We are at a point where the Spring configuration file has grown to about 4000 lines. Not long ago I watched one of Uncle Bob’s talks on YouTube (unfortunately, I could not find the link) in which he recommends injecting only a couple of central dependencies (e.g. factories, database, …) into the Main component, from which they will then be distributed. The advantages of this approach are that it decouples the DI framework from most of the application and that it makes the Spring config cleaner, as the factories will contain a lot more of what was in the config before. On the other hand, this will result in spreading creation logic among many factory classes, and testing might become more difficult. So my question really is what other advantages or disadvantages you see in one approach or the other. Are there any best practices? Thanks a lot for your answers!
As always, It Depends™. The answer depends on the problem one is trying to solve. In this answer, I'll try to address some common motivating forces: Favour smaller code bases If you have 4,000 lines of Spring configuration code, I suppose that the code base has thousands of classes. It's hardly an issue that you can address after the fact, but as a rule of thumb, I tend to prefer smaller applications, with smaller code bases. If you're into Domain-Driven Design , you could, for instance, make a code base per bounded context. I'm basing this advice on my limited experience, since I've written web-based line-of-business code for most of my career. I could imagine that if you're developing a desktop application, or an embedded system, or other, that things are harder to pull apart. While I do realise that this first advice is easily the least practical, I also believe that it's the most important, and that's why I include it. Complexity of code varies non-linearly (possibly exponentially) with the size of the code base. Favour Pure DI While I still realise that this question presents an existing situation, I recommend Pure DI . Don't use a DI Container, but if you do, at least use it to implement convention-based composition . I don't have any practical experience with Spring, but I'm assuming that by configuration file , an XML file is implied. Configuring dependencies using XML is the worst of both worlds. First, you lose compile-time type safety, but you don't gain anything. An XML configuration file can easily be as big as the code it tries to replace. Compared to the problem it purports to address, dependency injection configuration files occupy the wrong place on the configuration complexity clock . The case for coarse-grained dependency injection I can make a case for coarse-grained dependency injection. I can also make a case for fine-grained dependency injection (see next section). If you only inject a few 'central' dependencies, then most classes might look like this: public class Foo { private readonly Bar bar; public Foo() { this.bar = new Bar(); } // Members go here... } This is still fits Design Patterns 's favor object composition over class inheritance , because Foo composes Bar . From a maintainability perspective, this could still be considered maintainable, because if you need to change the composition, you simply edit the source code for Foo . This is hardly less maintainable than dependency injection. In fact, I'd say that it's easier to directly edit the class that uses Bar , instead of having to follow the indirection inherent with dependency injection. In the first edition of my book on Dependency Injection , I make the distinction between volatile and stable dependencies. Volatile dependencies are those dependencies that you should consider injecting. They include Dependencies that must be re-configurable after compilation Dependencies developed in parallel by another team Dependencies with non-deterministic behaviour, or behaviour with side-effects Stable dependencies, on the other hand, are dependencies that behave in well defined manner. In a sense, you could argue that this distinction makes the case for coarse-grained dependency injection, although I must admit that I didn't entirely realise that when I wrote the book. From a testing perspective, however, this makes unit testing harder. You can no longer unit test Foo independent of Bar . As J.B. Rainsberger explains , integration tests suffer from a combinatoric explosion of complexity. 
You'll literally have to write tens of thousands of test cases if you want to cover all paths through an integration of even 4-5 classes. The counter-argument to that is that often, your task isn't to program a class. Your task is to develop a system that solves some specific problems. This is the motivation behind Behaviour-Driven Development (BDD). Another view on this is presented by DHH, who claims that TDD leads to test-induced design damage. He also favours coarse-grained integration testing. If you take this perspective on software development, then coarse-grained dependency injection makes sense. The case for fine-grained dependency injection Fine-grained dependency injection, on the other hand, could be described as inject all the things! My main concern regarding coarse-grained dependency injection is the criticism expressed by J.B. Rainsberger. You can't cover all code paths with integration tests, because you need to write literally thousands, or tens of thousands, of test cases to cover all code paths. The proponents of BDD will counter with the argument that you don't need to cover all code paths with tests. You only need to cover those that produce business value. In my experience, however, all the 'exotic' code paths will also execute in a high-volume deployment, and if not tested, many of those will have defects and cause run-time exceptions (often null-reference exceptions). This has caused me to favour fine-grained dependency injection, because it enables me to test the invariants of all objects in isolation. Favour functional programming While I lean towards fine-grained dependency injection, I've shifted my emphasis towards functional programming, among other reasons because it's intrinsically testable. The more you move towards SOLID code, the more functional it becomes. Sooner or later, you may as well take the plunge. Functional architecture is Ports and Adapters architecture, and dependency injection is also an attempt at Ports and Adapters. The difference, however, is that a language like Haskell enforces that architecture via its type system. Favour statically typed functional programming At this point, I've essentially given up on object-oriented programming (OOP), although many of the problems of OOP are intrinsically coupled to mainstream languages like Java and C# more than to the concept itself. The problem with mainstream OOP languages is that it's close to impossible to avoid the combinatoric explosion problem, which, untested, leads to run-time exceptions. Statically typed languages like Haskell and F#, on the other hand, enable you to encode many decision points in the type system. This means that instead of having to write thousands of tests, the compiler will simply tell you whether you've dealt with all possible code paths (to an extent; it's no silver bullet). Also, dependency injection isn't functional. True functional programming must reject the entire notion of dependencies. The result is simpler code.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/356809", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/282310/" ] }
356,887
Our documentation team of about ten people recently moved from SVN to Git. In SVN, everybody worked on master -- a model I've always hated, but I wasn't able to bring about that change. As part of the move to Git we've agreed to fix that, but we can't do it just yet (waiting on build changes that will allow builds from arbitrary branches). Meanwhile, everybody is working on master. Yes I know this is terrible, believe me. We're seeing a lot more hiccups now than when we were using SVN, some of which are caused by Git's two-stage model (local and remote). Sometimes people commit but fail to push, or they pull and get conflicts with their pending local changes. Yesterday somebody clobbered recent changes -- somehow -- with a merge gone wrong, which I think was the merge that Git does when you pull and have outstanding changes. (He has not been able to tell me exactly what he did, and because he's using a GUI I can't just inspect his shell history.) As the most-proficient Git user (read: I've used it before, though not for anything super-complicated), I'm the person setting policy, teaching the tools, and cleaning up messes. What changes can I make to how we are using the tools to make a shared, active master less error-prone until we can switch to doing development on branches? The team is using Tortoise Git on Windows. We're using Tortoise Git because we used Tortoise SVN before. ( I personally use the command line under Cygwin for some operations, but the team has made it clear they need a GUI and we're going with this one.) Answers should work with this tool, not propose replacements. Tortoise Git has "Commit & Push" available as a single operation and I've told them to always do that. However, it's not atomic -- it can happen that the commit (which after all is local) works just fine but the push doesn't (say, due to a conflict, or a network issue). When that happens they get an ambiguous error; I've told them to check the BitBucket commit log if they have any doubts about a recent commit and, if they don't see it, to push. (And to resolve the conflict if that's the problem, or ask for help if they don't know what to do.) The team already has the good habit of "pull early and often". However, it appears that pull can cause conflicts, which I think is new? If not new, much more frequent than in SVN. I've heard that I can change how Git does pulls (rebase instead of merge), but I don't have a good understanding of the trade-offs there (or how to do it in our environment). The server is BitBucket (not Github). I have full administrative control over our repository but none on the server more generally. None of that is changeable. The source files are XML. There are also graphics files, which everybody knows you can't merge, but we also almost never have collisions there. The merge conflicts come from the XML files, not the graphics. What changes can I make to our use of Git to make sharing master go more smoothly for the team until we can move to using feature branches with reviewed, test-validated pull requests?
There are three main things to remember when you are working out of the same branch as someone else: Never use --force unless you really know what you are doing. Either commit or stash your work in progress before every pull . It usually goes easier if you pull right before a push . Aside from that, I will point out that with distributed version control it doesn't matter if your "official" repo uses branches or not. That has no bearing whatsoever on what individual users do in their local repos. I used to use git to get local branches when my company used a completely different central VCS. If they create local branches for their features and make merging mistakes to their local master , it's a lot easier to fix without going into the reflog or some other magic.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/356887", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/124448/" ] }
357,052
The Clean Architecture suggests to let a use case interactor call the actual implementation of the presenter (which is injected, following the DIP) to handle the response/display. However, I see people implementing this architecture, returning the output data from the interactor, and then let the controller (in the adapter layer) decide how to handle it. Is the second solution leaking application responsibilities out of the application layer, in addition to not clearly defining input and output ports to the interactor? Input and output ports Considering the Clean Architecture definition, and especially the little flow diagram describing relationships between a controller, a use case interactor, and a presenter, I'm not sure if I correctly understand what the "Use Case Output Port" should be. Clean architecture, like hexagonal architecture, distinguishes between primary ports (methods) and secondary ports (interfaces to be implemented by adapters). Following the communication flow, I expect the "Use Case Input Port" to be a primary port (thus, just a method), and the "Use Case Output Port" an interface to be implemented, perhaps a constructor argument taking the actual adapter, so that the interactor can use it. Code example To make a code example, this could be the controller code: Presenter presenter = new Presenter(); Repository repository = new Repository(); UseCase useCase = new UseCase(presenter, repository); useCase->doSomething(); The presenter interface: // Use Case Output Port interface Presenter { public void present(Data data); } Finally, the interactor itself: class UseCase { private Repository repository; private Presenter presenter; public UseCase(Repository repository, Presenter presenter) { this.repository = repository; this.presenter = presenter; } // Use Case Input Port public void doSomething() { Data data = this.repository.getData(); this.presenter.present(data); } } On the interactor calling the presenter The previous interpretation seems to be confirmed by the aforementioned diagram itself, where the relation between the controller and the input port is represented by a solid arrow with a "sharp" head (UML for "association", meaning "has a", where the controller "has a" use case), while the relation between the presenter and the output port is represented by a solid arrow with a "white" head (UML for "inheritance", which is not the one for "implementation", but probably that's the meaning anyway). Furthermore, in this answer to another question , Robert Martin describes exactly a use case where the interactor calls the presenter upon a read request: Clicking on the map causes either the placePinController to be invoked. It gathers the location of the click, and any other contextual data, constructs a placePinRequest data structure and passes it to the PlacePinInteractor which checks the location of the pin, validates it if necessary, create a Place entity to record the pin, constructs a EditPlaceReponse object and passes it to the EditPlacePresenter which brings up the place editor screen. To make this play well with MVC, I could think that the application logic that traditionally would go into the controller, here is moved to the interactor, because we don't want any application logic to leak outside the application layer. 
The controller in the adapters layer would just call the interactor, and maybe do some minor data format conversion in the process: The software in this layer is a set of adapters that convert data from the format most convenient for the use cases and entities, to the format most convenient for some external agency such as the Database or the Web. from the original article, talking about Interface Adapters. On the interactor returning data However, my problem with this approach is that the use case must take care of the presentation itself. Now, I see that the purpose of the Presenter interface is to be abstract enough to represent several different types of presenters (GUI, Web, CLI, etc.), and that it really just means "output", which is something a use case might very well have, but still I'm not totally confident with it. Now, looking around the Web for applications of the clean architecture, I seem to only find people interpreting the output port as a method returning some DTO. This would be something like: Repository repository = new Repository(); UseCase useCase = new UseCase(repository); Data data = useCase.getData(); Presenter presenter = new Presenter(); presenter.present(data); // I'm omitting the changes to the classes, which are fairly obvious This is attractive because we're moving the responsibility of "calling" the presentation out of the use case, so the use case doesn't concern itself with knowing what to do with the data anymore, rather just with providing the data. Also, in this case we're still not breaking the dependency rule, because the use case still doesn't know anything about the outer layer. However, the use case doesn't control the moment when the actual presentation is performed anymore (which may be useful, for example to do additional stuff at that point, like logging, or to abort it altogether if necessary). Also, notice that we lost the Use Case Input Port, because now the controller is only using the getData() method (which is our new output port). Furthermore, it looks to me that we're breaking the "tell, don't ask" principle here, because we're asking the interactor for some data to do something with it, rather than telling it to do the actual thing in the first place. To the point So, is any of these two alternatives the "correct" interpretation of the Use Case Output Port according to the Clean Architecture? Are they both viable?
The Clean Architecture suggests to let a use case interactor call the actual implementation of the presenter (which is injected, following the DIP) to handle the response/display. However, I see people implementing this architecture, returning the output data from the interactor, and then let the controller (in the adapter layer) decide how to handle it. That's certainly not Clean, Onion, or Hexagonal Architecture. That is this: Not that MVC has to be done that way You can use many different ways to communicate between modules and call it MVC. Telling me something uses MVC doesn't really tell me how the components communicate. That isn't standardized. All it tells me is that there are at least three components focused on their three responsibilities. Some of those ways have been given different names: And every one of those can justifiably be called MVC. Anyway, none of those really capture what the buzzword architectures (Clean, Onion, and Hex) are all asking you to do. Add the data structures being flung around (and flip it upside down for some reason) and you get: One thing that should be clear here is that the response model does not go marching through the controller. If you are eagle-eyed, you might have noticed that only the buzzword architectures completely avoid circular dependencies. That means the impact of a code change won't spread by cycling through components. The change will stop when it hits code that doesn't care about it. I wonder if they turned it upside down so that the flow of control would go clockwise. More on that, and these "white" arrow heads, later. Is the second solution leaking application responsibilities out of the application layer, in addition to not clearly defining input and output ports to the interactor? Since communication from Controller to Presenter is meant to go through the application "layer", then yes, making the Controller do part of the Presenter's job is likely a leak. This is my chief criticism of VIPER architecture. Why separating these is so important could probably be best understood by studying Command Query Responsibility Segregation. Input and output ports Considering the Clean Architecture definition, and especially the little flow diagram describing relationships between a controller, a use case interactor, and a presenter, I'm not sure if I correctly understand what the "Use Case Output Port" should be. It's the API that you send output through, for this particular use case. It's no more than that. The interactor for this use case doesn't need to know, nor want to know, if output is going to a GUI, a CLI, a log, or an audio speaker. All the interactor needs to know is the very simplest API possible that will let it report the results of its work. Clean architecture, like hexagonal architecture, distinguishes between primary ports (methods) and secondary ports (interfaces to be implemented by adapters). Following the communication flow, I expect the "Use Case Input Port" to be a primary port (thus, just a method), and the "Use Case Output Port" an interface to be implemented, perhaps a constructor argument taking the actual adapter, so that the interactor can use it. The reason the output port is different from the input port is that it must not be OWNED by the layer that it abstracts. That is, the layer that it abstracts must not be allowed to dictate changes to it. Only the application layer and its author should decide that the output port can change.
This is in contrast to the input port, which is owned by the layer it abstracts. Only the application layer author should decide whether its input port should change. Following these rules preserves the idea that the application layer, or any inner layer, does not know anything at all about the outer layers. On the interactor calling the presenter The previous interpretation seems to be confirmed by the aforementioned diagram itself, where the relation between the controller and the input port is represented by a solid arrow with a "sharp" head (UML for "association", meaning "has a", where the controller "has a" use case), while the relation between the presenter and the output port is represented by a solid arrow with a "white" head (UML for "inheritance", which is not the one for "implementation", but probably that's the meaning anyway). The important thing about that "white" arrow is that it lets you do this: You can let the flow of control go in the opposite direction of dependency! That means the inner layer doesn't have to know about the outer layer and yet you can dive into the inner layer and come back out! Doing that has nothing to do with using the "interface" keyword. You could do this with an abstract class. Heck, you could do it with a (ick) concrete class so long as it can be extended. It's simply nice to do it with something that focuses only on defining the API that Presenter must implement. The open arrow is only asking for polymorphism. What kind is up to you. Why reversing the direction of that dependency is so important can be learned by studying the Dependency Inversion Principle. I mapped that principle onto these diagrams here. On the interactor returning data However, my problem with this approach is that the use case must take care of the presentation itself. Now, I see that the purpose of the Presenter interface is to be abstract enough to represent several different types of presenters (GUI, Web, CLI, etc.), and that it really just means "output", which is something a use case might very well have, but still I'm not totally confident with it. No that's really it. The point of making sure the inner layers don't know about the outer layers is that we can remove, replace, or refactor the outer layers confident that doing so won't break anything in the inner layers. What they don't know about won't hurt them. If we can do that, we can change the outer ones to whatever we want. Now, looking around the Web for applications of the clean architecture, I seem to only find people interpreting the output port as a method returning some DTO. This would be something like: Repository repository = new Repository(); UseCase useCase = new UseCase(repository); Data data = useCase.getData(); Presenter presenter = new Presenter(); presenter.present(data); // I'm omitting the changes to the classes, which are fairly obvious This is attractive because we're moving the responsibility of "calling" the presentation out of the use case, so the use case doesn't concern itself with knowing what to do with the data anymore, rather just with providing the data. Also, in this case we're still not breaking the dependency rule, because the use case still doesn't know anything about the outer layer. The problem here is that now whatever knows how to ask for the data also has to be the thing that accepts the data. Before, the Controller could call the Usecase Interactor blissfully unaware of what the Response Model would look like, where it should go, and, heh, how to present it.
Again, please study Command Query Responsibility Segregation to see why that's important. However, the use case doesn't control the moment when the actual presentation is performed anymore (which may be useful, for example to do additional stuff at that point, like logging, or to abort it altogether if necessary). Also, notice that we lost the Use Case Input Port, because now the controller is only using the getData() method (which is our new output port). Furthermore, it looks to me that we're breaking the "tell, don't ask" principle here, because we're asking the interactor for some data to do something with it, rather than telling it to do the actual thing in the first place. Yes! Telling, not asking, will help keep this object oriented rather than procedural. #To the point So, is any of these two alternatives the "correct" interpretation of the Use Case Output Port according to the Clean Architecture? Are they both viable? Anything that works is viable. But I wouldn't say that the second option you presented faithfully follows Clean Architecture. It might be something that works. But it's not what Clean Architecture asks for.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/357052", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/122238/" ] }
357,084
I am a religious person and make efforts not to commit sins. That is why I tend to write small (smaller than that, to reword Robert C. Martin) functions to comply with the several commandments ordered by the Clean Code bible. But while checking some stuff, I landed on this post, below which I read this comment: Remember that the cost of a method call can be significant, depending on the language. There's almost always a tradeoff between writing readable code and writing performant code. Under what conditions is this quoted statement still valid nowadays, given the rich industry of performant modern compilers? That is my only question. And it is not about whether I should write long or small functions. I just highlight that your feedback may - or may not - contribute to altering my attitude and leave me unable to resist the temptation of blasphemers.
It depends on your domain. If you are writing code for a low-power microcontroller, then method call cost might be significant. But if you are creating a normal website or application, then method call cost will be negligible compared to the rest of the code. In that case, it will always be more worthwhile to focus on the right algorithms and data structures instead of micro-optimizations like method calls. There is also the question of the compiler inlining the methods for you. Most compilers are intelligent enough to inline functions where it is possible. And last, there is the golden rule of performance: ALWAYS PROFILE FIRST. Don't write "optimized" code based on assumptions. If you are unsure, write both cases and see which is better.
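As a sketch of what "profile first" means in practice, here is a deliberately crude timing harness. Micro-benchmarks like this are easy to get wrong because of JIT warm-up and dead-code elimination, so treat it only as a starting point and prefer a real profiler or a benchmarking library such as BenchmarkDotNet for anything serious:

```csharp
using System;
using System.Diagnostics;

class Program
{
    // A tiny method; in release builds the JIT will usually inline it anyway.
    static int AddOne(int value) => value + 1;

    static void Main()
    {
        const int iterations = 100000000; // one hundred million calls

        var stopwatch = Stopwatch.StartNew();
        long sum = 0;
        for (int i = 0; i < iterations; i++)
            sum += AddOne(i);
        stopwatch.Stop();

        // Printing the result keeps the loop from being optimised away entirely.
        Console.WriteLine($"sum = {sum}, elapsed = {stopwatch.ElapsedMilliseconds} ms");
    }
}
```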
{ "source": [ "https://softwareengineering.stackexchange.com/questions/357084", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/214828/" ] }
357,163
I am currently in the process of trying to master C#, so I am reading Adaptive Code via C# by Gary McLean Hall. He writes about patterns and anti-patterns. In the implementations versus interfaces part he writes the following: Developers who are new to the concept of programming to interfaces often have difficulty letting go of what is behind the interface. At compile time, any client of an interface should have no idea which implementation of the interface it is using. Such knowledge can lead to incorrect assumptions that couple the client to a specific implementation of the interface. Imagine the common example in which a class needs to save a record in persistent storage. To do so, it rightly delegates to an interface, which hides the details of the persistent storage mechanism used. However, it would not be right to make any assumptions about which implementation of the interface is being used at run time. For example, casting the interface reference to any implementation is always a bad idea. It might be the language barrier, or my lack of experience, but I don't quite understand what that means. Here is what I understand: I have a fun free-time project to practice C#. There I have a class: public class SomeClass... This class is used in a lot of places. While learning C#, I read that it is better to abstract with an interface, so I made the following: public interface ISomeClass <- Here I made a "contract" of all the public methods and properties SomeClass needs to have. public class SomeClass : ISomeClass <- Same as before. All implementation here. So I went through all the SomeClass references and replaced them with ISomeClass. Except at construction, where I wrote: ISomeClass myClass = new SomeClass(); Am I understanding correctly that this is wrong? If yes, why so, and what should I do instead?
Abstracting your class into an interface is something you should consider if and only if you intend to write other implementations of said interface, or the strong possibility of doing so in the future exists. So perhaps SomeClass and ISomeClass make a bad example, because it would be like having an OracleObjectSerializer class and an IOracleObjectSerializer interface. A more accurate example would be something like an OracleObjectSerializer and an IObjectSerializer. The only place in your program where you care what implementation to use is when the instance is created. Sometimes this is further decoupled by using a factory pattern. Everywhere else in your program should use IObjectSerializer without caring how it works. Let's suppose for a second now that you also have a SQLServerObjectSerializer implementation in addition to OracleObjectSerializer. Now suppose you need to set some special property, and the method for doing so is only present in OracleObjectSerializer and not SQLServerObjectSerializer. There are two ways to go about it: the incorrect way and the Liskov substitution principle approach. The incorrect way The incorrect way, and the very instance referred to in your book, would be to take an instance of IObjectSerializer, cast it to OracleObjectSerializer and then call the method setProperty available only on OracleObjectSerializer. This is bad because even though you may know an instance to be an OracleObjectSerializer, you're introducing yet another point in your program where you care to know what implementation it is. When that implementation changes, and presumably it will sooner or later if you have multiple implementations, best case scenario, you will need to find all these places and make the correct adjustments. Worst case scenario, you cast an IObjectSerializer instance to an OracleObjectSerializer and you receive a runtime failure in production. Liskov Substitution Principle approach Liskov said that, if things are done properly, you should never need methods like setProperty that exist only on the implementation class, as in the case of my OracleObjectSerializer. If you abstract a class OracleObjectSerializer to IObjectSerializer, you should encompass all the methods necessary to use that class, and if you can't, then something is wrong with your abstraction (trying to make a Dog class work as an IPerson implementation, for instance). The correct approach would be to provide a setProperty method on IObjectSerializer. Similar methods in SQLServerObjectSerializer would ideally work through this setProperty method. Better still, you standardize property names through an enum, where each implementation translates that enum to the equivalent in its own database terminology. Put simply, using an ISomeClass is only half of it. You should never need to cast it outside the method that is responsible for its creation. To do so is almost certainly a serious design mistake.
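A small sketch of the difference, using hypothetical members in the spirit of the example above (the exact names and enum values are made up for illustration):

```csharp
public enum SerializerProperty
{
    BatchSize,
    SchemaName
}

public interface IObjectSerializer
{
    // The capability lives on the abstraction, so no caller ever needs to
    // know which concrete serializer it is holding.
    void SetProperty(SerializerProperty property, string value);
    void Serialize(object record);
}

public class RecordSaver
{
    private readonly IObjectSerializer serializer;

    public RecordSaver(IObjectSerializer serializer)
    {
        this.serializer = serializer;
    }

    public void Save(object record)
    {
        // Wrong: ((OracleObjectSerializer)serializer).SetOracleBatchSize(100);
        // Right: talk to the interface only.
        serializer.SetProperty(SerializerProperty.BatchSize, "100");
        serializer.Serialize(record);
    }
}
```

The cast in the commented-out line is exactly the kind of knowledge of a concrete implementation that the quoted passage warns against.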
{ "source": [ "https://softwareengineering.stackexchange.com/questions/357163", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/275973/" ] }
357,233
I noticed something strange after compiling this code on my machine: #include <stdio.h> int main() { printf("Hello, World!\n"); int a,b,c,d; int e,f,g; long int h; printf("The addresses are:\n %0x \n %0x \n %0x \n %0x \n %0x \n %0x \n %0x \n %0x", &a,&b,&c,&d,&e,&f,&g,&h); return 0; } The result is the following. Notice that between every int address there is a 4-byte difference. However between the last int and the long int there is a 12-byte difference: Hello, World! The addresses are: da54dcac da54dca8 da54dca4 da54dca0 da54dc9c da54dc98 da54dc94 da54dc88
It didn't take 12 bytes, it only took 8. However, the default alignment for an 8-byte long int on this platform is 8 bytes. As such, the compiler needed to move the long int to an address that's divisible by 8. The "obvious" address, da54dc8c, isn't divisible by 8, hence the 12-byte gap. You should be able to test this. If you add another int prior to the long, so there are 8 of them, you should find that the long int will be aligned correctly without a move. Now it'll be only 8 bytes from the previous address. It's probably worth pointing out that, although this test should work, you shouldn't rely on the variables being organised this way. A C compiler is allowed to do all sorts of funky stuff to try to make your program run quickly, including re-ordering variables (with some caveats).
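One way to see the padding directly is with struct members, whose relative layout (unlike that of locals) the compiler may not reorder. This assumes a platform where the 8-byte integer type needs 8-byte alignment, which is typical on x86-64; long long is used instead of long int so the example behaves the same under both the Windows and Linux data models:

```c
#include <stdio.h>
#include <stddef.h>

struct seven_then_eight {
    int a, b, c, d, e, f, g;    /* 7 * 4 = 28 bytes                     */
    long long h;                /* 8-byte alignment -> 4 bytes padding  */
};

struct eight_then_eight {
    int a, b, c, d, e, f, g, x; /* 8 * 4 = 32 bytes, already aligned    */
    long long h;                /* no padding needed                    */
};

int main(void)
{
    printf("offset of h after 7 ints: %zu\n",
           offsetof(struct seven_then_eight, h));  /* typically 32, not 28 */
    printf("offset of h after 8 ints: %zu\n",
           offsetof(struct eight_then_eight, h));  /* typically 32 as well */
    return 0;
}
```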
{ "source": [ "https://softwareengineering.stackexchange.com/questions/357233", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/223002/" ] }
357,343
I'm a software intern, and I am assigned bugs to fix as well as features to add to the software. When I add features, everything works well. My problem is more with fixing bugs. I'm working on an extremely large codebase (spanning millions of lines) with poor documentation (there are no comments, tons of magic numbers, tight coupling between different classes, etc.). The only documentation provided is a Word document of 8 or so pages. But I still feel like I can do a better job with bug fixing. I'll give an example of a situation I've encountered in which I feel I could have done better: I was assigned a bug to fix regarding extents calculations for a specific type of object (computer graphics). I found the issue behind the bug and why it was caused: the program populated a data structure (a contiguous array) with memory representing a 3D Cartesian point it seemingly should not have (and therefore this point was used in extent calculations). The bug was indeed caused by this. HOWEVER, another piece of code (in a different class) used pointer arithmetic to get this point and used it for another feature, to make the mouse snap near this point when a certain feature in the software was enabled. Since I removed the point, I fixed the bug I was assigned yet caused another bug in the software. What can I do to keep things like this from happening? How can I improve? Is there some process I'm missing?
You cannot be solely responsible for these kinds of defects. You are human, and it is impossible to think about large systems as a whole. To help prevent "regression bugs" -- bugs that are unintentionally created when modifying the system, you can do the following: Develop a comprehensive suite of automated regression tests. Hopefully your system has a large suite of tests. Every time a bug is discovered in your system, you should write an automated test that reproduces the bug. Then fix the bug. If the system ever regresses, your "regression test" will break and notify the developer. Inspect the code. Whenever you fix a bug, it's a good idea to inspect the usages of the function you've changed to try to identify any breaking changes you've introduced Write more tests. If when inspecting the code, you identify code that is not covered by any automated tests, this is a good opportunity to add some. Familiarize yourself with the system, and work with software a long time. Practice makes perfect. Nobody expects a developer to not introduce bugs into a brand new system or when they're an intern. We've all done it.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/357343", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/121022/" ] }
357,589
I have heard from a former colleague that not all bugs need to be fixed, because as you go down the priority list of bugs, the use case which causes that bug becomes more obscure, or the customer satisfaction gained gets lower. But you still have to spend considerable time on fixing that bug. In an effort to convince our product owner about this concept, I could not find any good resources. All I could find was discussions on whether there is a marginal cost in software development or not. Is there really marginal benefit in fixing bugs? Is there a different term that explains this concept?
From a business perspective, a bug fix is no different from a feature request. It has a certain cost in development time, and it has a certain value for customers. If a bug is non-critical, it can totally make good business sense to prioritize a valuable feature above the bugfix. But from a technical perspective, bugs may be more critical, because they indicate an error in a foundation which other code might use or build on, in which case the error is "contagious" and adds cost to future maintenance. So not fixing a bug is a technical debt which requires management, while not implementing a feature does not really have an ongoing cost. But the level of technical debt incurred by a bug very much depends on the nature of the bug. All these factors should be taken into consideration when prioritizing. As for whether there is a marginal benefit to fixing bugs: this is a given. Since not all bugs are equal in severity, you naturally prioritize the most important bugs first. So the more bugs you fix, the lower the marginal value of fixing the next one. But whether it ever reaches the level where fixing the bug is not worth the effort is a business decision rather than a technical decision.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/357589", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/282208/" ] }
357,745
I just started my first job as a software developer over a month ago. Everything I have learned about OOP, SOLID, DRY, YAGNI, design patterns, SRP, etc. can be thrown out the window. They use C# .NET Webforms and do almost everything within the Code Behind, with very few external classes, and definitely nothing that could be called objects. They do use custom controls and reuse them. About the only objects in use are the ones from Entity Framework. They reuse the Code Behinds for each client. They have methods which are 400 lines long, doing all types of stuff. For new clients, they take the aspx and the aspx.cs, strip out the client code, and start adding the new client-specific code. Their first excuse would be that it adds additional maintenance, and more code is more maintenance. It is a small shop of three developers, including myself. One developer has over 30 years of experience and the other has 20+ years of experience. One used to be a game developer and the other has always worked in C and C++. How common is this within the software industry? How can I ensure that I stay on top of OOP and the related principles? I practice in my spare time, and I feel like I really need to work under a more experienced developer to get better at OOP.
The principles that you cited in your question are just that... principles. They are not mandates, laws or orders. While the people who came up with these principles are very smart, they are not absolute authorities. They are just people offering their insights and guidance. There is no "correct" way to program. This is evidenced by the fact that the "accepted" way we do it has changed, and continues to change, radically over time. Shipping a product can often take precedence over doing it the "right" way. This is such a prevalent practice that it has a name: Technical Debt. Some software architectures in common use are not ideal. Best practices are evolving away from large, monolithic applications towards loosely-coupled collections of modules. Context is important. Many architectural principles only prove their worth when you're working with large programs or specific domains. For example, inheritance is mostly useful for UI hierarchies and other structures that benefit from deeply-nested, tightly-coupled arrangements. So how do you follow a "right" path, a principled path, so that you can emerge from the wilderness? Study appropriateness, rather than correctness. The "right" way to do anything in software development is the one that most effectively meets your specific requirements. Study tradeoffs. Everything in software development is a tradeoff. Do you want more speed or less memory usage? Do you want a very expressive programming language with few practitioners, or a less expressive language that many developers know? Study timelessness. Some principles have stood the test of time and will always be relevant. Type systems are universal. First-class functions are universal. Data structures are universal. Learn pragmatism. Being practical is important. Mathematical purity, crystal-cathedral architectures and ivory-tower principles are useless if you cannot ship. Be a craftsman, not a zealot. The very best software developers are the ones who know the rules, and then know how to break them when it makes sense to do so. They're the ones who still know how to solve problems and think for themselves. They use principles to inform and guide their choices, not dictate them. Write code. Lots of it. Software design principles are premature optimizations until you've written a lot of code and developed an instinct for how to apply them correctly.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/357745", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/234341/" ] }
357,900
What is a correct format for a geographical address/location which is a good fit for any address on Earth? At the moment I have: country, city, street, number, text data (for simplicity), zip, lat/lng. But I believe I can improve it: there might be a state/region of a country, or something like an area. Or no area/region/state at all, say, in Singapore or Hong Kong. There might be no street, but a road or boulevard or something else. The number of a building might be compound. There might be a floor. A room number. Etc.
Google has developed a library that helps validate postal addresses for every country in the world, which you can use to design a schema to store this data. Look for the most common required fields across addresses from your targeted customer base to get started, and as you identify further countries with different requirements you can continue to adjust your schema.
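As a starting point, here is one possible shape, loosely modelled on the field concepts used by Google's address metadata (administrative area, locality, dependent locality, sorting code), with almost everything optional so that any country's conventions can fit. The names below are only a suggestion:

```csharp
using System.Collections.Generic;

public class PostalAddress
{
    public string CountryCode { get; set; }        // ISO 3166-1 alpha-2, e.g. "SG"
    public string AdministrativeArea { get; set; } // state / province / region, optional
    public string Locality { get; set; }           // city / town, optional
    public string DependentLocality { get; set; }  // district / suburb, optional
    public string PostalCode { get; set; }         // optional: not every country has one
    public string SortingCode { get; set; }        // e.g. CEDEX in France, optional
    public List<string> AddressLines { get; set; } // street, number, building, floor, room
    public string Recipient { get; set; }          // person or organisation
    public double? Latitude { get; set; }          // optional geocoding result
    public double? Longitude { get; set; }
}
```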
{ "source": [ "https://softwareengineering.stackexchange.com/questions/357900", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/284035/" ] }
357,937
According to PyCharm (and thus, I assume, according to PEP) I should declare all instance attributes directly inside __init__. In my case this does not seem suitable to me. I have something like: class Handler(object): def __init__(self): self.start_index = # parse # Do stuff def parse(self): self.parse_header() # Do stuff self.parse_body() def parse_header(self): raise NotImplementedError def parse_body(self): raise NotImplementedError class StringHandler(Handler): def parse_header(self): self.string_index = # parse self.string_count = # parse self.style_index = # parse self.style_count = # parse # More here def parse_body(self): self.strings = # parse self.styles = # parse # More here class IntHandler(Handler): def parse_header(self): self.int_index = # parse self.int_count = # parse # More here def parse_body(self): self.ints = # parse # More here Given that there are objects (quite a lot of them) that all derive from the Handler class, the structure of each object is quite obvious to anyone who reads the code. To me it seems like unnecessary clutter to declare __init__ just for the sake of calling the parent __init__ and declaring a couple of attributes as None. Usually the parse functions just parse whatever is needed and store it in attributes, so adding such an __init__ would sometimes almost double the size of the class (code-wise). edit: I searched for this problem, but I did not find a suitable answer. For example, while searching Stack Exchange, the questions were either too broad to get a useful answer (How would you know if you've written readable and easily maintainable code?) or asking a different question (Is it a good practice to declare instance variables as None in a class in Python?)
Yes, you should assign all attributes of your object in the __init__ method. The most important reason to do this is tool support. Many Python IDEs can see the assignments in the __init__ and use those attributes for type-based autocompletion. Sphinx-doc can list those attributes in your reference documentation. There's also a more fundamental reason: Simple is good, so don't create complicated objects that change their shape during execution. Given a StringHandler instance I may or may not be able to read the handler.strings attribute. If I try that before handler.parse() has been executed, I will get an AttributeError. Afterwards, I would get the value. To avoid exceptions, consumers of your class would have to use getattr(handler, 'strings', None) with a default value, which is quite tedious. In general, constructors should create fully-initialized objects that are immediately usable. In some cases it is necessary to create mutable objects that have an empty, default state, and later transition to a state where they contain a value. However, this should be the exception and is rarely a good design. See also Are init() methods a code smell? and Start Method vs. Setting up everything in constructor, which discuss this issue for Java. In your case the parse() method is such a two-phase initialization method that must be called before the object can really be used. This raises the questions: What is the point of these objects, if they're just a container for whatever values the parse() function produces? As you are using them, you might as well use a dict instead of a class. Is an empty Handler meaningful in any way? Why is the initialization in parse() separate from the other initialization? Or are you using those attributes only for communication between the parse_x functions, not for external consumers? Possible alternative designs: The handlers specify how a field should be parsed, but do not actually contain the value. The parse method returns the value. Because the handlers do not contain any data, it may not be necessary to use classes for this – simple functions would be sufficient. The handlers represent a parsed value. There is no “empty” state. The parsing either happens in the constructor, or in separate functions. Because these objects are just dumb structs, you could define them via namedtuple() when parsing is separate.
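A minimal sketch of the first recommendation applied to the classes from the question (the names come from the question; the parsing bodies are stand-ins):

```python
class Handler:
    """Minimal stand-in for the base class from the question."""

    def parse(self):
        self.parse_header()
        self.parse_body()

    def parse_header(self):
        raise NotImplementedError

    def parse_body(self):
        raise NotImplementedError


class StringHandler(Handler):
    def __init__(self):
        # Every attribute is declared in one place, so readers, IDEs and
        # Sphinx can see the object's full shape without tracing through
        # the parse_* methods.
        self.string_index = None
        self.string_count = None
        self.strings = []
        self.styles = []

    def parse_header(self):
        self.string_index = 0  # real parsing goes here
        self.string_count = 0

    def parse_body(self):
        self.strings = []      # real parsing goes here
        self.styles = []
```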
{ "source": [ "https://softwareengineering.stackexchange.com/questions/357937", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/284089/" ] }