233,053
In Java 8, interfaces can contain implemented methods, static methods, and the so-called "default" methods (which the implementing classes do not need to override). In my (probably naive) view, there was no need to violate interfaces like this. Interfaces have always been a contract you must fulfill, and this is a very simple and pure concept. Now it is a mix of several things. In my opinion: static methods do not belong to interfaces. They belong to utility classes. "default" methods shouldn't have been allowed in interfaces at all. You could always use an abstract class for this purpose. In short: Before Java 8: You could use abstract and regular classes to provide static and default methods. The role of interfaces is clear. All the methods in an interface should be overridden by implementing classes. You can't add a new method to an interface without modifying all the implementations, but this is actually a good thing. After Java 8: There's virtually no difference between an interface and an abstract class (other than multiple inheritance). In fact, you can emulate a regular class with an interface. When programming the implementations, programmers may forget to override the default methods. There is a compilation error if a class tries to implement two or more interfaces having a default method with the same signature. By adding a default method to an interface, every implementing class automatically inherits this behavior. Some of these classes might not have been designed with that new functionality in mind, and this can cause problems. For instance, if someone adds a new default method default void foo() to an interface Ix, then the class Cx implementing Ix and having a private foo method with the same signature does not compile. What are the main reasons for such major changes, and what new benefits (if any) do they add?
A good motivating example for default methods is in the Java standard library, where you now have list.sort(ordering); instead of Collections.sort(list, ordering); I don't think they could have done that otherwise: without a default method, every implementation of List would have needed its own identical implementation of List.sort.
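To make the mechanism concrete, here is a minimal, hypothetical sketch (the interface and the class are invented for illustration, not taken from the JDK) of how a default method lets an interface grow without touching its existing implementations:

interface Greeter {
    String name();

    // Added later; every existing Greeter implementation inherits it unchanged.
    default String greet() {
        return "Hello, " + name() + "!";
    }
}

class EnglishGreeter implements Greeter {
    // Written before greet() existed; it still compiles and gains greet() for free.
    @Override
    public String name() { return "World"; }
}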
{ "source": [ "https://softwareengineering.stackexchange.com/questions/233053", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/29765/" ] }
233,068
Which is better, and why (from an interface-design point of view)? a) To have two functions, Show() and Hide(). b) To have one SetVisible(bool visible) function. EDIT: For example, some object has a visibility state and these functions are used to change it. c) To have all three functions: Show(), Hide() and SetVisible(bool visible).
I prefer SetVisible(bool visible) , because it lets me write client code like this: SetVisible(DetermineIfItShouldBeVisible()); instead of having to write if (DetermineIfItShouldBeVisible()) { Show(); } else { Hide(); } The SetVisible approach may also allow for easier implementation. For example, if a particular concrete class simply delegates the method to its composite classes, then SetVisible means one less method to implement. void ButtonWithALabel::SetVisible(bool visible) { myButton.SetVisible(visible); myLabel.SetVisible(visible); }
{ "source": [ "https://softwareengineering.stackexchange.com/questions/233068", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/124070/" ] }
233,164
When designing a RESTful interface, the semantics of the request types are deemed vital to the design: GET - list a collection or retrieve an element; PUT - replace a collection or element; POST - create a collection or element; DELETE - well, erm, delete a collection or element. However, this doesn't seem to cover the concept of "search". E.g. in designing a suite of web services that support a Job Search site, you might have the following requirements: get an individual Job Advert (GET to domain/Job/{id}/); create a Job Advert (POST to domain/Job/); update a Job Advert (PUT to domain/Job/); delete a Job Advert (DELETE to domain/Job/). "Get All Jobs" is also simple: GET to domain/Jobs/. However, how does the job "search" fall into this structure? You could claim it's a variation of "list collection" and implement it as a GET to domain/Jobs/. However, searches can be complex and it's entirely possible to produce a search that generates a long GET string. That is, referencing a SO question here, there are issues using GET strings longer than about 2000 characters. An example might be a faceted search - continuing the "job" example, I may allow for searching on facets - "Technology", "Job Title", "Discipline" - as well as free-text keywords, age of job, location and salary. With a fluid user interface and a large number of technologies and job titles, it is feasible that a search could encompass a large number of facet choices. Tweak this example to CVs, rather than jobs, bring in even more facets, and you can very easily imagine a search with a hundred facets selected, or even just 40 facets each of which is 50 characters long (e.g. Job Titles, University Names, Employer Names). In that situation it might be desirable to move to a PUT or POST in order to ensure that the search data will get correctly sent. E.g.: POST to domain/Jobs/. But semantically that's an instruction to create a collection. You could also say you'll express this as the creation of a search: POST to domain/Jobs/Search/ or (as suggested by burninggramma below) POST to domain/JobSearch/. Semantically it may seem to make sense, but you're not actually creating anything; you're making a request for data. So, semantically it's a GET, but GET isn't guaranteed to support what you need. So, the question is: trying to keep as true to RESTful design as possible, whilst ensuring that I'm keeping within the limitations of HTTP, what is the most appropriate design for a search?
You should not forget that GET requests have some superior advantages over other solutions: 1) GET requests can be copied from the URL bar, they are digested by search engines, they are "friendly" - where "friendly" means that normally a GET request should not modify anything inside your application (idempotent). This is the standard case for a search. 2) All of these concepts are very important not just from a user and search-engine perspective, but from an architectural, API-design standpoint. 3) If you create a workaround with POST/PUT you will have problems which you are not thinking of right now - for example, in a browser, the navigate-back button / refresh page / history. These can be solved of course, but that's going to be another workaround, then another and another... Considering all this, my advice would be: a) You should be able to fit inside your GET by using a clever parameter structure. In an extreme case you can even go for tactics like this google search where I set a lot of parameters and still it's a super short URL. b) Create another entity in your application like JobSearch. Assuming you have so many options, it's probable that you will need to store these searches as well and manage them, so it's just cleaning up your application. You can work with the JobSearch objects as a whole entity, meaning you can test it / use it more easily. Personally I would try to fight with all my claws to get it done with a), and when all hope is lost, I would crawl back with tears in my eyes to option b).
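If option b) is the route taken, a rough JAX-RS sketch (the resource paths, class names and fields here are invented for illustration, not a prescribed API) might look like this - the large criteria travel in a POST body, and the results are then fetched with an ordinary, short, bookmarkable GET:

import java.net.URI;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import javax.ws.rs.Consumes;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Response;

// Hypothetical search criteria DTO: facets, keywords, and so on.
class JobSearchCriteria {
    public java.util.List<String> jobTitles;
    public java.util.List<String> technologies;
    public String keywords;
}

@Path("job-searches")
public class JobSearchResource {

    // In-memory store standing in for whatever persistence would really back this.
    private static final Map<String, JobSearchCriteria> SEARCHES = new ConcurrentHashMap<>();

    // POST creates the (possibly very large) criteria as a resource of its own,
    // so the request body carries the facets instead of a 2000+ character URL.
    @POST
    @Consumes("application/json")
    public Response createSearch(JobSearchCriteria criteria) {
        String id = UUID.randomUUID().toString();
        SEARCHES.put(id, criteria);
        return Response.created(URI.create("/job-searches/" + id)).build();
    }

    // Reading the results of a stored search is then a plain GET.
    @GET
    @Path("{id}/results")
    @Produces("application/json")
    public Response results(@PathParam("id") String id) {
        JobSearchCriteria criteria = SEARCHES.get(id);
        if (criteria == null) {
            return Response.status(Response.Status.NOT_FOUND).build();
        }
        return Response.ok(runSearch(criteria)).build();
    }

    private Object runSearch(JobSearchCriteria criteria) {
        return java.util.Collections.emptyList(); // placeholder for the real query
    }
}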
{ "source": [ "https://softwareengineering.stackexchange.com/questions/233164", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/121154/" ] }
233,210
I have a C# class that represents a content type in a web content management system. We have a field that allows a web content editor to enter an HTML template for how the object is displayed. It basically uses the handlebars syntax for substituting object property values into the HTML string: <h1>{{Title}}</h1><p>{{Message}}</p> From a class design perspective, should I expose the formatted HTML string (with substitution) as a property or as a method? Example as property: public class Example { private string _template; public string Title { get; set; } public string Message { get; set; } public string Html { get { return this.ToHtml(); } protected set { } } public Example(Content content) { this.Title = content.GetValue("title") as string; this.Message = content.GetValue("message") as string; _template = content.GetValue("template") as string; } private string ToHtml() { // Perform substitution and return formatted string. } } Example as method: public class Example { private string _template; public string Title { get; set; } public string Message { get; set; } public Example(Content content) { this.Title = content.GetValue("title") as string; this.Message = content.GetValue("message") as string; _template = content.GetValue("template") as string; } public string ToHtml() { // Perform substitution and return formatted string. } } From a design standpoint, I'm not sure: does it make a difference, or are there reasons why one approach is better than the other?
UPDATE: This question was the subject of my blog in May 2014 . Thanks for the great question! To add to Robert Harvey's answer : a property should be: logically a property of the class, the way that say its color or year or model are the properties of a car. not more than, let's say, ten times slower to compute than fetching from a field. something you don't mind being computed while debugging. The VS debugger automatically computes properties. unable to fail. Getters should always return a value no matter what the state of the object is. I don't think your proposed Html property hits any of those. Don't make it a property unless it hits all of them.
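A rough Java translation of that checklist (the class and the template handling are invented for illustration; Java has getters rather than C# properties, but the same reasoning applies): keep cheap, never-failing state behind getters, and expose expensive or failure-prone work as an explicitly named method.

public class Example {
    private final String title;
    private final String template;

    public Example(String title, String template) {
        this.title = title;
        this.template = template;
    }

    // Cheap, side-effect free, cannot fail: fine as a getter (the "property" analogue).
    public String getTitle() {
        return title;
    }

    // Does real work (template substitution) and could plausibly fail:
    // better as a verb-named method than as something that looks like simple state.
    public String toHtml() {
        if (template == null) {
            throw new IllegalStateException("no template configured");
        }
        return template.replace("{{Title}}", title);
    }
}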
{ "source": [ "https://softwareengineering.stackexchange.com/questions/233210", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/80895/" ] }
233,217
I'm working with a Java project that has several interfaces, many of which have only one implementation. (See related question ) For a given revision of the software, one could think this acceptable because the interfaces were some plan for future extensions. However, when I look at the 200+ revisions over several years, those interfaces never had more than one implementation, (nor were they used in unit-testing). It seems that in these cases of probable bloat (YAGNI) owing to the Interface Anti-pattern , it would be useful to apply the inverse of Extract Interface . However, Inline Class (the supposed inverse according to refactoring.com) is not what I'm referring to. What's the name of the refactoring that removes an unused interface, substituting the sole class that implements it?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/233217", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/51948/" ] }
233,222
I am refactoring a huge legacy code class. Refactoring (I presume) advocates this: write tests for the legacy class, then refactor the heck out of the class. Problem: once I refactor the class, my tests in step 1 will need to be changed. For example, what once was in a legacy method may now be a separate class instead. What was one method may now be several methods. The entire landscape of the legacy class may be obliterated into something new, and so the tests I write in step 1 will almost be null and void. In essence I will be adding Step 3: rewrite my tests profusely. What then is the purpose of writing tests before refactoring? It sounds more like an academic exercise of creating more work for myself. I am writing tests for the method now, and I am learning more about how to test things and how the legacy method works. One can learn this by just reading the legacy code itself, but writing tests is almost like rubbing my nose in it, and also documenting this temporary knowledge in separate tests. So this way I almost have no choice but to learn what the code is doing. I said temporary here, because I will refactor the heck out of the code and all my documentation and tests will be null and void for a significant part, except my knowledge will stay and allow me to be fresher on the refactoring. Is that the real reason to write tests before refactoring - to help me understand the code better? There's got to be another reason! Please explain! Note: There is this post: Does it make sense to write tests for legacy code when there is no time for a complete refactoring? but it says "write tests before refactoring" without saying "why", or what to do if "writing tests" seems like "busy work that will be destroyed soon".
Refactoring is cleaning up a piece of code (e.g. improving the style, design, or algorithms), without changing (externally visible) behavior. You write tests not to make sure that the code before and after refactoring is the same, instead you write tests as an indicator that your application before and after refactoring behaves the same: The new code is compatible, and no new bugs were introduced. Your primary concern should be to write unit tests for the public interface of your software. This interface should not change, so the tests (that are an automated check for this interface) shouldn't change either. However, tests are also useful to locate errors, so it can make sense to write tests for private parts of your software as well. These tests are expected to change throughout refactoring. If you want to change an implementation detail (like the naming of a private function), you first update the tests to reflect your changed expectations, then make sure that the test fails (your expectations are not fulfilled), then you change the actual code and check that all tests pass again. At no point should the tests for the public interface start to fail. This is more difficult when performing changes on a larger scale, e.g. redesigning multiple codependent parts. But there will be some kind of boundary, and at that boundary you'll be able to write tests.
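As a small illustration (the class and the tests are hypothetical, just to show the idea): the tests pin down the public behavior only, so they keep passing whether that behavior lives in one legacy method or is later split across several new classes.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Stand-in for the legacy class; only its public behavior is exercised.
class PriceCalculator {
    double finalPrice(double orderTotal) {
        return orderTotal >= 100.0 ? orderTotal * 0.9 : orderTotal;
    }
}

public class PriceCalculatorTest {

    @Test
    public void appliesTenPercentDiscountFromOneHundredUp() {
        assertEquals(90.0, new PriceCalculator().finalPrice(100.0), 0.001);
    }

    @Test
    public void leavesSmallerOrdersUntouched() {
        assertEquals(50.0, new PriceCalculator().finalPrice(50.0), 0.001);
    }
}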
{ "source": [ "https://softwareengineering.stackexchange.com/questions/233222", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/119333/" ] }
233,482
I've spotted a nice WordPress (GPL) theme for sale. I know somebody who bought it. I have 2 questions: Does the company selling it have an obligation to send the source code to whoever (customer or not) asks for it? Can the person who bought it give me a copy for free which I could use in production?
The company selling it has no obligation to distribute source to anyone except people to whom they have given binaries. So no, they don't have to give you anything. Someone who has purchased GPL software does have the right to request source and subsequently redistribute that source to anyone under the terms of the GPL. If you can find a customer willing to give you a copy, that will work.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/233482", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/124438/" ] }
233,778
In Spanish-speaking countries we use more than one last name. For example, in "Pedro Arturo Rodríguez Loyola", Pedro is the first name, Arturo the middle name, Rodríguez the last name, and Loyola the second last name (?). I'm trying to model data for patient names. In our country it's important, so I can't neglect the second last name, but I would like to build an application that makes sense to other developers, especially those from English-speaking countries. I created a similar question on English.SE; it has some insights about why it's important to persist both values.
Q: How does a DBA count? A: 0, 1, many. An individual has 1 or more given names and 1 or more family names, and possibly a title. These names have an order to them. It is up to localization and culture to determine how to refer to an individual. The table has the columns ContactId, NamePart {"John", "Smith", ...}, NameType {title, given, family, ???} and Order {1, 2, 3, ...}. For Pedro Arturo Rodríguez Loyola (contact #1), you would have four rows: 1 / Pedro / given / 1; 1 / Arturo / given / 2; 1 / Rodríguez / family / 3; 1 / Loyola / family / 4. This way it is not limited to any given structure, yet still makes sense for a given contact. What do you do when you have someone with 3 or 4 given or family names, or a maiden name? Note that I've changed the order from a previous revision of this answer - the order is over the entire name rather than just within the name type, because in some cultures the family name comes first, and you may have split title parts ("Sir John Smith II"). Additional reading: Falsehoods programmers believe about names; Two Last Names.
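If the same idea were carried into code, a sketch along these lines could work (a hypothetical Java rendering of the rows above; the types and names are invented for illustration, and formatting is deliberately left to localization):

import java.util.Comparator;
import java.util.List;

enum NamePartType { TITLE, GIVEN, FAMILY }

// One row per name part, ordered across the whole name.
record NamePart(NamePartType type, String value, int order) { }

record Contact(long contactId, List<NamePart> nameParts) {

    // Only one naive rendering; real display rules belong to the locale/culture layer.
    String fullName() {
        return nameParts.stream()
                .sorted(Comparator.comparingInt(NamePart::order))
                .map(NamePart::value)
                .reduce((a, b) -> a + " " + b)
                .orElse("");
    }
}

// Contact #1 from the answer would then be built as:
// new Contact(1, List.of(
//     new NamePart(NamePartType.GIVEN, "Pedro", 1),
//     new NamePart(NamePartType.GIVEN, "Arturo", 2),
//     new NamePart(NamePartType.FAMILY, "Rodríguez", 3),
//     new NamePart(NamePartType.FAMILY, "Loyola", 4)));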
{ "source": [ "https://softwareengineering.stackexchange.com/questions/233778", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/66929/" ] }
233,788
A while ago, I asked a question on SO about something written in C++, but instead of getting an answer to the problem at hand, the comments went all crazy on my coding style, even when I indicated that it was a WIP piece of code and that I meant to clean it up later when I had the base case running. (I got so many down votes that I decided to pull the question, as my rep on SO already is near-abysmal.) It made me wonder why people adopt such a hard line "you're a noob, go fuck yourself" attitude. I was being accused of writing C++ as if it were Java, something that I cannot understand and that still baffles me. I've been programming in quite a few OOP languages for a number of years now, albeit in intervals. I choose the language to use in terms of its available libraries and optimal execution environments for the job at hand. I adopt design patterns in OOP code and I'm quite confident that my use of patterns is sound and that, OO-wise, I can hold my own. I understand the OOP toolbox, but choose to use the tools only when I think it's really required, not to just use a neat trick to show my coding wits. (Which I know are not top notch, but I think aren't at n00b level either.) I design my code before I write a single line. To define tests, I list the goals of a certain class, and the test criteria that it has to adhere to. Because it is easier for me to create sequence diagrams and then write code, I chose to write my tests after the interface has become obvious. I must admit that in the piece of code I posted in the question, I was still using pointers, instead of using smart pointers. I use RAII whenever I can. I know proper RAII means safeguarding against null pointers, but I work incrementally. It was a work in progress and I meant to clean it up later. This way of working was condemned strongly. In my view, I should have a working example first so I can see if the base case is a viable way of thought. I also happen to think that cleaning up the code is something that is typical of the refactoring phase of agile, after the base case has been proven. I must admit that although I'm slowly getting the Cxx standard, I prefer to use what I understand, instead of taking the risk of using concepts that I have yet to master in production code. I do try new stuff once in a while, but usually in play projects that I have on the side, just for that purpose. [edit] I'd like to clarify that gnat's suggestion [1] did not show up in the search I did before I started to ask my question. However, although his suggestion does cover one aspect of the question, the question he linked to does not answer the heart of my question, just part of it. My question is more about the response I got to my coding style and the professional aspects of handling different coding styles and (apparent) levels of skill, with my previous question on SO and its response as a case in point. [/edit] The question then is: why scoff at someone who doesn't use your coding style? The matters/subdivisions at hand for me are: Why would it be a bad programming practice to use more error-prone code in prototype situations, if refactoring makes it more robust afterward? How can a program written in C++ be like it was written in Java, and what makes it a bad program (considering that I indicated the intent of the current style and the planned work to improve)? How would I be a bad professional if I chose to use a construct that is used in a certain programming paradigm (e.g. OOP/DP)?
[1] Develop fast and buggy, then correct errors or be slow, careful for each line of code?
Without seeing the code in question, there are a few ways to write Java code in C++, some worse than others. At the one extreme, there's laying out your source like Java: everything in one file, everything within the class definition, etc.: class HelloWorldApp { public: void main() { cout << "Hello World!" << endl; } }; This is how Java source would be laid out. It's technically legal in C++, but putting everything in the header file and everything inline (by defining it in the class declaration) is terrible style and will kill your compile performance. Don't do it. Excessively OO - To oversimplify, in Java, it's the Kingdom of the Nouns , where everything is an object. Good (i.e., idiomatic) C++ code is more likely to use free functions, templates, etc., instead of trying to cram everything into an object. No RAII - You already mentioned this - using pointers and manual cleanup instead of smart pointers. C++ gives you tools like RAII and smart pointers, so good (i.e., idiomatic) C++ code uses those tools. No advanced C++ - The basics of Java and C++ are similar enough, but once you get into more advanced features (templates, C++'s algorithms library, etc.), they start to diverge. Except for #1, none of these make a C++ program a bad program, but it's also not the kind of code I prefer to work on as a C++ programmer. (I also wouldn't enjoy working with non-idiomatic or C-style Perl, non-idiomatic Python, etc.) A language has its own tools and idioms and philosophy, and good code uses those tools and idioms instead of trying to use the lowest common denominator or trying to reproduce another language's approach. Writing non-idiomatic code in a particular language / problem domain / whatever doesn't make someone a bad programmer, it just means that they have more to learn about that language / problem domain / whatever. And there's nothing wrong with that; there's a very long list of things I have more to learn about, and C++ in particular has an absolute ton of stuff to learn. Regarding the particular question of writing error-prone code with the intent to clean it up later, it's not black and white: If some prototype code fails to handle every possible exception and every possible corner case, then that's to be expected. Get it working, then get it working robustly. No problem. If some prototype code is written in what's simply a bad style or a bad design (bad for the given language and its idioms, a fundamentally bad design for the problem, etc.), then unless you're writing it as a throw-away proof-of-concept, you're not gaining anything. To use raw pointers versus smart pointers as an example, if you're going to work in C++, using RAII and smart pointers are fundamental enough that it should be faster to write code that way than to go back and clean it up later. Again, failing to do this doesn't mean someone's a bad programmer, unprofessional, etc., but it does mean there's more to learn.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/233788", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/26120/" ] }
233,961
It's very common to see Javascript bound to certain selectors to find elements, store data, and listen for events. It's also common to see these same selectors used for styling. jQuery (and its selector engine Sizzle) support and promote this by referencing elements with CSS-type syntax. As such, this technique is particularly difficult to 'unlearn' (or refactor) when building out projects. I've come to understand that this is a result of the history of HTML and Javascript development, and that browsers have been built to efficiently consume / parse / and render this sort of coupling. But as websites become increasingly complex, this reality can introduce difficulties in organizing and maintaining these separate layers. My question is: can and should this be avoided in modern websites? If I'm new to front-end development, and I wish to learn things 'the right way,' is it worth learning to decouple and avoid such dependencies from the start? Does this mean avoiding jQuery in favor of a library that promotes a more decoupled structure?
There is no way to avoid that. They are coupled because they interact with each other. If your javascript intends on doing any kind of DOM manipulation, then it needs a way to reference the DOM. There are various conventions for it. The Level 2 DOM API provides the getElementById, getElementsByTagName, and getElementsByName methods. To this day these are the workhorses of any kind of DOM traversal. All other fancier jQuery selectors resolve into a combination of these eventually, and/or straight up traversal of each DOM node (this was the way to do getElementsByClassName). There is no other shortcut. Javascript needs to know what to do and where. Typically, if an element has an id or class that is only relevant in scripting, a lot of people will prefix it with js- or some other obvious flag. Another newer convention is the data-attribute selection. <ul data-myapp-sortable="true"> jQuery('[data-myapp-sortable]').makeSortable(); The data-attribute is generally used for scripting purposes and selecting using that makes some sense. The drawback is that this is slower than using getElementById(). Another approach is the one used by angularJS, which creates a view-model. In this convention any kind of scripting functionality is specified by specially designated attributes like ng-disabled, ng-href and many more. You don't add selectors in your javascript. The HTML document becomes the main authority on what is scripted and how, and the javascript works on it abstractly. It's a good approach, but obviously with a higher learning curve than the previous methods. And again, performance has to be considered. But don't ever think that you can write interactive HTML and javascript, and somehow have both those parts not know about the other. It's more about how you can limit the references and dependencies.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/233961", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/124886/" ] }
234,026
I am considering creating a fork of a small project licensed under GPLv2, and I have some very specific questions I did not manage to answer in my research on various sites and forums. When I fork the code, I am forced to release the new project under the same license (GPL), which I will do, but could I also decide to release it under a dual license, one of them commercial? When forking the project, do I automatically own the copyright of the whole thing? This is relevant when, for example, deciding on a future change of license, or to be able to enforce the GPL license against a 3rd party.
The short answer: When you fork an existing project, you generally do not have permission to change the license nor do you get copyright on the code you copied over. You do have the copyright on any (nontrivial) modifications or additions that you make. The long answer: The only ways to get copyright on a piece of code is by writing it yourself or by contractually getting the copyright assigned to you. This means that forking an existing project doesn't change the copyrights on the code of either the original project or the fork. The only people who can change a copyright license are the holders of that copyright. If there are multiple copyright holders to the code of a project, then all copyright holders must agree to a change in the copyright license. This means that you don't have permission to change the license of your fork (not even to dual license it), unless the existing copyright license explicitly gives you the right to sublicense the code.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/234026", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/124953/" ] }
234,251
I've been working in web development since 2009, when I started with PHP. Since I moved to ASP.NET, I've heard a lot about DDD and OOAD, where a lot of focus is given to this "business logic" and "business rules". The point is that all the apps I've developed until now were all about CRUD operations, and I've never seen these things in practice. I simply can't imagine what those things can really be in practice. So, what really is this business logic, and how does it fit into an app? I know these are implemented as methods in domain models, but what could those methods possibly be, and where in the application could they possibly be used?
CRUD is an acronym that stands for Create, Read, Update and Delete. Those are the four basic operations that you can perform on a database tuple. But there's always more to business applications than creating, reading, updating and deleting database records. Let's start with some basic definitions, and then look at a couple of examples and see how those definitions map to the examples, and how they map to actual software. Business logic or domain logic is that part of the program which encodes the real-world business rules that determine how data can be created, stored, and changed. It prescribes how business objects interact with one another, and enforces the routes and the methods by which business objects are accessed and updated. Business Rules describe the operations, definitions and constraints that apply to an organization. The operations collectively form a process; every business uses these processes to form systems that get things done. Now, let's work with some examples. Transferring money from one checking account to another: first, what are the things that you need to know (input)? The identity of the person making the transfer; the amount of money to be transferred; the source checking account number; the target checking account number. What are some of the "business rules" that must be applied? The person making the request must have the authority to do so; the transaction must be atomic; the transaction may have reporting requirements to the government if it is over a certain amount. By "atomic," I mean that the transaction must completely succeed or it must completely fail. You can't have account transactions where money is taken out of one account without arriving in the other (money disappears), or money is deposited into an account but not debited from another account (money magically appears from nowhere). Ordering something from Amazon: what do you need to know? The identity of the person ordering; shipping information; billing information; method of payment; amount and quantity of each item to ship; how to ship (overnight, slow boat or super saver); state tax rate. What happens after the order is placed? Items are pulled from stock; on-hand quantities are debited; items are packaged for shipment; out-of-stock items are backordered; items that drop below minimum quantities are ordered; one shipment or two?; an invoice/shipping list is printed and placed with the order; ...etc.
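To connect this to code, here is a hypothetical sketch (the class names, rules and threshold are invented for illustration) of where such rules live: the CRUD operations sit underneath, while the business logic enforces authorization, atomicity and reporting on top of them.

import java.math.BigDecimal;

class TransferService {

    private static final BigDecimal REPORTING_THRESHOLD = new BigDecimal("10000");

    private final AccountRepository accounts;   // plain CRUD underneath
    private final ComplianceReporter reporter;  // hypothetical collaborator

    TransferService(AccountRepository accounts, ComplianceReporter reporter) {
        this.accounts = accounts;
        this.reporter = reporter;
    }

    void transfer(String requester, String fromAccount, String toAccount, BigDecimal amount) {
        // Business rule: the requester must be allowed to move money from the source account.
        if (!accounts.isOwnedBy(fromAccount, requester)) {
            throw new IllegalStateException("Not authorized to transfer from " + fromAccount);
        }
        // Business rule: the transfer is atomic - debit and credit succeed or fail together.
        accounts.inTransaction(() -> {
            accounts.debit(fromAccount, amount);
            accounts.credit(toAccount, amount);
        });
        // Business rule: large transfers must be reported.
        if (amount.compareTo(REPORTING_THRESHOLD) > 0) {
            reporter.reportLargeTransfer(fromAccount, toAccount, amount);
        }
    }
}

// Minimal collaborator interfaces so the sketch stands on its own.
interface AccountRepository {
    boolean isOwnedBy(String account, String requester);
    void debit(String account, BigDecimal amount);
    void credit(String account, BigDecimal amount);
    void inTransaction(Runnable work);
}

interface ComplianceReporter {
    void reportLargeTransfer(String from, String to, BigDecimal amount);
}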
{ "source": [ "https://softwareengineering.stackexchange.com/questions/234251", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/82383/" ] }
234,253
What makes CPU cache memory so much faster than main memory? I can see some benefit in a tiered cache system. It makes sense that a smaller cache is faster to search. But there must be more to it.
In the case of a CPU cache, it is faster because it's on the same die as the processor. In other words, the requested data doesn't have to be bussed over to the processor; it's already there. In the case of the cache on a hard drive, it's faster because it's in solid state memory, and not still on the rotating platters. In the case of the cache on a web site, it's faster because the data has already been retrieved from the database (which, in some cases, could be located anywhere in the world). So it's about locality , mostly. Cache eliminates the data transfer step. Locality is a fancy way of saying data that is "close together," either in time or space. Caching with a smaller, faster (but generally more expensive) memory works because typically a relatively small amount of the overall data is the data that is being accessed the most often. Further Reading Cache (Computing) on Wikipedia
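Locality is also something you can observe from ordinary code. A small, hypothetical experiment (absolute timings will vary by machine; only the ratio matters): both loops below touch exactly the same data, but the row-major loop walks memory sequentially and so benefits far more from the cache than the column-major one.

public class LocalityDemo {
    public static void main(String[] args) {
        int n = 4096;
        int[][] matrix = new int[n][n];

        long start = System.nanoTime();
        long sum = 0;
        for (int row = 0; row < n; row++)       // row-major: cache-friendly
            for (int col = 0; col < n; col++)
                sum += matrix[row][col];
        System.out.println("row-major:    " + (System.nanoTime() - start) / 1_000_000 + " ms");

        start = System.nanoTime();
        for (int col = 0; col < n; col++)       // column-major: many more cache misses
            for (int row = 0; row < n; row++)
                sum += matrix[row][col];
        System.out.println("column-major: " + (System.nanoTime() - start) / 1_000_000 + " ms");

        System.out.println(sum); // keep the JIT from discarding the loops
    }
}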
{ "source": [ "https://softwareengineering.stackexchange.com/questions/234253", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/66745/" ] }
234,412
I just wanted to clear up a question I have. What is the point of having a private static method as opposed to a normal method with private visibility? I would have thought an advantage of having a static method is that it can be called without an instance of a class, but since it's private, is there even a point to it being static? The only reason I can think of is that it helps to conceptually understand the method at the class level as opposed to the object level.
The characteristic of being static is independent of visibility. The reasons you would want a static method (some code that does not depend on non-static members) still apply. But maybe you don't want anyone/anything else to use it, just your class.
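A minimal sketch of the kind of method this describes (the class is invented for illustration): the helper depends on no instance state, so it is static, and it is an internal detail of the class, so it stays private.

public class OrderNumber {
    private final String value;

    public OrderNumber(String raw) {
        this.value = normalize(raw);
    }

    public String value() {
        return value;
    }

    // Static because it uses only its parameter; private because nothing outside
    // this class should rely on this particular normalization rule.
    private static String normalize(String raw) {
        return raw.trim().toUpperCase();
    }
}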
{ "source": [ "https://softwareengineering.stackexchange.com/questions/234412", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/124723/" ] }
234,527
The basic idea behind OOP is that data and behavior (upon that data) are inseparable and they are coupled by the idea of an object of a class. Object have data and methods that work with that (and other data). Obviously by the principles of OOP, objects that are just data (like C structs) are considered an anti-pattern. So far so good. The problem is I have noticed that my code seems to be going more and more in the direction of this anti-pattern lately. Seems to me that the more I try to achieve information hiding between classes and loosely coupled designs, the more my classes get to be a mix of pure data no behavior classes and all behavior no data classes. I generally design classes in a way which minimizes their awareness of other classes' existence and minimizes their knowledge of other classes' interfaces. I especially enforce this in a top-down fashion, lower level classes don't know about higher level classes. E.g.: Suppose you have a general card game API. You have a class Card . Now this Card class needs to determine visibility to players. One way is to have boolean isVisible(Player p) on Card class. Another is to have boolean isVisible(Card c) on Player class. I dislike the first approach in particular as it grants knowledge about higher level Player class to a lower level Card class. Instead I opted for the third option where we have a Viewport class which, given a Player and a list of cards determines which cards are visible. However this approach robs both Card and Player classes of a possible member function. Once you do this for other stuff than visibility of cards, you are left with Card and Player classes which contain purely data as all functionality is implemented in other classes, which are mostly classes with no data, just methods, like the Viewport above. This is clearly against the principal idea of OOP. Which is the correct way? How should I go about the task of minimizing class interdependencies and minimizing assumed knowledge and coupling, but without winding up with weird design where all the low level classes contain data only and high level classes contain all the methods? Does anyone have any third solution or perspective on class design which avoids the whole problem? P.S. Here's another example: Suppose you have class DocumentId which is immutable, only has a single BigDecimal id member and a getter for this member. Now you need to have a method somewhere, which given a DocumentId returns Document for this id from a database. Do you: Add Document getDocument(SqlSession) method to DocumentId class, suddenly introducing knowledge about your persistence ( "we're using a database and this query is used to retrieve document by id" ), the API used to access DB and the like. Also this class now requires persistence JAR file just to compile. Add a some other class with method Document getDocument(DocumentId id) , leaving DocumentId class as dead, no behavior, struct-like class.
What you describe is known as an anemic domain model. As with many OOP design principles (like the Law of Demeter etc.), it's not worth bending over backwards just to satisfy a rule. Nothing wrong with having bags of values, as long as they don't clutter the entire landscape and don't rely on other objects to do the housekeeping they could be doing for themselves. It would certainly be a code smell if you had a separate class just for modifying properties of Card - if it could be reasonably expected to take care of them on its own. But is it really the job of a Card to know which Player it is visible to? And why implement Card.isVisibleTo(Player p), but not Player.isVisibleTo(Card c)? Or vice versa? Yes, you can try to come up with some sort of a rule for that as you did - like Player being higher-level than a Card (?) - but it's not that straightforward to guess and I'll have to look in more than one place to find the method. Over time it can lead to a rotten design compromise of implementing isVisibleTo on both the Card and Player classes, which I believe is a no-no. Why so? Because I already imagine the shameful day when player1.isVisibleTo(card1) will return a different value than card1.isVisibleTo(player1). I think - it's subjective - this should be made impossible by design. Mutual visibility of cards and players should better be governed by some sort of a context object - be it Viewport, Deal or Game. It's not equal to having global functions. After all, there may be many concurrent games. Note that the same card can be used simultaneously on many tables. Shall we create many Card instances for each ace of spades? I might still implement isVisibleTo on Card, but pass a context object to it and make Card delegate the query. Program to an interface to avoid high coupling. As for your second example - if the document ID consists only of a BigDecimal, why create a wrapper class for it at all? I'd say all you need is a DocumentRepository.getDocument(BigDecimal documentID); By the way, while absent from Java, there are structs in C#. See http://msdn.microsoft.com/en-us/library/ah19swz4.aspx and http://msdn.microsoft.com/en-us/library/0taef578.aspx for reference. It's a highly object-oriented language, but no one makes a big deal out of it.
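A rough sketch of that context-object idea (the names and the visibility rule are invented for illustration): neither Card nor Player owns the rule; the Game does, and Card only delegates, so a convenient card.isVisibleTo(player, game) can never disagree with the game's own answer.

import java.util.Map;
import java.util.Set;

interface Game {
    boolean isVisible(Card card, Player player);
}

class Card {
    private final String name;

    Card(String name) { this.name = name; }

    // Convenience only; the actual rule lives in the context object.
    boolean isVisibleTo(Player player, Game game) {
        return game.isVisible(this, player);
    }
}

class Player {
    private final String name;

    Player(String name) { this.name = name; }
}

// One possible context: a card is visible if it is face up or if the player owns it.
class TableGame implements Game {
    private final Set<Card> faceUp;
    private final Map<Card, Player> owners;

    TableGame(Set<Card> faceUp, Map<Card, Player> owners) {
        this.faceUp = faceUp;
        this.owners = owners;
    }

    @Override
    public boolean isVisible(Card card, Player player) {
        return faceUp.contains(card) || player.equals(owners.get(card));
    }
}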
{ "source": [ "https://softwareengineering.stackexchange.com/questions/234527", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/33026/" ] }
234,645
I've had a few programming jobs in the past where I was the only developer working on a project. After I've left, I typically get several emails a week from these companies, usually from the developer(s) who's replaced me there. These emails are usually asking for details about how things work and how I'd best go about implementing feature x based on the existing system. I'm usually polite and helpful, but this kind of communication really starts to eat into my time, making every job I work on another weight around my ankle. Not to mention they're projects that I chose to leave behind me for a good reason. My question is: would it be professionally 'ok' to tell them I'm just not going to offer support any more and refuse to answer inquiries? NB. None of these companies are paying me any type of retainer, and the inquiries are often informal questions from developers and not management.
You are in no way obligated to help them. Whether or not your former employer has realized it, they have taken a low-cost/high-risk decision by having only one developer work on the software. That was their (perhaps uninformed) decision, and now they are paying the price - you should not. If you feel like helping them, you should make a support agreement where they pay you for the time spent helping the new developer, in order for you to be compensated properly for your time.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/234645", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/59642/" ] }
234,657
Yesterday I was discussing with a "hobby" programmer (I myself am a professional programmer). We came across some of his work, and he said he always queries all columns in his database (even on/in production server/code). I tried to convince him not to do so, but wasn't so successful yet. In my opinion a programmer should only query what is actually needed for the sake of "prettiness", efficiency and traffic. Am I mistaken with my view?
Schema Changes: Fetch by order --- If the code is fetching column # as the way to get the data, a change in the schema will cause the column numbers to readjust. This will mess up the application, and bad things will happen. Fetch by name --- If the code is fetching the column by name, such as foo, and another table in the query adds a column foo, the way this is handled may cause problems when trying to get the right foo column. Either way, a schema change can cause problems with the extraction of the data. Further, consider if a column that was being used is removed from the table. The select * from ... still works, but errors out when trying to pull the data out of the result set. If the column is specified in the query, the query will error out instead, giving a clear indication as to what and where the problem is. Data overhead: Some columns can have a significant amount of data associated with them. Selecting back * will pull all the data. Yep, here's that varchar(4096) that's on the 1000 rows you've selected back, giving you an additional possible 4 megabytes of data that you're not needing, but is sent across the wire anyways. Related to the schema change, that varchar might not have existed when you first created the table, but now it's there. Failure to convey intent: When you select back * and get 20 columns but only need 2 of them, you are not conveying the intent of the code. When looking at the query that does a select *, one doesn't know what the important parts of it are. Can I change the query to use this other plan instead to make it faster by not including these columns? I don't know, because the intent of what the query returns isn't clear. Let's look at some SQL fiddles that explore those schema changes a bit more. First, the initial database: http://sqlfiddle.com/#!2/a67dd/1 DDL: create table one (oneid int, data int, twoid int); create table two (twoid int, other int); insert into one values (1, 42, 2); insert into two values (2, 43); SQL: select * from one join two on (one.twoid = two.twoid); And the columns you get back are oneid=1, data=42, twoid=2, and other=43. Now, what happens if I add a column to table one? http://sqlfiddle.com/#!2/cd0b0/1 alter table one add column other text; update one set other = 'foo'; And my results from the same query as before are oneid=1, data=42, twoid=2, and other=foo. A change in one of the tables disrupts the values of a select *, and suddenly your binding of 'other' to an int is going to throw an error and you don't know why. If instead your SQL statement was select one.oneid, one.data, two.twoid, two.other from one join two on (one.twoid = two.twoid); the change to table one would not have disrupted your data. That query runs the same before the change and after the change. Indexing: When you do a select * from, you are pulling all the rows from all the tables that match the conditions. Even tables you really don't care about. While this means more data is transferred, there's another performance issue lurking further down the stack: indexes. (Related on SO: How to use index in select statement?) If you are pulling back lots of columns, the database plan optimizer may disregard using an index, because you are still going to need to fetch all those columns anyways and it would take more time to use the index and then fetch all of the columns in the query than it would to just do a complete table scan. If you are just selecting the, say, last name of a user (which you do a lot and so have an index on it), the database can do an index-only scan (postgres wiki index only scan, mysql full table scan vs full index scan, Index-Only Scan: Avoiding Table Access). There is quite a bit of optimization around reading only from indexes if possible. The information can be pulled in faster on each index page because you're pulling less of it also - you're not pulling in all those other columns for the select *. It is possible for an index-only scan to return results on the order of 100x faster (source: Select * is bad). This isn't saying that a full index scan is great, it's still a full scan - but it's better than a full table scan. Once you start chasing down all the ways that select * hurts performance, you keep finding new ones. Related reading: Confusion about proper use of * wildcard in SQL; (Stack Overflow) select * vs select column; (Stack Overflow) Why is SELECT * considered harmful?
If you are just selecting the, say, last name of a user (which you do a lot and so have an index on it), the database can do an index only scan ( postgres wiki index only scan , mysql full table scan vs full index scan , Index-Only Scan: Avoiding Table Access ). There is quite a bit of optimizations about reading only from indexes if possible. The information can be pulled in faster on each index page because you're pulling less of it also - you're not pulling in all those other columns for the select * . It is possible for an index only scan to return results on the order of 100x faster (source: Select * is bad ). This isn't saying that a full index scan is great, its still a full scan - but its better than a full table scan. Once you start chasing down all the ways that that select * hurts performance you keep finding new ones. Related reading Confusion about proper use of * wildcard in SQL (Stack Overflow): select * vs select column (Stack Overflow): Why is SELECT * considered harmful?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/234657", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/125682/" ] }
234,774
For the entire past year I've been written Scala code (coming from a Java background). I really liked how you could create simpler and cleaner code, with vals, case classes, map/filter/lambda functions, implicits and the type inference. I've used it mostly for an Akka -based application. This year I'm on a Scala project with a new team, who really like functional programming. They heavily use Scalaz , and the code is filled everywhere with applicatives, context bounds, reader/writer/state monad, even the main method is "wrapped" in an I/O monad. Their reasoning is that this makes the compiler "work for us" in asserting that the code is correct, and each function is free from side effects. Even so, from my point of view all this syntax really gets in the way of the business logic. For instance, a type of "MyBusinessObject" is fine, as well are types like "List[MyBusinessObject]", "Option[MyBusinessObject]" or even "Future[MyBusinessObject]". They all have a clear meaning and purpose. On the other hand, code like: def method[M[_]: Applicative] = { case (a, b) => (ca[M](a) |@| cb[M](b)) { case t @ (ra, rb) => if (ra.result && rb.result) t.right else t.left } } does it add complexity to the program, or is it just me that I'm not used to this way of programming?
This has nothing to do with functional programming - you can find this kind of situation in context of any other programming language - developers who love the advanced constructs of "their" language so much that they ignore any common sense about readability and keeping things simple. I have encountered such a situation in C, C++, Perl, Java, C#, Basic, and other non-functional languages. It's not functional programming that adds complexity to code - programmers do. Don't get me wrong, I don't recommend avoiding advanced language features - but it's important to find the right balance in the given context. When writing a generic library for the usage of >100,000 developers all over the world, there are different measures to apply as when you are writing an individual report generator just for your local office.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/234774", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/4701/" ] }
234,787
Let's say you work for a company and what you do is develop software for them. You have no idea of the big picture or maybe slight. What you do have are tasks assigned to you via issue tracking system. You're given tasks, you make them work the way the task describes them, you send them back. Like adding 2 integers: function add(a,b){return a + b;} But later, as the project goes forward, you notice that as add becomes more complex, you realize that it should have needed some form of architecture, more than just a function that adds parameters and returns a value. However, you didn't know that. In the first place, all they needed was that simple add . You didn't expect add to become so complex. The project progresses with more features, which you didn't expect to have in the first place. And in the end, you keep piling up hacks and layer upon layer of functions to avoid breaking/rewriting the existing code. How do you deal with these situations? How do you fight technical debt as "the lowest developer"? Clarification: You're the "implementer", the lowest in the hierarchy. You see the problem, but have no say with the matter. I'm not quantifying technical debt or looking for tools. Regarding the third "duplicate" Refactoring & Rewrite - You are locked to your tasks. You are not paid to do extra. Architecture Overview - You know the overall system, but no idea of the architecture. Code Freeze - Not your call. You're not management. Modularization - No idea of architecture. Modules change as requirements change. Automated Tests - None exists.
Every time you notice something like that, enter a new ticket into your issue tracking system. Make a habit to use issue tracker as a primary tool to communicate stuff like that, because from there, it will be easy to pick, evaluate and prioritize for your senior colleagues / lead / manager / whoever is responsible for tracking the issues in your project. Use the right tool for the job. I do it always and strongly recommend you do the same. As an example, here is a ticket I created about a month ago. Upon completion of particular feature I discovered that code became substantially more complicated than it was before but I can't fix that within deadline given for feature implementation. (Names of the features, tickets and code used in the real tracker is obscured, but the text is copied as is). Summary: simplify design involving ParticularPieceOfCode Description: In the course of implementation per TICKET-12345, code involving usage of ParticularPieceOfCode accrued a bit of complication and became rather difficult to read, understand and maintain (see example code snippet below). Find a way to simplify it. An example of code which would be desirable to redesign can be found in ClassName#methodName : <a piece of code like one behind the right door here:> FWIW my advice applies independently of what "level" you are. I've been using it at your current ("lowest") level and I am using it now that my level is quite far from "lowest" and I have satisfactory "say" as you call it, and I am going to use it always no matter what. Just think of it, no level, no matter how much authority you have, there just can't be no better way. If you "say" hey we've got an issue , it's only air rattling. And even if your boss / lead agrees and says you're right, we've got an issue , this changes nothing - it's only air rattling yet again, and it can't be anything else. You may think that having your say written (eg in email) would be better, but if you think of it, it really isn't. If your project has substantial mail activity, what was written will be lost and long forgotten a month later. Use the right tool for the job. For the job you describe, issue tracker is exactly the right tool. You notice the issue, you enter it into system designed for tracking these and it takes care of the rest, in the best possible way - simply because it was designed for that : computer software package that manages and maintains lists of issues , as needed by an organization... commonly used... to create, update, and resolve reported customer issues, or even issues reported by that organization's other employees... An issue tracking system is similar to a " bugtracker ", and often, a software company will sell both, and some bugtrackers are capable of being used as an issue tracking system, and vice versa. Consistent use of an issue or bug tracking system is considered one of the "hallmarks of a good software team" 1 ... Whatever other means you would want to choose to communicate, having a ticket in tracker will only make it easier for you. Even if you prefer to rattle the air , saying "I'd want to discuss TICKET-54321..." makes a more solid starting point than "Listen I'd want to talk about some piece of code I dealt with a while ago..." And you can safely pass the references to ticket by mail: even if mail gets lost, the issue will still be there in the tracker, with all the details you wanted to tell about.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/234787", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/56020/" ] }
234,860
I recently had a discussion with a friend of mine about OOP in video game development. I was explaining the architecture of one of my games which, to my friend's surprise, contained many small classes and several abstraction layers. I argued that this was the result of me focusing on giving everything a Single Responsibility and also to loosen the coupling between components. His concern was that the large number of classes would translate to a maintenance nightmare. My view was that it would have the exact opposite effect. We proceeded to discuss this for what seemed like centuries, eventually agreeing to disagree, saying that perhaps there were cases where SOLID principles and proper OOP didn't actually mix well. Even the Wikipedia entry on SOLID principles states that they're guidelines that help to writing maintainable code and that they are part of an overall strategy of agile and adaptive programming. So, my question is: Are there cases in OOP where some or all of the SOLID principles do not lend themselves to clean code? I can imagine right away that the Liskov Substitution Principle could possibly conflict with another flavour of safe inheritance. That is to say, if someone devised another useful pattern implemented through inheritance, it is quite possible the LSP might be in direct conflict with it. Are there others? Perhaps certain types of projects or certain target platforms work better with a less SOLID approach? Edit: I'd just like to specify that I'm not asking how to improve my code ;) The only reason I mentioned a project in this question was to give a little context. My question is about OOP and design principles in general . If you're curious about my project, see this . Edit 2: I imagined this question would be answered in one of 3 ways: Yes, there exist OOP design principles which partially conflict with SOLID Yes, there exist OOP design principles which completely conflict with SOLID No, SOLID is the bee's knees and OOP will forever be better with it. But, as with everything, it's not a panacea. Drink responsibly. Options 1 and 2 would have likely generated long and interesting answers. Option 3, on the other hand, would be a short, uninteresting, but overall reassuring, answer. We seem to be converging onto option 3.
Are there cases in OOP where some or all of the SOLID principles do not lend themselves to clean code? In general, no. History has shown that the SOLID principles all largely contribute to increased decoupling, which in turn has been shown to increase flexibility in code and thus your ability to be accommodating of change as well as making the code easier to reason about, test, reuse... in short, make your code cleaner. Now, there can be cases where the SOLID principles collide with DRY (don't repeat yourself), KISS (keep it simple stupid) or other principles of good OO design. And of course, they can collide with the reality of requirements, the limitations of humans, the limitations of our programming languages, other obstacles. In short, SOLID principles will always lend themselves to clean code, but in some scenarios they'll lend themselves less than conflicting alternatives. They're always good, but sometimes other things are more good.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/234860", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/81495/" ] }
235,082
To keep our house in order, I want to automatically assemble licenses for project dependencies in our documentation, rather than having to add them manually. Does anybody know a simple way to traverse programmatically a set of CSPROJ files and extract the license information for the referenced packages as a link or string?
One way I know to get such information is by using PowerShell in the Package Manager Console , from within Visual Studio . The Package Manager Console is a PowerShell console within Visual Studio used to interact with NuGet and automate Visual Studio. Basically you can use the Get-Package cmdlet to get a list of packages referenced in a specific project (or in an entire Solution). Regarding the license information for each package, from what I've seen you can only get the license URL, not a short string representing the license type. Here's an example for a Solution of mine returning a list of entries, each one consisting of the package identifier and the link to the license: Get-Package | Select-Object Id,LicenseUrl The output is a list of package Ids, each paired with its LicenseUrl. Other elements that can be returned are documented in the Nuspec reference , in the metadata section (e.g. the version of the package, a short description, etc.).
{ "source": [ "https://softwareengineering.stackexchange.com/questions/235082", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/88941/" ] }
235,094
According to Microsoft documentation, the Wikipedia SOLID principles article, and most IT architects, we must ensure that each class has only one responsibility. I would like to know why, because although everybody seems to agree with this rule, nobody seems to agree on the reasons for it. Some cite better maintenance, some say it provides easier testing, or makes the class more robust, or more secure. What is correct, and what does it actually mean? Why does it make maintenance better, testing easier, or the code more robust?
Modularity. Any decent language will give you the means to glue together pieces of code, but there's no general way to unglue a large piece of code without the programmer performing surgery on the source. By jamming a lot of tasks into one code construct, you rob yourself and others of the opportunity to combine its pieces in other ways, and introduce unnecessary dependencies that could cause changes to one piece to affect the others. SRP is just as applicable to functions as it is to classes, but mainstream OOP languages are relatively poor at gluing functions together.
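As a small, hypothetical Java sketch of that gluing argument (the report example and all the names are invented): a method that both formats and sends a report can only ever be reused as a whole, whereas splitting the two responsibilities lets callers recombine the pieces.

// Hypothetical example: one class per responsibility, so the pieces can be recombined.
class ReportFormatter {
    // Formats raw values into a report body; knows nothing about delivery.
    String format(String title, double total) {
        return title + ": total = " + total;
    }
}

class ReportSender {
    // Delivers an already-formatted report; knows nothing about formatting.
    void send(String recipient, String body) {
        System.out.println("Sending to " + recipient + " -> " + body);
    }
}

public class SrpDemo {
    public static void main(String[] args) {
        ReportFormatter formatter = new ReportFormatter();
        ReportSender sender = new ReportSender();
        String body = formatter.format("Q1 sales", 1234.5);
        // The formatter can now also feed a file writer, a test, etc.,
        // without dragging the sending code along.
        sender.send("boss@example.com", body);
    }
}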
{ "source": [ "https://softwareengineering.stackexchange.com/questions/235094", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/21623/" ] }
235,096
We have quite a lot of places in the source code of our application , where one class has many methods with same names and different parameters. Those methods always have all the parameters of a 'previous' method plus one more. It's a result of long evolution (legacy code) and this thinking (I believe): " There is a method M that does thing A. I need to do A + B. OK, I know ... I will add a new parameter to M, create a new method for that, move code from M to the new method with one more parameter, do the A + B over there and call the new method from M with a default value of the new parameter. " Here is an example (in Java-like-language): class DocumentHome { (...) public Document createDocument(String name) { // just calls another method with default value of its parameter return createDocument(name, -1); } public Document createDocument(String name, int minPagesCount) { // just calls another method with default value of its parameter return createDocument(name, minPagesCount, false); } public Document createDocument(String name, int minPagesCount, boolean firstPageBlank) { // just calls another method with default value of its parameter return createDocument(name, minPagesCount, false, ""); } public Document createDocument(String name, int minPagesCount, boolean firstPageBlank, String title) { // here the real work gets done (...) } (...) } I feel like this is wrong. Not only that we can't keep adding new parameters like this forever, but the code hard to extend/change because of all the dependencies between methods. Here are few ways how to do this better: Introduce a parameter object: class DocumentCreationParams { String name; int minPagesCount; boolean firstPageBlank; String title; (...) } class DokumentHome { public Document createDocument(DocumentCreationParams p) { // here the real work gets done (...) } } Set the parameters to the DocumentHome object before we call createDocument() @In DocumentHome dh = null; (...) dh.setName(...); dh.setMinPagesCount(...); dh.setFirstPageBlank(...); Document newDocument = dh.createDocument(); Separate the work into different methods and call them as needed: @In DocumentHome dh = null; Document newDocument = dh.createDocument(); dh.changeName(newDocument, "name"); dh.addFirstBlankPage(newDocument); dh.changeMinPagesCount(new Document, 10); My questions: Is the described problem really a problem? What do you think about suggested solutions? Which one would you prefer (based on your experience)? Can you think of any other solution?
Maybe try the builder pattern ? (note: fairly random Google result :) var document = new DocumentBuilder() .FirstPageBlank() .Name("doc1final(2).doc") .MinimumNumberOfPages(4) .Build(); I cannot give a full rundown of why I prefer builder over the options you give, but you have identified a large problem with a lot of code. If you think you need more than two parameters to a method you probably have your code structured wrongly (and some would argue one!). The problem with a params object is (unless the object you create is in some way real) you just push the problem up a level, and you end up with a cluster of unrelated parameters forming the 'object'. Your other attempts look to me like someone reaching for the builder pattern but not quite getting there :)
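To make that concrete, here is a minimal sketch of such a builder in Java, reusing the Document example from the question; the builder class and its method names are invented for illustration rather than taken from any library.

// Minimal builder sketch for the Document example; all names are illustrative.
class Document {
    final String name;
    final int minPagesCount;
    final boolean firstPageBlank;
    final String title;

    Document(String name, int minPagesCount, boolean firstPageBlank, String title) {
        this.name = name;
        this.minPagesCount = minPagesCount;
        this.firstPageBlank = firstPageBlank;
        this.title = title;
    }
}

class DocumentBuilder {
    // Defaults live in one place instead of being scattered across overloads.
    private String name = "untitled";
    private int minPagesCount = -1;
    private boolean firstPageBlank = false;
    private String title = "";

    DocumentBuilder name(String name) { this.name = name; return this; }
    DocumentBuilder minPagesCount(int count) { this.minPagesCount = count; return this; }
    DocumentBuilder firstPageBlank() { this.firstPageBlank = true; return this; }
    DocumentBuilder title(String title) { this.title = title; return this; }

    Document build() {
        return new Document(name, minPagesCount, firstPageBlank, title);
    }
}

public class BuilderDemo {
    public static void main(String[] args) {
        Document doc = new DocumentBuilder()
                .name("doc1final(2).doc")
                .minPagesCount(4)
                .firstPageBlank()
                .build();
        System.out.println(doc.name + ", min pages: " + doc.minPagesCount);
    }
}

Callers mention only the options they care about, and adding a new option later means adding one builder method instead of yet another overload.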
{ "source": [ "https://softwareengineering.stackexchange.com/questions/235096", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/27403/" ] }
235,196
I wonder why frameworks/libraries have their own helpers although they exist natively already. Let's take jQuery and AngularJS . They have their own each iterator functions: jQuery.each() angular.forEach() But we have Array.prototype.forEach . Similarly, jQuery.parseJSON() angular.fromJson() But we have the JSON.parse() function in vanilla JavaScript.
Because when those libraries were written, some major browsers did not support those features. Once written and used, these features cannot be removed from these libraries without breaking many applications. (In this case, "major browser" means a browser that still has large market share, which includes older versions of browsers like Internet Explorer, where large number of users don't necessarily upgrade to the latest version.)
{ "source": [ "https://softwareengineering.stackexchange.com/questions/235196", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/125900/" ] }
235,313
I'm confused about some of the notations of UML class diagrams. Pretty sure I know what Association means. Any relationship between instances of two classes, where an instance of one class needs to know about an instance of the second class in order to perform it's work - is an Association relationship. An Association often means class A has a reference (field) to an instance of class B. However, I'm having trouble understanding what the Aggregation and Composition arrows mean. Part of my confusion was caused by encountering different definitions of these notations. Two definitions of the Aggregation notation: Definition 1: An Aggregation notation between two classes is suitable whenever an instance of class A holds a collection of instances of class B (e.g. a List, Array, whatever). Definition 2: An Aggregation link between two classes is suitable if an instance of class A holds a reference to an instance of class B, and the B instance is dependent on the lifecycle of the A instance. Meaning: When the instance of class A get's deleted, so will the instance of class B. The instance of class B is entirely contained by the instance of class A, as opposed to the instance of class A simply owning a reference to the instance of class B (which is regular Association). Regarding what the Composition notation means and how it differs from the Aggregation notation, I'm not sure. Please clarify the definitions and help me understand. Concrete examples would be welcome.
The three links Association, Aggregation and Composition form a kind of scale on how closely two classes are related to each other. On the one end of the scale, there is Association, where objects of the two classes can know about each other, but they do not affect each others lifetime. The objects can exist independently and which class A object knows about which class B objects can vary over time. On the other end of the scale, there is Composition. Composition represents a part -- whole relationship such that class B is an integral part of class A. This relationship is typically used if objects of class A can't logically exist without having a class B object. The Aggregation relation is somewhere between those two ends, but nobody seems to agree where exactly, so there is also no universally agreed definition of what an Aggregation means. In that sense, both definitions that you found are correct and if you ask 10 people, you risk getting 11 different definitions.
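UML relationships do not map to code one-to-one, but a rough Java sketch with invented Car/Engine/Driver classes can illustrate the two ends of the scale: in the composition-like case the whole creates and owns its part, while in the plain association the objects merely know about each other and live independently.

// Rough illustration only; the mapping from UML to code is not exact.
class Engine {
    final int horsepower;
    Engine(int horsepower) { this.horsepower = horsepower; }
}

class Car {
    // Composition-like: the Car creates its Engine and never hands it out,
    // so the Engine's lifetime is tied to the Car's.
    private final Engine engine = new Engine(150);

    int power() { return engine.horsepower; }
}

class Driver {
    // Association-like: the Driver is given a Car that exists on its own
    // and may be shared or replaced at any time.
    private Car currentCar;

    void drive(Car car) { this.currentCar = car; }
    boolean isDriving() { return currentCar != null; }
}

public class UmlDemo {
    public static void main(String[] args) {
        Car car = new Car();
        Driver driver = new Driver();
        driver.drive(car);
        System.out.println(car.power() + " hp, driving: " + driver.isDriving());
    }
}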
{ "source": [ "https://softwareengineering.stackexchange.com/questions/235313", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/121368/" ] }
235,527
I've searched about this here and on StackOverflow and found some differences between the two. But I'm still not sure in what cases one would prefer a Singleton, and in what cases one would choose to use a static class . (In languages which don't support 'static classes', like Java, I'm obviously referring to classes containing only static methods and fields). Please give me concrete examples of cases where you would pick each one, and explain why.
A case where a static class might be a good idea is when you want to collect related pieces of functionality, but you don't need to have any internal state in any object. An example could be the Math class in Java. It contains a whole bunch of related functions that are accessed outside the context of any specific object instance. I've done similar things where a set of common utility functions that are used in multiple places in an application are grouped together into a single utility class. A singleton is used when you do want an actual object (with its own internal state and everything), and you want to limit your system to exactly one instance of that object. This might be useful if you have some kind of shared resource, such as a database, an in-memory cache, or maybe some specialized piece of hardware like a robotic arm. Many parts of your program might want to use this resource and you might want to have all access to the resource go through a single point. A singleton isn't always the only way to handle these situations, but it's one of the few places I think a singleton might be a good choice.
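To make the distinction concrete, here is a minimal Java sketch (with invented names): a static utility class grouping stateless functions, next to a singleton guarding a single shared, stateful resource.

// Stateless helpers: no instances, no state, just grouped functions.
final class AngleUtils {
    private AngleUtils() { } // prevent instantiation

    static double toRadians(double degrees) {
        return degrees * Math.PI / 180.0;
    }
}

// A singleton: exactly one instance holding shared state (here, a toy cache).
final class Cache {
    private static final Cache INSTANCE = new Cache();
    private final java.util.Map<String, String> entries = new java.util.HashMap<>();

    private Cache() { }

    static Cache getInstance() { return INSTANCE; }

    void put(String key, String value) { entries.put(key, value); }
    String get(String key) { return entries.get(key); }
}

public class SingletonVsStaticDemo {
    public static void main(String[] args) {
        System.out.println(AngleUtils.toRadians(180));
        Cache.getInstance().put("greeting", "hello");
        System.out.println(Cache.getInstance().get("greeting"));
    }
}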
{ "source": [ "https://softwareengineering.stackexchange.com/questions/235527", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/121368/" ] }
235,558
This is a newbie question, but I couldn't find a newbie-proof enough answer on Google. What do people mean when they say 'state' - in programming in general, and in OO programming specifically? Also, what is mutable and immutable state - again, generally in programming and also specifically in OOP?
You have state when you associate values (numbers, strings, complex data structures) to an identity and a point in time. For example, the number 10 by itself does not represent any state: it is just a well-defined number and will always be itself: the natural number 10. As another example, the string "HELLO" is a sequence of five characters, and it is completely described by the characters it contains and the sequence in which they appear. In five million years from now, the string "HELLO" will still be the string "HELLO": a pure value. In order to have state you have to consider a world in which these pure values are associated to some kind of entities that possess an identity . Identity is a primitive idea: it means you can distinguish two things regardless of any other properties they may have. For example, two cars of the same model, same colour, ... are two different cars. Given these things with identity, you can attach properties to them, described by pure values. E.g., my car has the property of being blue. You can describe this fact by associating the pair ("colour", "blue") to my car. The pair ("colour", "blue") is a pure value describing the state of that particular car. State is not only associated to a particular entity, but also to a particular point in time. So, you can say that today, my car has state ("colour", "blue") Tomorrow I will have it repainted in black and the new state will be ("colour", "black") Note that the state of an entity can change, but its identity does not change by definition. Well, as long as the entity exists, of course: a car may be created and destroyed, but it will keep its identity throughout its lifetime. It does not make sense to speak about the identity of something that does not exist yet / any more. If the values of the properties attached to a given entity change over time, you say that the state of that entity is mutable . Otherwise, you say that the state is immutable . The most common implementation is to store the state of an entity in some kind of variables (global variables, object member variables), i.e. to store the current snapshot of a state. Mutable state is then implemented using assignment: each assignment operation replaces the previous snapshot with a new one. This solution normally uses memory locations to store the current snapshot. Overwriting a memory location is a destructive operation that replaces a snapshot with a new one. ( Here you can find an interesting talk about this place-oriented programming approach.) An alternative is to view the subsequent states (history) of an entity as a stream (possibly infinite sequence) of values, see e.g. Chapter 3 of SICP . In this case, each snapshot is stored at a different memory location, and the program can examine different snapshots at the same time. Unused snapshots can be garbage-collected when they are no longer needed. Advantages / disadvantages of the two approaches Approach 1 consumes less memory and allows to construct a new snapshot more efficiently since it involves no copying. Approach 1 implicitly pushes the new state to all the parts of a program holding a reference to it, approach 2 would need some mechanism to push a snapshot to its observers, e.g. in the form of an event. Approach 2 can help to prevent inconsistent state errors (e.g. partial state updates): by defining an explicit function that produces a new state from an old one it is easier to distinguish between snapshots produced at different points in time. 
Approach 2 is more modular in that it makes it easy to produce views on the state that are independent of the state itself, e.g. using higher-order functions such as map and filter .
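As a small illustration of the two implementation approaches, here is a hypothetical Java sketch (the Car classes are invented for the example): the mutable version overwrites its snapshot in place, while the immutable version returns a new snapshot and leaves the old one untouched.

// Mutable state: assignment overwrites the current snapshot.
class MutableCar {
    private String colour;
    MutableCar(String colour) { this.colour = colour; }
    void repaint(String newColour) { this.colour = newColour; } // destructive update
    String colour() { return colour; }
}

// Immutable state: each "change" produces a new snapshot; old ones stay valid.
final class ImmutableCar {
    private final String colour;
    ImmutableCar(String colour) { this.colour = colour; }
    ImmutableCar repaint(String newColour) { return new ImmutableCar(newColour); }
    String colour() { return colour; }
}

public class StateDemo {
    public static void main(String[] args) {
        MutableCar m = new MutableCar("blue");
        m.repaint("black"); // the "blue" snapshot is gone

        ImmutableCar today = new ImmutableCar("blue");
        ImmutableCar tomorrow = today.repaint("black");
        // Both snapshots can still be examined side by side.
        System.out.println(m.colour() + ", " + today.colour() + " -> " + tomorrow.colour());
    }
}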
{ "source": [ "https://softwareengineering.stackexchange.com/questions/235558", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/121368/" ] }
235,707
I am working on a project where I am trying to decide between using a standard SQL relational database or JSON objects to store data about an event or activity. The project will store data on multiple event types so I have decided to just describe one event type for this question. The live music event (described in full using the JSON schema at the bottom of this question) is an object that stores data such as where the event will take place, the time/date of the event and the cost of the event. The live music event object has both one-to-one (event--> name, event --> description) and one-to-many (event--> venues, event--> dates, event--> ticket types) relationships. Furthermore, the event object can contain one or more performer IDs, which link to the performer object. The performer object stores data on musicians who are performing at the live music event. The data will be queried by users using both simple ("Find me events with 'x' name") and complex ("Find me events with 'x' music genre and 'y' cost within a radius of 'z' from my current location") queries. The data will be submitted by users using a web form. As you can probably tell from the defined JSON schema, I was originally going to use JSON objects to store this data but I've heard from some people who say that because my data is purely relational, I should stick to the older methods. I would appreciate any thoughts on the pros and cons of each approach given my needs. If you need anything clarified, please feel free to ask. { "event": { "eventID":{ "type":"string" }, "eventType":{ "type":"array", "eventTypeItem":{ "type":"string" } }, "eventName":{ "type":"string" }, "eventDescription":{ "type":"string" }, "eventVenueList":{ "type":"array", "eventVenueListID":{ "type":"integer" } }, "eventURL":{ "type":"string" }, "eventTwitter":{ "type":"string" }, "eventFB":{ "type":"string" }, "eventInstagram":{ "type":"string" }, "eventEmail":{ "type":"string", "format":"email" }, "eventContactPerson":{ "type":"string" }, "eventDoorTime": { "type":"string", "format":"date-time" }, "eventPerformerIDList":{ "type":"array", "liveMusicPerformerID":{ "type":"integer" } }, "eventSetList":{ "type":"array", "eventPerformerID":{ "type":"integer" }, "eventPerformerStartTime":{ "type":"string", "format":"date-time" }, "eventPerformerEndTime":{ "type":"string", "format":"date-time" } }, "eventDateList": { "type":"array", "eventDateItem": { "type":"string", "format":"date-time" } }, "eventDateStartTime": { "type":"string", "format":"date-time" }, "eventDateEndTime": { "type":"string", "format":"date-time" }, "eventTicket":{ "type":"array", "eventTicketType":{ "type":"string" }, "eventTicketLowPrice":{ "type":"number" }, "eventTicketHighPrice":{ "type":"number" }, "eventDatesAdvancePrice": { "type":"number" } } }, "performer": { "performerID": { "type":"integer" }, "performerType": { "type":"string" }, "performerName": { "type":"string" }, "performerAlternateName": { "type":"array", "performerAlterateNameItem":{ "type":"string" } }, "performerGenreList": { "type":"array", "performerGenreItem":{ "type":"string" } }, "performerURL": { "type":"string" } } }
I think your question really boils down to: When should I use a NoSQL approach vs. RDBMS? You settled on JSON early (a NoSQL-ish decision), perhaps because you've got Ajax consumers. The answer of course to when to use NoSQL approaches vs. RDBMS's is basically about what type of data you're working with and what consumers you anticipate having. If your data is essentially relational (fairly flat hierarchies, no weird data types like images or audio, predictable relationships between the schemas that can be easily described in keys), and your consumers are anticipated to eventually include people who want to do Business Intelligence queries (ad hoc querying), then an RDBMS is the way to go. It's fairly easy to turn a query into a JSON representation, so it doesn't significantly burden your Ajax consumers -- it just adds a little transformation coding into your endpoints (REST/SOAP/whatever). Conversely , if your data is very hierarchical (deep schemas), contains weird data types like images, audio, video, etc., there are few relationships between entities, and you know that your end users will not be doing BI, then NoSQL/storing JSON may be appropriate. Of course, even these general guidelines aren't rock solid. The reason Google developed Google File System, MapReduce (work which was used by Doug Cutting to build Hadoop at Yahoo) and later BigQuery (a NoSQL oriented [schemaless] way of managing large scale data) was precisely because they had a lot of ad hoc BI requests, and they couldn't get relational approaches to scale up to the tera/peta/exa/zetta/yotta scales they were trying to manage. The only practical approach was to scale out, sacrificing some of ad-hoc-query user friendliness that an RDBMS provides, and substituting a simple algorithm (MapReduce) that could be coded fairly easily for any given query. Given your schema above, my question would basically be: Why wouldn't you use an RDBMS? I don't see much of a reason not to. Our profession is supposed to be engineering oriented, not fashion oriented, so our instinct should be to pick the easiest solution that works, right? I mean, your endpoints may have to do a little translation if your consumers are Ajaxy, but your data looks very flat and it seems likely that business users are going to want to do all kinds of ad hoc querying on things like music events (Which event was most attended within 50 miles of our capital city last year?) 'Go not to the elves for counsel, for they will say both no and yes.' -- Frodo
{ "source": [ "https://softwareengineering.stackexchange.com/questions/235707", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/124563/" ] }
235,872
Lately I've been reading about Hypermedia as the Engine of Application State (HATEOAS), the constraint that is claimed to make a web API "truly RESTful". It boils down to basically including links with every response to the possible transitions you can make from the current state. Let me illustrate what HATEOAS is based on my understanding - and please do correct me if I missed something. / GET: { "_links": { "child": [ { "href": "http://myapi.com/articles", "title": "articles" } ] } } /articles?contains=HATEOAS GET: { "_items": [ { "uri": "http://myapi.com/articles/0", "title": "Why Should I Care About HATEOAS?" }, { "uri": "http://myapi.com/articles/1", "title": "HATEOAS: Problem or Solution?" } ], "_links": { "self": { "href": "http://myapi.com/articles", "title": "articles" }, "parent": { "href": "http://myapi.com/", "title": "home" } } } POST: { "title": "A New Article", "body": "Article body", "tags": [ "tag1", "tag2" ] } /articles/0 GET: { "title": "Why Should I Care About HATEOAS?", "body": "Blah blah blah" "tags": [ "REST", "HATEOAS" ], "_links": { "self": { "href": "http://myapi.com/articles/0", "title": "article" }, "parent": { "href": "http://myapi.com/articles", "title": "articles" } } } HATEOAS is claimed to provide two major benefits: The entire service is discoverable starting form the root URI, documentation is no longer needed. The client is decoupled from the server which can now change the URI structure freely. This eliminates the need for API versioning. But in my view, a service is a lot more than its URI structure. To use it effectively, you also need to know: what query parameters you can use and their possible values the structure of the JSON/XML/whatever documents you need to send in your POST/PATCH/etc requests the structure of the response sent by the server the possible errors that might occur ... Based on the above, HATEOAS only solves a tiny fraction of the discoverability and coupling problems. You still need to document the above four aspects and clients will still be strongly coupled to the server because of them. To avoid breaking clients, you still need to version your API. The only benefit it provides is that you can change your URL structure more or less freely (by the way, what happened to the principle "Cool URIs don't change" ?). Is my understanding correct?
I think your instincts are largely correct; those proclaimed benefits really aren't all that great, as for any non-trivial web application the clients are going to have to care about the semantics of what they're doing as well as the syntax. But that doesn't mean that you shouldn't make your application follow the principles of HATEOAS! What does HATEOAS really mean? It means structuring your application so that it is in principle like a web site , and that all operations that you might want to do can be discovered without having to download some complex schema. (Sophisticated WSDL schemas can cover everything, but by the time they do, they've exceeded the ability of virtually every programmer to ever understand, let alone write! You can view HATEOAS as a reaction against such complexity.) HATEOAS does not just mean rich links. It means using the HTTP standard's error mechanisms to indicate more exactly what went wrong; you don't have to just respond with “waaah! no” and can instead provide a document describing what was actually wrong and what the client might do about it. It also means supporting things like OPTIONS requests (the standard way of allowing clients to find out what HTTP methods they can use) and content type negotiation so that the format of the response can be adapted to a form that clients can handle. It means putting in explanatory text (or, more likely, links to it) so that clients can look up how to use the system in non-trivial cases if they don't know; the explanatory text might be human readable or it might be machine readable (and can be as complex as you want). Finally, it means that clients do not synthesise links (except for query parameters); clients will only use a link if you told it to them. You have to think about having the site browsed by a user (who can read JSON or XML instead of HTML, so a little weird) with a great memory for links and an encyclopædic knowledge of the HTTP standards, but otherwise no knowledge of what to do. And of course, you can use content type negotiation to serve up an HTML(5)/JS client that will let them use your application, if that's what their browser is prepared to accept. After all, if your RESTful API is any good, that should be “trivial” to implement on top of it?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/235872", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/36622/" ] }
235,976
When I look at things like Twitter, it seems like the idea is so simple to implement initially that the founder does not have to be very technically talented. Basically it's just a guy with a good idea. But when an app / software blows up and entails much harder engineering problems, how does the founder deal with it? Have we seen cases in which the original guy with the good idea somehow falls off the enterprise as it becomes more about technical challenges and less about ideas?
When you get large enough that scaling well really matters and you have to start dealing with things like caching and database tuning, hopefully you are making enough money that you can hire somebody who specializes in performance tuning (or even better, a team of people, each specialized in a different sub-area). When a startup begins, each founder has to do a bit of everything. I'm a coder, but I help with marketing and do some of the accounts, because there are just not enough hands to have everybody do only the thing they are best at. You want a small number of generalists. In an established business, you want everybody just doing the thing they are best at. If you have a gap in expertise, you fill it with somebody who has that expertise. You want a large number of specialists.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/235976", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/106432/" ] }
236,105
Factory pattern (or at least the use of FactoryFactory.. ) is the butt of many jokes, like here . Apart from having verbose and "creative" names like RequestProcessorFactoryFactory.RequestProcessorFactory , is there anything fundamentally wrong with the factory pattern if you have to program in Java/C++ and there is a usecase for Abstract_factory_pattern ? How would another popular language (for example, Ruby or Scala ) avoid having to use it while managing similar complexity? The reason I am asking is I see mostly the criticism of factories mentioned in the context of the Java/ Java EE ecosystem, but they never explain how other languages/frameworks solve them.
Your question is tagged with "Java", so it's no surprise you're asking why the Factory pattern is being mocked: Java itself comes with nicely packaged abuses of that pattern. For example, try loading an XML document from a file and running an XPath query against it. You need something like 10 lines of code just to set up the Factories and Builders: DocumentBuilderFactory builderFactory = DocumentBuilderFactory.newInstance(); DocumentBuilder builder = builderFactory.newDocumentBuilder(); Document xmlDocument = builder.parse(new FileInputStream("c:\\employees.xml")); XPath xPath = XPathFactory.newInstance().newXPath(); xPath.compile(expression).evaluate(xmlDocument); I wonder if the guys designing this API ever worked as developers, or did they just read books and throw things around? I understand they just didn't want to write a parser themselves and left the task to others, but it still makes for an ugly implementation. Since you're asking what the alternative is, here's loading the XML file in C#: XDocument xml = XDocument.Load("c:\\employees.xml"); var nodes = xml.XPathSelectElements(expression); I suspect newly hatched Java developers see the Factory madness and think it's OK to do that - if the geniuses who built Java themselves have used them that much. Factory and all the other patterns are tools, each suited for a particular job. If you apply them to jobs they aren't suited for, you're bound to have ugly code.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/236105", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/106039/" ] }
236,261
So while I've been doing some lengthy compiles I decided to take the C++ general test on ODesk and came across this question. If I'm not mistaken, given the wording (or lack thereof) all of these could be true. a. int Foo() { } int Foo(int bar) { } b. Well, return void; would be incorrect semantically but functions can obviously have void return types. void Foo() { } c. This is the definition of inline functions, yes. d. Without going into much detail about the placement of the following elements, typedef void (*Func)(int); Func functions[2]; void Foo(int bar) { } void Bar(int foo) { } functions[0] = &Foo; functions[1] = &Bar; Further, you could always do this using lambdas and functors . e. void Foo(int& bar) { ++bar; } int foobar = 5; Foo(foobar); f. int bar = 5; int& GetBar() { return bar; } GetBar() = 6; g. int bar = 5; int* GetBar() { return &bar; } (*GetBar()) = 5; I fail to see where this question has any truly false answers. Am I missing something? Needless to say I ran out of time and failed the whole thing. I guess I'm a bad C++ programmer. :(
This whole question is dodgy. The question statement implies the possibility of multiple choices, while the radio buttons indicate a single choice. Furthermore, b is pretty suspect, as void functions don't return anything. D is also questionable, for as far as I know, you cannot have an array of functions. Sure, you can have an array of function pointers, but that's not exactly the same thing.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/236261", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/47136/" ] }
236,277
The project I'm currently working on has an issue: bugs and tasks are often assigned to people who are too new or too inexperienced and their work ends up producing more bugs down the road. The problem is that parts of our software are much more "dangerous" to work on than others due to code quality issues. I've been trying to combat this issue by estimating the risk associated with tasks and paying close attention to which developers get assigned which tasks. We use JIRA so I started labeling issues to keep track of this estimation. I've noticed that I've ended up using several metrics to categorize a bug/task: How clear/straightforward it is. E.g. whether it's something that will need a lot of design work or just a simple UI bug fix. How maintainable the affected area of the code is. Is it a well-designed area or a big ball of mud. How much of the program I think will be affected by the required change. My labels are kind of messy since I didn't have a clear idea when I started what the possible categories would be and I still don't. I'm thinking about requesting a new field be added (something like "Risk") so that we can require an estimate before assigning the work to someone. Has anyone dealt with this sort of thing before?
One of the failings of most bug tracking approaches is that they only deal with one side of the equation - the end user's view of the system. Ratings range from "this is a critical bug to fix" to "this can wait a week" (priority), and from "this bug is painful" to "this is a missing s in a pluralization glitch" (severity). A blog post describing multidimensional bug tracking looks at addressing this by including the developer view: PEF and REV. The PEF values are the user's view: Pain - how painful is the bug when it is encountered? Effort - how much effort does it take to work around? Frequency - how often does the bug occur? The REV side is the developer's view: Risk - how risky is the fix? Effort - how much effort will it take to fix? Verifiability - how easy is it to verify that the bug is fixed? Each of these is measured on a 1..9 scale, with 1 being low/easy and 9 being high/hard. The numbers are added together to give a score for PEF and for REV. The part that addresses the points you described - how clear/straightforward the task is (e.g. whether it's something that will need a lot of design work or just a simple UI bug fix), how maintainable the affected area of the code is (is it a well-designed area or a big ball of mud?), and how much of the program will be affected by the required change - all of that factors into the effort and risk described in REV. Yes, it is something that's been fought with before. I have (in the past) used this model for custom fields in Redmine and it was reasonably successful. The big advantage of this comes in when you compare the PEF and REV scores. If you have a PEF of 21 and a REV of 7, that's something that can be a big win, while a PEF of 7 and a REV of 21 is something that should be avoided for a while, because the risk and effort side likely outweigh the benefit of fixing it. One can then look at the REV score and assign things with low Risk to the less experienced developers (low risk, high effort items are often ideal for this situation).
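The arithmetic itself is trivial; a toy Java sketch (with made-up scores) shows how the two sums are formed and compared.

// Toy PEF/REV calculation with invented scores, each on a 1..9 scale.
public class PefRevDemo {
    static int pef(int pain, int effortToWorkAround, int frequency) {
        return pain + effortToWorkAround + frequency;
    }

    static int rev(int risk, int effortToFix, int verifiability) {
        return risk + effortToFix + verifiability;
    }

    public static void main(String[] args) {
        int pef = pef(8, 7, 6); // hurts a lot, hard to work around, happens often
        int rev = rev(2, 3, 2); // low-risk, cheap, easy-to-verify fix
        System.out.println("PEF=" + pef + " REV=" + rev);
        if (pef > rev) {
            System.out.println("Likely a big win - fix it soon.");
        } else {
            System.out.println("Cost and risk may outweigh the benefit for now.");
        }
    }
}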
{ "source": [ "https://softwareengineering.stackexchange.com/questions/236277", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/117009/" ] }
236,415
I know absolutely nothing about low-level stuff, so this will be a very newbie question. Please excuse my ignorance. Is machine language - the series of numbers that tell the physical computer exactly what to do - always binary? I.e. is it always composed of only zeros and ones? Or could it also be composed of numbers such as 101, 242, 4 etc.?
Everything in a computer (to be precise, in any typical contemporary computer) is binary, at a certain level. "1s and 0s" is an abstraction, an idea we use to represent a way of distinguishing between two values. In RAM, that means higher and lower voltage. On the hard drive, that means distinct magnetic states, and so on. Using Boolean logic and a base 2 number system, a combination of 1s and 0s can represent any number, and other things (such as letters, images, sounds, etc) can be represented as numbers. But that's not what people mean when they say "binary code." That has a specific meaning to programmers: "Binary" code is code that is not in text form. Source code exists as text; it looks like a highly formalized system of English and mathematical symbols. But the CPU doesn't understand English or mathematical notation; it understands numbers. So the compiler translates source code into a stream of numbers that represent CPU instructions that have the same underlying meaning as the source code. This is properly known as "machine code," but a lot of people call it "binary".
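A small Java sketch can make the "everything is numbers" point concrete: the same stored value can be printed in base 2 or base 16, or interpreted as a character code.

public class BinaryDemo {
    public static void main(String[] args) {
        int n = 65;
        // The value is stored as bits; base 2, base 10 and base 16 are just
        // different ways of writing the same stored number.
        System.out.println(Integer.toBinaryString(n)); // 1000001
        System.out.println(Integer.toHexString(n));    // 41
        // Interpreted as a character code, the same number is the letter 'A'.
        System.out.println((char) n);                  // A
    }
}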
{ "source": [ "https://softwareengineering.stackexchange.com/questions/236415", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/121368/" ] }
236,726
In a non-agile development team, a lead developer generally: sets the standard (coding and otherwise); researches new technologies for the team; sets the technical direction for the team; has the final say on matters; and designs the architecture of a system. However, an agile team works differently: an agile team relies on emergent design, rather than up-front design; an agile team designs together, rather than having design dictated by one person; an agile team decides on its own technical direction, whatever is best to deliver the project. Where does this leave a lead developer in an agile team? Is it possible to have a lead developer in an agile team? Does an agile team demand different responsibilities from a lead?
Nothing in agile changes how the lead developer should function. They should be involving the rest of the team with system architecture decisions and technical direction no matter what development model is being followed. Handing out decisions by edict is a terrible way for any development team to run. Agile just makes getting buy-in from the rest of the team a more explicit process, and a lead developer should have been doing that anyway. Just because there isn't a set lead developer role in a scrum methodology doesn't mean the more experienced programmers' opinions aren't the most respected. Agile is not letting everyone go wild on their own thing and then trying to stick it all together; there is still a unified vision and direction that needs to be set.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/236726", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/12540/" ] }
236,733
I have an eCommerce platform that creates a basket as soon as the user lands on the site if they don't already have one (from session or cookies). It then stores the basket information in session and in cookies to remember what the user has added to their basket for a month. The problem is if they aren't capable of having a session or cookies, such as a bot, then it creates a basket every time they visit a page. This means a bot crawling through the website can easily create hundreds of baskets. One of the options we're exploring is changing our code so that a basket is only created when a user adds an item, and so far it seems the best option, but it's also the most time consuming option as a lot of the code base assumes a basket exists at all times. That would need to be changed so that it checks for a basket and handles a basket not existing. We'd like to find a less intensive solution if possible. Another option we've explored is periodically clearing out the table in the database of all old baskets to help mitigate the issue. However this is just mitigating, and not solving the issue. I'd like to solve the problem if I can. How can I identify if the user is incapable of having a session and cookies so that I can stop the basket from being created? Or is there a better way of dealing with bots in this instance?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/236733", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/110802/" ] }
236,995
I've spent most of the last several years working mainly with C# and SQL. Every programmer I've worked with over that time was in the habit of placing the opening brace of a function or control flow statement on a new line. So ... public void MyFunction(string myArgument) { //do stuff } if(myBoolean == true) { //do something } else { //do something else } I have always been struck by how space wasteful this is, especially in if/else statements. And I know alternatives exist in later versions of C#, like: if(myBoolean == true) //do something on one line of code But hardly anyone used them. Everyone did the curly-brace-on-newline thing. Then I got back into doing JavaScript after a long absence. In my memory, JavaScript developers used to do the exact same curly-brace-newline thing but with all the fancy new libraries and stuff, most developers put the opening brace after the declaration: function MyJavaScriptFunction() { //do something } You can see the sense in this, because since using closures and function pointers has become popular in JavaScript, it saves a lot of space and makes things more readable. So I wondered why it wasn't seen as the done thing in C#. In fact, if you try the above construct in Visual Studio 2013, it actually reformats it for you, putting the opening brace on a new line! Now, I just saw this question on Code Review SE: https://codereview.stackexchange.com/questions/48035/questions-responses-let-me-tell-you-about-you In which I learned that in Java, a language I'm not overly familiar with, it's considered de-rigour to open your curly braces right after the declaration, in modern JavaScript fashion. I had always understood that C# was originally modelled after Java, and kept to a lot of the same basal coding standards. But in this instance, it seems not. So I presume there must be a good reason: what is the reason? Why do C# developers (and Visual Studio) enforce opening curly brackets on a new line?
The brace at the end of the line is the ancient K&R C standard, from Brian Kernighan and Dennis Ritchie's book The C Programming Language , which they published in 1978 after co-inventing the UNIX operating system and the C programming language (C was mostly designed by Ritchie, based on B, which another Bell employee Ken Thompson had adapted from the older BCPL programming language), at AT&T. There used to be flame wars about "the one true brace style." So Ritchie created the C language, and Kernighan wrote the first tutorial, when computer displays only showed a few lines of text. In fact, UNICS (later UNIX) development started on a DEC PDP-7, which used a typewriter, printer and paper tape for a user interface. UNIX and C were finished on the PDP-11, with 24-line text terminals. So vertical space was indeed at a premium. We all have slightly better displays and higher resolution printers today, right? I mean, I don't know about you, but I have three 24" 1080p displays in front of me right now. :-) Also, so much of that little book The C Programming Language is code samples that putting the braces at the ends of the lines instead of on their own lines allegedly saved an appreciable amount of money on printing. What is truly important is consistency throughout a project, or at least within a given source code file. There are also scientific studies showing that the brace on its own line (indented to the same level as the code, in fact) improves code comprehension despite what people think they think of the aesthetics. It makes it very clear to the reader, visually and instinctively, which code runs in which context. if( true ) { // do some stuff } C# has always supported evaluating a single command after a branching expression, by the way. In fact, that's the only thing it does, even now. Putting code in braces after a branching expression just makes that one command a goto (the compiler creates scope using jmp instructions). C++, Java, C# and JavaScript are all more or less based on C, with the same underlying parsing rules, for the most part. So in that sense, C# is not "based on Java." Summing up, this is a bit of a religious/flame-war issue. But there are studies making it pretty clear that arranging code in blocks improves human comprehension. The compiler couldn't care less. But this is also related to the reason why I never put a line of code after a branch without braces--it's just too easy for me or another programmer to slap another line of code in there later and slip on the fact that it will not execute in the same context with the line right before or after it. EDIT : Just go look at the Apple goto fail bug for a perfect example of this exact issue, which had very serious real world consequences. if( true ) doSomething(); becomes... if( true ) doSomething(); doSomethingElse(); In this case, doSomethingElse() executes every time, regardless of the outcome of the test, but because it is indented to the same level as the doSomething() statement, it's easy to miss. This isn't really arguable; studies back this up. This is a big source of bugs introduced into source code during maintenance. Still, I'll admit that JavaScript closure syntax looks a little silly with braces on their own lines...aesthetically. :-)
{ "source": [ "https://softwareengineering.stackexchange.com/questions/236995", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/22742/" ] }
237,014
C# has the decimal type, which is used for numbers that need exact representation in base 10. For instance, 0.1 cannot be represented in base 2 (e.g. float and double) and will always be an approximation when stored in variables of these types. I was wondering if the reverse is also possible. Are there numbers that are not representable in base 10 but can be represented in base 2 (in which case I would want to use a float instead of a decimal to handle them)?
Here's the key to your quandary: 10 is the product of 2 and 5. You can represent exactly, in base 10 decimals, any number that is k * 1/2^n * 1/5^m, where k, n and m are integers. Alternatively phrased - if the number n in 1/n contains a factor that is not among the factors of the base, the number will not be representable exactly in a fixed number of digits in the binary/decimal/whatever expansion of that number - it will have a repeating part. For example 1/15 = 0.0666666666... because 3 (15 = 3 * 5) is not a factor of 10. Thus, anything that can be represented in base 2 exactly (k * 1/2^n) can be represented in base 10 exactly. Beyond that, there is the issue of how many digits/bits you are using to represent the number. There are some numbers that can be exactly represented in some base, but it takes more than some given number of digits/bits to do so. In binary, the number 1/10 - which is conveniently 0.1 in decimal - cannot be represented in a fixed number of bits. Instead, the number is 0.00011001100110011... in base 2 (with the 0011 part repeating forever). Let's look at the division 1/1010 in base 2 a bit more closely: working through the binary long division of 1.00000 by 1010, the quotient starts 0.00011 and the remainders then begin to cycle, so the block 0011 repeats forever. This is exactly the same type of thing you get when you try to do the long division for 1/3. 1/10, when factored, is 1/(2^1 * 5^1). For base 10 (or any multiple of 10), this number terminates and is known as a regular number. A decimal expansion that repeats is known as a repeating decimal, and those numbers that go on forever without repeating are irrational numbers. The math behind this delves into Fermat's little theorem... and once you start saying Fermat or theorem, it becomes a Math.SE question. Are there numbers that are not representable in base 10 but can be represented in base 2? The answer is 'no'. So, at this point we should all be clear that every fixed length binary expansion of a rational number can be represented as a fixed length decimal expansion. Let's look more closely at the decimal in C#, which leads us to Decimal floating point in .NET, and given the author, I'll accept that that's how it works. The decimal type has the same components as any other floating point number: a mantissa, an exponent and a sign. As usual, the sign is just a single bit, but there are 96 bits of mantissa and 5 bits of exponent. However, not all exponent combinations are valid. Only values 0-28 work, and they are effectively all negative: the numeric value is sign * mantissa / 10^exponent. This means the maximum and minimum values of the type are +/- (2^96 - 1), and the smallest non-zero number in terms of absolute magnitude is 10^-28. I'll point out right away that because of this implementation there are numbers in the double type that cannot be represented in decimal - those that are out of its range. Double.Epsilon is 4.94065645841247e-324, which can't be represented in a decimal, but can in a double. However, within the range that decimal can represent, it has more bits of precision than the other native types and can represent such values without error. There are some other types floating around. There is a BigInteger in C# which can represent an arbitrarily large integer. There is no exact equivalent to Java's BigDecimal (which can represent numbers with up to 2^32 decimal digits - which is a sizable range).
However, if you poke around a bit you can find hand-rolled implementations. There are some languages that also have a rational data type, which allows you to exactly represent rationals (so that 1/3 is actually 1/3). Specifically for C# and the choice of float or decimal, I'll defer to Jon Skeet from the Decimal floating point in .NET article: Most business applications should probably be using decimal rather than float or double. My rule of thumb is that manmade values such as currency are usually better represented with decimal floating point: the concept of exactly 1.25 dollars is entirely reasonable, for example. For values from the natural world, such as lengths and weights, binary floating point types make more sense. Even though there is a theoretical "exactly 1.25 metres" it's never going to occur in reality: you're certainly never going to be able to measure exact lengths, and they're unlikely to even exist at the atomic level. We're used to there being a certain tolerance involved.
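The 0.1 behaviour described above is easy to demonstrate. Here is a small sketch - written in Java, with BigDecimal standing in as a rough analogue of C#'s decimal - showing how a binary double drifts under repeated addition while a base-10 type does not.

import java.math.BigDecimal;

public class TenthDemo {
    public static void main(String[] args) {
        // 0.1 has no finite base-2 expansion, so the rounding error accumulates.
        double d = 0.0;
        for (int i = 0; i < 10; i++) {
            d += 0.1;
        }
        System.out.println(d); // prints 0.9999999999999999, not 1.0

        // A base-10 type represents 0.1 exactly, so ten of them make exactly 1.0.
        BigDecimal b = BigDecimal.ZERO;
        BigDecimal tenth = new BigDecimal("0.1");
        for (int i = 0; i < 10; i++) {
            b = b.add(tenth);
        }
        System.out.println(b); // prints 1.0
    }
}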
{ "source": [ "https://softwareengineering.stackexchange.com/questions/237014", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/128098/" ] }
237,115
(Note: I used 'error' instead of 'problem' in the title for obvious reasons.. ;) ). I did some basic reading on Traits in Scala. They're similar to Interfaces in Java or C#, but they do allow for default implementation of a method. I was wondering: can't this cause a case of the "diamond problem", which is why many languages avoid multiple inheritance in the first place? If so, how does Scala handle this?
The diamond problem is the inability to decide which implementation of the method to choose. Scala solves this by defining which implementation to choose as part of the language specification (read the part about Scala in this Wikipedia article). Of course, the same ordering definition could also be used for class-based multiple inheritance, so why bother with traits? The reason IMO is constructors. Constructors have several limitations that regular methods don't have - they can only be called once per object, they have to be called for each new object, and a child class's constructor must call its parent's constructor as its first instruction (most languages will do it implicitly for you if you don't need to pass parameters). If B and C inherit A, and D inherits B and C, and both B's and C's constructors call A's constructor, then D's constructor will call A's constructor twice. Defining which implementation to choose, like Scala did with methods, won't work here because both B's and C's constructors must be called. Traits avoid this problem since they don't have constructors.
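For contrast, here is a small Java 8 sketch of the same diamond with default methods (the Walker/Swimmer/Duck names are invented); unlike Scala's linearization, the Java compiler simply refuses to choose and forces the class to resolve the ambiguity explicitly.

interface Walker {
    default String describe() { return "walking"; }
}

interface Swimmer {
    default String describe() { return "swimming"; }
}

// Without an explicit override this class would not compile:
// "class Duck inherits unrelated defaults for describe()".
class Duck implements Walker, Swimmer {
    @Override
    public String describe() {
        // The ambiguity is resolved by hand, naming the chosen parent(s).
        return Walker.super.describe() + " and " + Swimmer.super.describe();
    }
}

public class DiamondDemo {
    public static void main(String[] args) {
        System.out.println(new Duck().describe()); // walking and swimming
    }
}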
{ "source": [ "https://softwareengineering.stackexchange.com/questions/237115", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/121368/" ] }
237,126
Linked lists, as far as I have seen, are largely implemented using object-oriented ideas (having an object that holds some information and the address of the next link). How were linked lists implemented before the object-oriented paradigm came about? Were they only invented(?) once OOP was developed?
Linked lists have nothing to do with OOP; in fact they predate OOP by quite a bit. Linked lists are implemented simply by having a recursive structure, which is in my opinion conceptually easiest to understand in assembly -- you allocate some memory, and the first bytes of that memory serve as a pointer to the next/previous node. In assembly you don't have to worry about the "type" and just think of it as another pointer, so the fact that the structure is recursive is not something you need to think about -- you don't have to think about how something can refer to itself in its definition.
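A node is just a value plus a reference to the next node. The sketch below is written in Java for consistency with the other examples here, but the Node type is used purely as a record - the same shape as a C struct or a hand-laid-out block of memory - with no behaviour attached.

public class LinkedListDemo {
    // Plain data: a value and a reference to the next node - nothing OOP about it.
    static class Node {
        int value;
        Node next;
        Node(int value, Node next) { this.value = value; this.next = next; }
    }

    public static void main(String[] args) {
        // Build the list 1 -> 2 -> 3 by hand.
        Node head = new Node(1, new Node(2, new Node(3, null)));

        // Walk the chain by following the "next" references.
        for (Node n = head; n != null; n = n.next) {
            System.out.println(n.value);
        }
    }
}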
{ "source": [ "https://softwareengineering.stackexchange.com/questions/237126", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/125576/" ] }
237,245
I am running a project where I pay developers to contribute to my semi open source project. My problem is that it is easy to hire developers, but it is very difficult to get anyone to do code reviewing. I have repeatedly attempted to get the senior developers to review the code of the junior developers. I have attempted to pay them more, give them higher status etc. But they just seem to want to code instead of reviewing code. Now I am considering forcing all developers that want to code, to also review code, but I have a feeling that it is a bad idea. My question is, therefore, how can I motivate capable people to review the code of others? How is this done in successful open source projects like Linux, PostgreSQL, LibreOffice etc.
Code review is a solution to a problem. Do you have a problem and will "Code Review" solve it? Are the other people checking in bad code? My guess is they are to some degree, but maybe your other coders don't think it is so bad that it is worth the time/effort to do a review. Ask your senior devs to come up with a solution to limit the amount of bad code checked into the system. They may come up with more effective solutions. Solve the problem of having bad code (Code review is one of the pieces to the puzzle). One way to motivate developers is to stop the adding of new functionality until bugs/bad code are fixed. Most devs like building new stuff. Make the seniors fix bugs and refactor bad code. Maybe they'll learn it is easier to catch this in the code review. Again, identify a problem and give your senior people a chance to come up with a solution. You can't keep paying people who don't do what you tell them, so make sure those things are very important. It helps if everyone agrees there is a problem and contributes to the solution. Eventually, you have to make them accountable. They don't do code review, they take a lesser position or get fired. In the future, hire people willing to do code review or at least let them know this is part of the job so they can make an informed choice. Edit If you are having problems with your jr. dev's code, your seniors may feel the quality is so low, it would be faster to rewrite it than go through a review and correction process. It would be important to stress the long-term benefits of taking the time now to review, in order to give feedback to junior developers to make them better in the future.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/237245", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/5094/" ] }
237,286
What was the reasoning behind not explicitly storing an array's length with an array in C? The way I see it, there are overwhelming reasons to do so but not very many in support of the standard (C89). For instance: Having the length available in a buffer can prevent buffer overruns. A Java-style arr.length is both clear and saves the programmer from having to maintain many ints on the stack if dealing with several arrays. Function parameters become more cogent. But perhaps the most motivating reason, in my opinion, is that usually, no space is saved without keeping the length. I would venture to say that most uses of arrays involve dynamic allocation. True, there may be some cases where people use an array allocated on the stack, but that's just one function call* - the stack can handle 4 or 8 bytes extra. Since the heap manager has to track the free block size used up by the dynamically allocated array anyway, why not make that information usable (and add the additional rule, checked at compile time, that one can't manipulate the length explicitly unless one would like to shoot oneself in the foot)? The only thing I can think of on the other side is that no length tracking may have made compilers simpler, but not that much simpler. *Technically, one could write some kind of recursive function with an array with automatic storage, and in this (very elaborate) case storing the length may indeed result in effectively more space usage.
C arrays do keep track of their length, as the array length is a static property: int xs[42]; /* a 42-element array */ You can't usually query this length, but you don't need to because it's static anyway – just declare a macro XS_LENGTH for the length, and you're done. The more important issue is that C arrays implicitly degrade into pointers, e.g. when passed to a function. This does make some sense, and allows for some nice low-level tricks, but it loses the information about the length of the array. So a better question would be why C was designed with this implicit degradation to pointers. Another matter is that pointers need no storage except the memory address itself. C allows us to cast integers to pointers, pointers to other pointers, and to treat pointers as if they were arrays. While doing this, C is not insane enough to fabricate some array length into existence, but seems to trust in the Spiderman motto: with great power the programmer will hopefully fulfill the great responsibility of keeping track of lengths and overflows.
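The decay the answer describes is easy to demonstrate. Here is a small illustration of mine (not part of the answer above), written so it compiles as both C and C++: while the variable still has array type, its length is a compile-time property you can recover with sizeof, but as soon as it is passed to a function it degrades into a pointer and that information is gone.

#include <stdio.h>

void takes_array(int xs[]) {        /* the parameter is really just int *xs */
    /* here sizeof(xs) is the size of a pointer -- the 42 is gone */
    printf("inside the function: sizeof(xs) = %zu\n", sizeof(xs));
}

int main(void) {
    int xs[42];
    /* while xs still has array type, the length is a static property */
    printf("in main: %zu elements\n", sizeof(xs) / sizeof(xs[0]));
    takes_array(xs);                /* xs degrades to int*, the length is lost */
    return 0;
}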
{ "source": [ "https://softwareengineering.stackexchange.com/questions/237286", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/128300/" ] }
237,513
When working through the book "Implementing Domain Driven Design" by Vaughn Vernon, I have been unable to gain a good grasp on what a bounded context actually is. The book defines a bounded context as "a conceptual boundary where a domain model is applicable. It provides Ubiquitous Language that is spoken by the team and expressed in its carefully designed software model" (the "Guide to this Book" prefacing section). This definition would make it sound as though a bounded context is the model and language of a subdomain, where that subdomain may happen to be the core domain (which seems like it ought to be referred to as a "core subdomain", but that is another discussion...). This still leaves some ambiguity as to what a bounded context provides. Is it a grouping of one or more subdomains? If only one subdomain corresponds to a bounded context, what is the bounded context actually telling us? Chapter 3 of the same book, however, refers to the integration techniques between bounded contexts. This, however, would seem to imply that the bounded contexts are actually software systems or artefacts of some variety. Martin Fowler briefly discusses the idea of a bounded context ( http://martinfowler.com/bliki/BoundedContext.html ), but does not really clarify the issue. At the end of the day, what is a bounded context? Is it a grouping of subdomains? The model and language for a subdomain? The implementation of a subdomain? Without these answers, it seems rather difficult to understand how to decompose a real-life problem space into bounded contexts.
Bounded Contexts and Subdomains exist at different levels. A Subdomain is a portion of the problem space; it's a natural partitioning of the system, often reflecting the structure of the organisation. So logistics and operations might be separated from invoicing & billing. Eric differentiates core, supporting and generic subdomains according to their business relevance in the given scenario. Contexts are portions of the solution space. They're models. It would be a good thing to have them reflect the domain-subdomain partitioning ...but life isn't always that easy. You might have a bloated legacy domain encompassing everything, or more than one context in the same subdomain (e.g. an old legacy app and the replacement app somebody is building). To have a Bounded Context you need to have a model, and an explicit boundary around it. That's exactly what's missing in many data-driven applications that use databases to share data. Another - orthogonal - way to see it may be the following. Ubiquitous Language, the special condition where every term has a single unambiguous definition, doesn't scale. The more you enlarge it, the more ambiguity creeps in. If you want to achieve precise, unambiguous models, you need to make their boundaries explicit, and speak many little Ubiquitous Languages, each one within a single Bounded Context, with a well defined purpose.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/237513", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/71537/" ] }
237,537
I just got back from a conference in Boston called An Event Apart. A really popular theme amongst the speakers was the idea of progressive enhancement - a site's content should go in the HTML, and JavaScript should only be used to enhance behavior. The arguments that the speakers gave for progressive enhancement were very compelling. Not only is it a solid pattern for supporting older browsers and devices on a low-bandwidth network, but HTML also fails much more gracefully than JavaScript (i.e. markup that is not supported is just ignored, while if a browser throws an exception while executing your script, you are hosed). Jeremy Keith gave a particularly insightful talk about this. But what about single page web apps like Backbone and Angular? The whole design behind these frameworks seems to push the developer toward moving content out of the HTML, and into something like a JSON API. I cannot seem to reconcile these two design patterns: progressive enhancement vs. single page web apps. Are there instances when one is better than the other? Or are they not even antagonistic technologies, and am I missing something here with my mental model?
It seems to me that single-page apps draw a line in the sand of progressive enhancement. Where before we might try to work around the fact that implementations and features vary between browsers going back for decades, SPAs assume that there's a certain baseline that we can reasonably agree most visitors of a given site will meet. I don't think the two are at odds. You can still continue to progressively enhance after the SPA starts, like starting with a <video> tag, then layering your own feature-rich player on top of that. Then there are visitors with scripting disabled, but they know what they're getting into. I don't see why developers should bend over backwards for those visitors, aside from a "You need scripting for this site" note. If we allow that, why not also cater to visitors with CSS disabled? How about images disabled? These are core web technologies. They should not expect to have a fully functional web experience when they go picking and choosing pieces. To ensure I don't get away without a car analogy, I should not expect my car to work if I decide I don't like certain features. I could tell civil engineers, "I disabled my headlights, so please make sure to install street lights every 125 feet everywhere I might visit." Without headlights, my car would work a lot of the time, but some places I'll be unable to visit.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/237537", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/114877/" ] }
237,609
Consider the following: if(a == b or c) In most languages, this would need to be written as: if(a == b or a == c) which is slightly cumbersome and repeats information. I know my above sample syntax is slightly clunky, but I am sure there are better ways to convey the idea. Why don't more languages offer it? Are there performance or syntax issues?
The syntax issue is – that it requires syntax. Whatever syntax your language has, people using the language have to learn it. Otherwise they run the risk of seeing code and not knowing what it does. Thus it's generally considered a good thing if a language has a simple syntax that cleanly handles a lot of cases. In your specific example, you are trying to take an infix operator (a function that takes two arguments but is written Argument1 Operator Argument2 ) and trying to extend it to multiple arguments. That doesn't work very cleanly because the whole point of infix operators, to the extent that there is one, is to put the operator right in between the 2 arguments. Extending to (Argument1 Operator Argument2 MagicallyClearSymbol Argument3...) doesn't seem to add a lot of clarity over Equals(Arg1,Arg2,...) . Infix is also typically used to emulate mathematical conventions that people are familiar with, which wouldn't be true of an alternate syntax. There would not be any particular performance issues associated with your idea, other than that the parser would have to deal with a grammar with another production rule or two, which might have a slight effect on the speed of parsing. This might make some difference for an interpreted or JIT compiled language, but probably not a big difference. The bigger problem with the idea is just that making lots of special cases in a language tends to be a bad idea .
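For what it's worth, the Equals(Arg1, Arg2, ...) shape the answer mentions is easy to build with ordinary function syntax in many existing languages, which is part of why a dedicated infix form buys so little. Here is a small C++17 sketch of mine (an illustration, not something from the answer) using a fold expression:

#include <iostream>

// true if value compares equal to any of the candidates
template <typename T, typename... Ts>
bool is_any_of(const T& value, const Ts&... candidates) {
    return ((value == candidates) || ...);   // C++17 fold over ||
}

int main() {
    int a = 3, b = 2, c = 3;
    if (is_any_of(a, b, c))                  // reads almost like "a == b or c"
        std::cout << "matched\n";
}

No new grammar rules are needed; the intent is expressed with a plain (variadic) function call.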
{ "source": [ "https://softwareengineering.stackexchange.com/questions/237609", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/97416/" ] }
237,697
I'm sure designers of languages like Java or C# knew issues related to existence of null references (see Are null references really a bad thing? ). Also implementing an option type isn't really much more complex than null references. Why did they decide to include it anyway? I'm sure lack of null references would encourage (or even force) better quality code (especially better library design) both from language creators and users. Is it simply because of conservatism - "other languages have it, we have to have it too..."?
Disclaimer: Since I don't know any language designers personally, any answer I give you will be speculative. From Tony Hoare himself: I call it my billion-dollar mistake. It was the invention of the null reference in 1965. At that time, I was designing the first comprehensive type system for references in an object oriented language (ALGOL W). My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn't resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years. Emphasis mine. Naturally it didn't seem like a bad idea to him at the time. It's likely that it's been perpetuated in part for that same reason - if it seemed like a good idea to the Turing Award-winning inventor of quicksort, it's not surprising that many people still don't understand why it's evil. It's also likely in part because it's convenient for new languages to be similar to older languages, both for marketing and learning curve reasons. Case in point: "We were after the C++ programmers. We managed to drag a lot of them about halfway to Lisp." -Guy Steele, co-author of the Java spec (Source: http://www.paulgraham.com/icad.html ) And, of course, C++ has null because C has null, and there's no need to go into C's historical impact. C# kind of superseded J++, which was Microsoft's implementation of Java, and it's also superseded C++ as the language of choice for Windows development, so it could've gotten it from either one. EDIT Here's another quote from Hoare worth considering: Programming languages on the whole are very much more complicated than they used to be: object orientation, inheritance, and other features are still not really being thought through from the point of view of a coherent and scientifically well-based discipline or a theory of correctness. My original postulate, which I have been pursuing as a scientist all my life, is that one uses the criteria of correctness as a means of converging on a decent programming language design—one which doesn’t set traps for its users, and ones in which the different components of the program correspond clearly to different components of its specification, so you can reason compositionally about it. [...] The tools, including the compiler, have to be based on some theory of what it means to write a correct program. -Oral history interview by Philip L. Frana, 17 July 2002, Cambridge, England; Charles Babbage Institute, University of Minnesota.[ http://www.cbi.umn.edu/oh/display.phtml?id=343] Again, emphasis mine. Sun/Oracle and Microsoft are companies, and the bottom line of any company is money. The benefits to them of having null may have outweighed the cons, or they may have simply had too tight a deadline to fully consider the issue. As an example of a different language blunder that probably occurred because of deadlines: It's a shame that Cloneable is broken, but it happens. The original Java APIs were done very quickly under a tight deadline to meet a closing market window. The original Java team did an incredible job, but not all of the APIs are perfect. Cloneable is a weak spot, and I think people should be aware of its limitations. -Josh Bloch (Source: http://www.artima.com/intv/bloch13.html )
{ "source": [ "https://softwareengineering.stackexchange.com/questions/237697", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/33996/" ] }
237,738
My coworker likes to see all of our unit tests pass (as would anyone) and he argues that some of the failing tests of a system he wrote are unnecessary "noise", since they all involve the validity of the data and a separate system in our stack is supposed to be responsible for validation. Personally, I think keeping the tests allows you to at least understand/acknowledge some of the failing points even if you consciously choose not to fix them. Is he right in wanting them removed, since they are "noise" compared to the other tested functionality and will probably never be fixed? Just for context, here are some concrete examples. They all involve code paths that are used, but passing non-validated data into the routines causes unexpected behaviors: passing in NaN results in a false positive; passing in a value that overflows a double's significant figures results in false when true should have been returned (rounding should not occur): "139.9999999999999" -> Double.TryParse() -> 139.9999999999999 "139.99999999999999" -> Double.TryParse() -> 140
Let me start off with a few assertions: Tests are used to show you where your code fails. Tests should cover all scenarios for each unit that you want to test. The test results should give a clear overview of what went wrong. Now on to your scenario: you acknowledge that these tests are written in a location where they don't belong. I assume from your story that you have a separate project that tests all the validation, so why don't you just move these tests to that project? The downsides of keeping these tests where they don't belong: Tests are fractured throughout projects even though they have a designated place to keep them together. The overview of which failing tests are relevant and which ones are consciously ignored becomes harder, and you might miss a few new breaking tests. That being said: removing tests just because you want to see them all pass is not the answer. The preferred solution is obviously to fix them, but when this is not possible then you might want to look into classifying them inside a separate project. It should also be noted that duplicate tests aren't good either. If you have two tests in two different projects that test the exact same thing, then you have to do upkeep for two while only getting benefit from one. If you test your validation in the ValidationTest project and you do the very same in your other project, then you should remove those in your other project and keep things contained to where they are supposed to be. Lastly, you might also want to make use of the testing framework's possibilities. For example, MSTest will allow you to add an [Ignore] attribute to tests that should be ignored. This will result in a nice green bar with an additional category of ignored tests, thus keeping them visible.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/237738", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/44414/" ] }
237,749
Eric Lippert made a very interesting point in his discussion of why C# uses a null rather than a Maybe<T> type : Consistency of the type system is important; can we always know that a non-nullable reference is never under any circumstances observed to be invalid? What about in the constructor of an object with a non-nullable field of reference type? What about in the finalizer of such an object, where the object is finalized because the code that was supposed to fill in the reference threw an exception? A type system that lies to you about its guarantees is dangerous. That was a bit of an eye-opener. The concepts involved interest me, and I've done some playing around with compilers and type systems, but I never thought about that scenario. How do languages that have a Maybe type instead of a null handle edge cases such as initialization and error recovery, in which a supposedly guaranteed non-null reference is not, in fact, in a valid state?
That quote points to a problem that occurs if the declaration and assignment of identifiers (here: instance members) are separate from each other. As a quick pseudocode sketch: class Broken { val foo: Foo // where Foo and Bar are non-nullable reference types val bar: Bar Broken() { foo = new Foo() throw new Exception() // this code is never reached, so "bar" is not assigned bar = new Bar() } ~Broken() { foo.cleanup() bar.cleanup() } } The scenario is now that during construction of an instance, an error will be thrown, so construction will be aborted before the instance has been fully constructed. This language offers a destructor method which will run before the memory is deallocated, e.g. to manually free non-memory resources. It must also be run on partially constructed objects, because manually managed resources might already have been allocated before construction was aborted. With nulls, the destructor could test whether a variable had been assigned like if (foo != null) foo.cleanup() . Without nulls, the object is now in an undefined state – what is the value of bar ? However, this problem exists due to the combination of three aspects: The absence of default values like null or guaranteed initialization for the member variables. The difference between declaration and assignment. Forcing variables to be assigned immediately (e.g. with a let statement as seen in functional languages) is an easy way to force guaranteed initialization – but restricts the language in other ways. The specific flavor of destructors as a method that gets called by the language runtime. It is easy to choose another design that does not exhibit these problems, for example by always combining declaration with assignment and having the language offer multiple finalizer blocks instead of a single finalization method: // the body of the class *is* the constructor class Working() { val foo: Foo = new Foo() FINALIZE { foo.cleanup() } // block is registered to run when object is destroyed throw new Exception() // the below code is never reached, so // 1. the "bar" variable never enters the scope // 2. the second finalizer block is never registered. val bar: Bar = new Bar() FINALIZE { bar.cleanup() } // block is registered to run when object is destroyed } So the issue is not with the absence of null itself, but with the combination of a set of other features with the absence of null. The interesting question is now why C# chose one design but not the other. Here, the context of the quote lists many other arguments for a null in the C# language, which can be mostly summarized as “familiarity and compatibility” – and those are good reasons.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/237749", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/935/" ] }
237,955
We have a university programming course and fellow students are asking some programming questions in our Facebook group. I'm a little hesitant to share all of my programs, especially if it's something cool for one of the assignments, since these are looked at by the TAs and they might notice if somebody has the same program and start asking questions about where it came from. Still, sometimes I'd like to share my code to help others, but I don't want people just grabbing my work. (Clarification: we are allowed to collaborate on the tasks.) This is of course a thin line. While I want to help some people, I'm concerned that they may not have the academic honesty to rewrite the code on their own. Most of my fellow students are not very advanced in their skills, so I'd get away with, say, hiding my name in a Base-64 encoded string crafted into a discreet place. Still, it may be too obvious for a random string to be sitting around. What options exist to hide my name in a program without it looking suspicious? I've seen over at CodeGolf that they have made ASCII art turn into other things when evaluated. Are there similar strategies I could utilize? The ideal solution would be something that fits in discreetly but in reality has a function to prove that I coded it from the beginning. Clarification: (Sorry, should have said this earlier.) We're allowed to collaborate but have to explain our programs to the TAs to get the points. It's just for satisfaction to hide some Easter eggs in others' code if it leaks out, especially since it may be tempting to exchange programs to check that the answers to problems they generate are equivalent, etc., or to see how others solve the problem.
Use your signature not in your code, but in a publicly accessible development log. Publish your code at a public Github repo. Include a Docblock with your name in the "Author" field. This way there's a public record of you being the actual author of the program. This may not qualify as "hiding", but in my opinion, it does. If a student decides to copy your code, they will think that they only have to swap their signature for yours. Imagine their surprise when you present public evidence of their wrongdoing. To make matters worse for them, it's accessible online. You can even use an online tool, such as Diffchecker , to demonstrate which parts of your code were stolen! EDIT: As pointed out in the comments, make sure your school allows to share your work this way! OP indicated that their institution is OK with this, but yours might not be!
{ "source": [ "https://softwareengineering.stackexchange.com/questions/237955", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/127868/" ] }
238,033
I don't know what scalar means exactly, but I'm trying to see if I'm thinking about it correctly. Does scalar relate to arbitrariness, where the type of the data could be any type, or where a system is not able to know what the data is in advance?
The term "scalar" comes from linear algebra , where it is used to differentiate a single number from a vector or matrix. The meaning in computing is similar. It distinguishes a single value like an integer or float from a data structure like an array. This distinction is very prominent in Perl, where the $ sigil (which resembles an 's') is used to denote a scalar variable and an @ sigil (which resembles an 'a') denotes an array. It doesn't have anything to do with the type of the element itself. It could be a number, character, string, or object. What matters to be called a scalar is that there is one of them.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/238033", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/57479/" ] }
238,162
I know I have quite frequently heard that C typically has a performance advantage over C++. I didn't really think anything else of it until I realized that MSVC doesn't even seem to support the newest standard of C, but the newest it supports is C99 (as far as I know). I was planning on writing a library with some code to render in OpenGL so I could reuse it. I was planning to write the library in C since any performance increase is welcome when it comes to graphics. But would it really be worth it? The code using the library would likely be written in C++ and I prefer to code in C++ generally. However, if it would produce even a small difference in performance, I would likely go with C. It may also be noted that this library would be something that I would make to work across Windows/OS X/Linux, and I would likely compile everything natively (MSVC for Windows, Clang or GCC for OS X, and GCC for Linux...or possibly Intel's compilers for everything). I've looked around and I've found some benchmarks and such, but everything I've seen has dealt with GCC rather than MSVC and Clang. Also, the benchmarks don't mention the standards of the languages used. Anyone have any thoughts on this? EDIT: I just wanted to share my viewpoint on this question after a couple more years of experience. I ended up writing the project I was asking this question for in C++. I started another project around the same time in C as we were looking to get out any small amount of performance we could and needed the project to be linkable in C. A couple months ago, I reached the point where I really needed maps and advanced string manipulation. I knew of the abilities for this in the C++ standard library and eventually came to the conclusion that those structures in the standard library would likely outperform and be more stable than maps and strings I could implement in C in a reasonable amount of time. The requirement to be linkable in C was easily satisfied by writing a C interface to the C++ code, which was done quickly with opaque types. Rewriting the library in C++ seemed to go much faster than when writing it in C and was less prone to bugs, especially memory leaks. I was also able to use the standard library's threading facilities, which has been much easier than using platform-specific implementations. In the end, I believe writing the library in C++ led to great benefits with possibly a small performance cost. I haven't benchmarked the C++ version yet, but I believe that it may even be possible that I have gained some performance by using standard library data structures rather than ones I wrote.
I guess people often claim that C is faster than C++ because it's easier to reason about performance in C. C++ is not inherently slower or faster, but certain C++ code might obscure hidden performance penalties. For example, there can be copies and implicit conversions which are not immediately visible when looking at some piece of C++ code. Let's take the following statement: foo->doSomething(a + 5, *c); Let's further assume that doSomething has the following signature: void doSomething(int a, long b); Now, let's try to analyse this particular statement's possible performance impact. In C, the implications are quite clear. foo can only be a pointer to a struct, and doSomething must be a pointer to a function. *c dereferences a long, and a + 5 is integer addition. The only uncertainty comes from the type of a : if it's not an int, there will be some conversion, but apart from that, it's easy to quantify the performance impact of this single statement. Now let's switch to C++. The same statement can now have very different performance characteristics: doSomething could be a non-virtual member function (cheap), a virtual member function (a little more expensive), a std::function , a lambda... etc. What's worse, foo could be a class type overloading operator-> with some operation of unknown complexity. So in order to quantify the cost of calling doSomething , it's now necessary to know the exact nature of foo and doSomething . a could be an integer, or a reference to an integer (additional indirection), or a class type which implements operator+(int) . The operator could even return another class type which is implicitly convertible to int . Again, the performance cost is not apparent from the statement alone. c could be a class type implementing operator*() . It could also be a reference to a long* etc. You get the picture. Due to C++'s language features, it is much harder to quantify a single statement's performance costs than it is in C. Now in addition, abstractions like std::vector , std::string are commonly used in C++, which have performance characteristics of their own, and hide dynamic memory allocations (also see @Ian's answer). So, the bottom line is: In general, there's no difference in the possible performance achievable using either C or C++. But for really performance-critical code, people often prefer using C because there are far fewer possible hidden performance penalties.
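To make one of those hidden costs concrete, here is a small example of my own (not taken from the answer): two calls that look identical at the call site but do different amounts of work, something you can only see by reading the declarations.

#include <string>

static void log_by_value(std::string message) { (void)message; }     // takes a copy
static void log_by_pointer(const char* message) { (void)message; }   // no conversion

int main() {
    // The first call constructs a temporary std::string from the literal: a copy
    // and, for a string this long, a heap allocation on common implementations.
    // The second call just passes the pointer through. The call sites read the same.
    log_by_value("the connection to the database was lost unexpectedly");
    log_by_pointer("the connection to the database was lost unexpectedly");
}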
{ "source": [ "https://softwareengineering.stackexchange.com/questions/238162", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/129364/" ] }
238,176
If a Square is a type of Rectangle, then why can't a Square inherit from a Rectangle? Or why is it a bad design? I have heard people say: If you made Square derive from Rectangle, then a Square should be usable anywhere you expect a rectangle. What is the problem here? And why would Square be usable anywhere you expect a rectangle? It would only be usable if we create the Square object, and if we override the SetWidth and SetHeight methods for Square then why would there be any issue? If you had SetWidth and SetHeight methods on your Rectangle base class and if your Rectangle reference pointed to a Square, then SetWidth and SetHeight don't make sense because setting one would change the other to match it. In this case Square fails the Liskov Substitution Test with Rectangle and the abstraction of having Square inherit from Rectangle is a bad one. Can someone explain the above arguments? Again, if we override the SetWidth and SetHeight methods in Square, wouldn't it resolve this issue? I have also heard/read: The real issue is that we are not modeling rectangles, but rather "reshapable rectangles", i.e., rectangles whose width or height can be modified after creation (and we still consider it to be the same object). If we look at the rectangle class in this way, it is clear that a square is not a "reshapable rectangle", because a square cannot be reshaped and still be a square (in general). Mathematically, we don't see the problem because mutability doesn't even make sense in a mathematical context. Here I believe "re-sizeable" is the correct term. Rectangles are "re-sizeable" and so are squares. Am I missing something in the above argument? A square can be re-sized like any rectangle.
Basically we want things to behave sensibly. Consider the following problem: I am given a group of rectangles and I want to increase their area by 10%. So what I do is I set the length of the rectangle to 1.1 times what it was before. public void IncreaseRectangleSizeByTenPercent(IEnumerable<Rectangle> rectangles) { foreach(var rectangle in rectangles) { rectangle.Length = rectangle.Length * 1.1; } } Now in this case, all of my rectangles now have their length increased by 10%, which will increase their area by 10%. Unfortunately, someone has actually passed me a mixture of squares and rectangles, and when the length of the rectangle was changed, so was the width. My unit tests pass because I wrote all my unit tests to use a collection of rectangles. I now have introduced a subtle bug into my application which can go unnoticed for months. Worse still, Jim from accounting sees my method and writes some other code which uses the fact that if he passes squares into my method, that he gets a very nice 21% increase in size. Jim is happy and nobody is any wiser. Jim gets promoted for excellent work to a different division. Alfred joins the company as a junior. In his first bug report, Jill from Advertising has reported that passing squares to this method results in a 21% increase and wants the bug fixed. Alfred sees that Squares and Rectangles are used everywhere in the code and realises that breaking the inheritance chain is impossible. He also does not have access to Accounting's source code. So Alfred fixes the bug like this: public void IncreaseRectangleSizeByTenPercent(IEnumerable<Rectangle> rectangles) { foreach(var rectangle in rectangles) { if (typeof(rectangle) == Rectangle) { rectangle.Length = rectangle.Length * 1.1; } if (typeof(rectangle) == Square) { rectangle.Length = rectangle.Length * 1.04880884817; } } } Alfred is happy with his uber hacking skills and Jill signs off that the bug is fixed. Next month nobody gets paid because Accounting was dependent on being able to pass squares to the IncreaseRectangleSizeByTenPercent method and getting an increase in area of 21%. The entire company goes into "priority 1 bugfix" mode to track down the source of the issue. They trace the problem to Alfred's fix. They know that they have to keep both Accounting and Advertising happy. So they fix the problem by identifying the user with the method call like so: public void IncreaseRectangleSizeByTenPercent(IEnumerable<Rectangle> rectangles) { IncreaseRectangleSizeByTenPercent( rectangles, new User() { Department = Department.Accounting }); } public void IncreaseRectangleSizeByTenPercent(IEnumerable<Rectangle> rectangles, User user) { foreach(var rectangle in rectangles) { if (typeof(rectangle) == Rectangle || user.Department == Department.Accounting) { rectangle.Length = rectangle.Length * 1.1; } else if (typeof(rectangle) == Square) { rectangle.Length = rectangle.Length * 1.04880884817; } } } And so on and so forth. This anecdote is based on real-world situations that face programmers daily. Violations of the Liskov Substitution principle can introduce very subtle bugs that only get picked up years after they're written, by which time fixing the violation will break a bunch of things and not fixing it will anger your biggest client. There are two realistic ways of fixing this problem. The first way is to make Rectangle immutable. If the user of Rectangle cannot change the Length and Width properties, this problem goes away. 
If you want a Rectangle with a different length and width, you create a new one. Squares can inherit from rectangles happily. The second way is to break the inheritance chain between squares and rectangles. If a square is defined as having a single SideLength property and rectangles have a Length and Width property and there is no inheritance, it's impossible to accidentally break things by expecting a rectangle and getting a square. In C# terms, you could seal your rectangle class, which ensures that all Rectangles you ever get are actually Rectangles. In this case, I like the "immutable objects" way of fixing the problem. The identity of a rectangle is its length and width. It makes sense that when you want to change the identity of an object, what you really want is a new object. If you lose an old customer and gain a new customer, you don't change the Customer.Id field from the old customer to the new one, you create a new Customer . Violations of the Liskov Substitution principle are common in the real world, mostly because a lot of code out there is written by people who are incompetent/ under time pressure/ don't care/ make mistakes. It can and does lead to some very nasty problems. In most cases, you want to favour composition over inheritance instead.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/238176", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/125730/" ] }
238,186
Whether or not a value is null could be checked implicitly like this: assertThat(value).isEqualTo("something"); Or it could be checked explicitly: assertThat(value).isNotNull(); assertThat(value).isEqualTo("something"); The latter makes the expectations a bit more clear, but it blows up the test code. Is there a good reason to prefer the second approach over the first?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/238186", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/63946/" ] }
238,295
My understanding is that a Daily Scrum meeting should be very quick, hosted in a friendly way, and that it requires all the team members to be present, because its objective is to have everyone up to date with what everybody else is doing. I like Daily Scrum meetings that are held like that. In my latest project our Daily Scrums are more like a status update meeting, although the position is that we are holding Scrums and practicing proper Agile. We are a distributed team, in 2 different countries, and the people who are in the same country are not in the same office. As a consequence we have virtual Scrums. The problem is that our meetings always start on time; many people call in before the actual start time, so the meetings actually start at the very first second, without any tolerance for small delays. For example, the last time we were on the phone and the person coordinating the meeting checked if everyone was on, and we said one of our team members was not on yet but he was calling in. And I was told to start sharing without waiting for my team member. Also, everyone has a lot of meetings, and sometimes they are back to back with the Scrum meeting, so it is understandable if someone arrives during the first or second minute of the meeting. Is that normal for teams practicing Daily Scrums? It is the first time this has happened to me. I cannot find any literature directly about it. Although the presence of all team members is stressed, it is also stressed that the meetings should always start at the same time. But I imagine there can be a small tolerance for delays. I even read on a blog someone suggesting that the Scrum Master can impose penalties if someone arrives "5 seconds" late. I thought Scrums were supposed to be friendly, and having a penalty like that seems counterproductive. What is the recommended approach in a situation like this?
Like with any agile practice, scrum teams can decide this for themselves. If it bothers you, you should bring it up in your retrospective and try to come to a solution that everyone is happy with. Perhaps other team members feel the same way, but think that's "just how scrum is done." That being said, in my scrum meetings I start on the second unless three or more people are missing. For a meeting that everyone is required to attend every day, I feel it is disrespectful of everyone's time to do otherwise. When I'm the one that shows up late, my team starts without me. If we have time at the end, we go back to the tasks of people who came late. I have been less strict about punctuality in the past, and what happened is people who showed up on time got tired of their time being wasted, so they started trying to guess when the meeting would actually start, and show up then instead, which had a snowball effect. For a daily meeting, it's not the end of the world if someone occasionally misses part of it. Hopefully it isn't the only communication you're doing throughout the day.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/238295", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/92720/" ] }
238,325
As I am currently struggling with learning WCF for a project at work, for the past several days I have been looking at online tutorials and examples on the best way to make a WCF client; and today, when I told my boss I was having a hard time finding good tutorials because some of them would not go into details with their examples (for instance showing a piece of code that works but not explaining why exactly, or making an example with just a few lines instead of at a larger scale), he told me that I should not try to understand them completely, because there are times when code will just do what it does and I should leave it at that (he gave me the example that when doing calculus, for instance, we use formulas without knowing how they were conceived or how they work; we just know that they do, and we use them). And this bothered me because I have always been told that it's really important to understand your code, and that just copying and pasting without knowing HOW it works is practically a sin; which is why I always take my time in understanding something before moving forward with it. And it's not like I'm trying to understand how a particular class works at an assembly language level; I just want to know why a set of instructions does the trick, and why another one doesn't, or why both do and under what circumstances. But my boss tells me that I will end up wasting time obsessing over these little details, and I should just skip it. So my question is, is he right? Is it okay to understand your code only to a certain extent and keep going, and have I only been obsessing over little things that don't matter?
The sole purpose of software abstractions is to hide functional details. Were it not for those abstractions, it would not be possible to progress beyond a certain point in computing, because systems would simply collapse under the weight of their own complexity. Human brains can only comprehend so much information at once. Consider what happens when you write a method. When you write a method, what you are doing is hiding some bit of software functionality behind a method call. Once that method is written and proven to work by writing unit tests about that method, you no longer have to think about what's inside that method unless you need to change something about its implementation. Large software systems are built upon many layers of these abstractions. You have a microcode layer in the processor, machine code, address and data buses, language compilers, object-orientation, data structures, domain-specific languages, and so on. You have libraries built on top of other libraries, which in turn are built on top of an operating system. You don't fully understand how any of that stuff works either, but you're still able to successfully write computer programs that do something useful. That said... You can't just copy/paste code without understanding how it works . The person who tries to make his program work by copy pasting code that he doesn't understand is setting himself up to fail. You need to understand the code you write. That doesn't mean that you have to know how WCF works internally, but you do need to know what the purpose of WCF is, how to write code that properly interfaces it, and how the code you write functions in concert with it. Many copy/paste programmers don't even have a decent understanding of the programming language they are copy/pasting, let alone in-depth knowledge of the libraries they are using. So you need to have some decent skills, and you need to understand the code you write (or paste) that calls WCF. But you don't need to have in-depth knowledge about how WCF works internally. Similarly, programmers who think they can stitch together a program by randomly wiring up software patterns are missing the point. We call those folks Cargo Cult programmers .
{ "source": [ "https://softwareengineering.stackexchange.com/questions/238325", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/129513/" ] }
238,413
Our team consists of junior and senior developers. The problem I am facing is with the code written by the seniors. They are not following MINIMUM coding standards. I am also still learning, but I wouldn't declare inline styles or give improper names to variables and controls. I tried to mention it indirectly, but they were like "we can take it up later, and since it's agile, we should move on." But my problem is that when I start working on my task, it makes me feel bad and I focus more on refactoring rather than on my task. I am not sure how to deal with this situation. PS: I am sure this question has been asked many times, but any advice is much appreciated.
A couple of things. Don't assume that your seniors don't know what they're doing. They may have very good reasons why they made the decisions that they did; ask them why (in a non-argumentative way). Code that is already written, backed by unit tests, and declared functionally complete by your superiors can be safely ignored. That's what you should do with it: ignore it, until it becomes necessary to further maintain it, for whatever reason. Based on your cursory description, it sounds like your team is under significant time pressure. Compromises are made under those conditions; that's just the way it is. Choose your battles wisely. Only fight the ones that need fighting.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/238413", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/82254/" ] }
238,729
Say I am using a simple recursive algorithm for Fibonacci, which would be expanded as: fib(5) -> fib(4) + fib(3), where fib(4) -> fib(3) + fib(2) and fib(3) -> fib(2) + fib(1), and so on. Now, the execution will still be sequential. Instead of that, how would I code this so that fib(4) and fib(3) are calculated by spawning 2 separate threads, then in fib(4), 2 threads are spawned for fib(3) and fib(2), and the same when fib(3) is split into fib(2) and fib(1)? (I'm aware that dynamic programming would be a much better approach for Fibonacci; I just used it as an easy example here.) (If someone could share a code sample in C/C++/C# as well, that would be ideal.)
This is possible but a really bad idea; work out the number of threads you will spawn when calculating fib(16), say, and then multiply that by the cost of a thread. Threads are insanely expensive; doing this for the task you describe is like hiring a different typist to type each character of a novel. That said, recursive algorithms are often good candidates for parallelization, particularly if they split the job into two smaller jobs that can be performed independently. The trick is to know when to stop parallelizing. In general, you want to parallelize only "embarrassingly parallel" tasks. That is, tasks which are computationally expensive and can be computed independently . Many people forget about the first part. Threads are so expensive that it only makes sense to make one when you have a huge amount of work for them to do, and moreover, that you can devote an entire processor to the thread . If you have 8 processors then making 80 threads is going to force them to share the processor, slowing each one of them down tremendously. You do better to make only 8 threads and let each have 100% access to the processor when you have an embarrassingly parallel task to perform. Libraries like the Task Parallel Library in .NET are designed to automatically figure out how much parallelism is efficient; you might consider researching its design if this subject interests you.
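Since the question asked for a sample, here is a minimal C++ sketch of my own (an illustration of the idea in the answer, not code from it): parallelize only the top few levels of the recursion with std::async, then fall back to plain recursion once the subproblems are too small to justify the cost of another task.

#include <future>
#include <iostream>

long long fib_seq(int n) {                 // ordinary sequential recursion
    return n < 2 ? n : fib_seq(n - 1) + fib_seq(n - 2);
}

long long fib_par(int n, int depth) {
    if (n < 2) return n;
    if (depth <= 0) return fib_seq(n);     // stop spawning: the work is too small
    // compute fib(n-1) in another task while this thread handles fib(n-2)
    auto left = std::async(std::launch::async, fib_par, n - 1, depth - 1);
    long long right = fib_par(n - 2, depth - 1);
    return left.get() + right;
}

int main() {
    // a depth of 3 spawns only a handful of tasks, roughly matching a small core count
    std::cout << fib_par(40, 3) << '\n';
}

The cutoff depth is doing the real work here: it keeps the number of threads near the number of processors instead of letting it explode with the recursion.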
{ "source": [ "https://softwareengineering.stackexchange.com/questions/238729", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/8147/" ] }
238,782
This question is about whether a nested class in Java should be a static nested class or an inner (non-static) nested class. I searched around here and on Stack Overflow, but couldn't really find any questions regarding the design implications of this decision. The questions I found are asking about the difference between static and inner nested classes, which is clear to me. However, I have not yet found a convincing reason to ever use an inner (non-static) nested class in Java -- with the exception of anonymous classes, which I do not consider for this question. Here's my understanding of the effect of using static nested classes: Less coupling: We generally get less coupling, as the class cannot directly access its outer class's attributes. Less coupling generally means better code quality, easier testing, refactoring, etc. Single class: The class loader need not take care of a new class each time we create an object of the outer class. We just get new objects for the same class over and over. For an inner class, I generally find that people consider access to the outer class's attributes as a pro. I beg to differ in this regard from a design point of view, as this direct access means we have high coupling, and if we ever want to extract the nested class into a separate top-level class, we can only do so after essentially turning it into a static nested class. So my question comes down to this: Am I wrong in assuming that the attribute access available to non-static inner classes leads to high coupling, and hence to lower code quality, and in inferring from this that (non-anonymous) nested classes should generally be static? Or in other words: Is there a convincing reason why one would prefer a nested inner class?
Joshua Bloch in Item 22 of his book "Effective Java Second Edition" tells when to use which kind of nested class and why. There are some quotes below: One common use of a static member class is as a public helper class, useful only in conjunction with its outer class. For example, consider an enum describing the operations supported by a calculator. The Operation enum should be a public static member class of the Calculator class. Clients of Calculator could then refer to operations using names like Calculator.Operation.PLUS and Calculator.Operation.MINUS . One common use of a nonstatic member class is to define an Adapter that allows an instance of the outer class to be viewed as an instance of some unrelated class. For example, implementations of the Map interface typically use nonstatic member classes to implement their collection views , which are returned by Map ’s keySet , entrySet , and values methods. Similarly, implementations of the collection interfaces, such as Set and List , typically use nonstatic member classes to implement their iterators: // Typical use of a nonstatic member class public class MySet<E> extends AbstractSet<E> { ... // Bulk of the class omitted public Iterator<E> iterator() { return new MyIterator(); } private class MyIterator implements Iterator<E> { ... } } If you declare a member class that does not require access to an enclosing instance, always put the static modifier in its declaration, making it a static rather than a nonstatic member class.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/238782", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/16375/" ] }
238,786
Will the code int a = ((1 + 2) + 3); // Easy to read run slower than int a = 1 + 2 + 3; // (Barely) Not quite so easy to read or are modern compilers clever enough to remove/optimize "useless" parentheses? It may seem like a very tiny optimization concern, but choosing C++ over C#/Java/... is all about optimizations (IMHO).
The compiler does not actually ever insert or remove parentheses; it just creates a parse tree (in which no parentheses are present) corresponding to your expression, and in doing so it must respect the parentheses you wrote. If you fully parenthesise your expression then it will also be immediately clear to the human reader what that parse tree is; if you go to the extreme of putting in blatantly redundant parentheses as in int a = (((0))); then you will be causing some useless stress on the neurons of the reader while also wasting some cycles in the parser, without however changing the resulting parse tree (and therefore the generated code) the slightest bit. If you don't write any parentheses, then the parser must still do its job of creating a parse tree, and the rules for operator precedence and associativity tell it exactly which parse tree it must construct. You might consider those rules as telling the compiler which (implicit) parentheses it should insert into your code, although the parser doesn't actually ever deal with parentheses in this case: it just has been constructed to produce the same parse tree as if parentheses were present in certain places. If you place parentheses in exactly those places, as in int a = (1+2)+3; (associativity of + is to the left) then the parser will arrive at the same result by a slightly different route. If you put in different parentheses as in int a = 1+(2+3); then you are forcing a different parse tree, which will possibly cause different code to be generated (though maybe not, as the compiler may apply transformations after building the parse tree, as long as the effect of executing the resulting code would never be different for it). Supposing there is a difference in the resulting code, nothing in general can be said as to which is more efficient; the most important point is of course that most of the time the parse trees do not give mathematically equivalent expressions, so comparing their execution speed is beside the point: one should just write the expression that gives the proper result. So the upshot is: use parentheses as needed for correctness, and as desired for readability; if redundant they have no effect at all on execution speed (and a negligible effect on compile time). And none of this has anything to do with optimisation, which comes along well after the parse tree has been built, so it cannot know how the parse tree was constructed. This applies without change from the oldest and stupidest of compilers to the smartest and most modern ones. Only in an interpreted language (where "compile time" and "execution time" are coincident) could there possibly be a penalty for redundant parentheses, but even then I think most such languages are organised so that at least the parsing phase is done only once for each statement (storing some pre-parsed form of it for execution).
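As a quick sanity check you can feed something like the following (a hypothetical snippet to try on your own compiler) through it and inspect the output; the first two functions are the same parse tree and compile to identical code, while the third is a genuinely different expression:

int with_parens(int x, int y, int z)    { return ((x + y) + z); }  // same tree as below
int without_parens(int x, int y, int z) { return x + y + z; }      // + is left-associative
int regrouped(int x, int y, int z)      { return x + (y + z); }    // different parse tree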
{ "source": [ "https://softwareengineering.stackexchange.com/questions/238786", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/124438/" ] }
239,036
I was watching this video on the maximum and minimum values of signed integers. Take an example of a positive signed value - 0000 0001 The first bit denotes that the number is positive and the last 7 bits are the number itself. So it is easily interpreted as +1. Now take an example of a negative signed value - 1000 0000 which comes out to be -128. Okay, the computer can understand that it is a negative value because of the first bit, but how the hell does it understand that 000 0000 means -128? In general, how are negative signed values stored/interpreted in a computer?
The C standard doesn't mandate any particular way of representing negative signed numbers. In most implementations that you are likely to encounter, negative signed integers are stored in what is called two's complement . The other major way of storing negative signed numbers is called one's complement . The two's complement of an N-bit number x is defined as 2^N - x . For example, the two's complement of 8-bit 1 is 2^8 - 1 , or 1111 1111 . The two's complement of 8-bit 8 is 2^8 - 8 , which in binary is 1111 1000 . This can also be calculated by flipping the bits of x and adding one. For example:

 1      = 0000 0001
~1      = 1111 1110 (1's complement)
~1 + 1  = 1111 1111 (2's complement)
-1      = 1111 1111

 21     = 0001 0101
~21     = 1110 1010
~21 + 1 = 1110 1011
-21     = 1110 1011

The one's complement of an N-bit number x is defined as x with all its bits flipped, basically.

 1  = 0000 0001
-1  = 1111 1110

 21 = 0001 0101
-21 = 1110 1010

Two's complement has several advantages over one's complement. For example, it doesn't have the concept of 'negative zero', which for good reason is confusing to many people. Addition, multiplication and subtraction work the same with signed integers implemented with two's complement as they do with unsigned integers.
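A small C sketch of the flip-and-add-one rule (the final cast back to int8_t is implementation-defined by the standard, but prints -21 on the usual two's complement machines):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t bits = 21;                      /* 0001 0101 */
    uint8_t ones = (uint8_t)~bits;          /* 1110 1010  one's complement */
    uint8_t twos = (uint8_t)(ones + 1u);    /* 1110 1011  two's complement */
    printf("%d\n", (int)(int8_t)twos);      /* -21 on two's complement hardware */
    return 0;
}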
{ "source": [ "https://softwareengineering.stackexchange.com/questions/239036", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/130296/" ] }
239,158
So it is generally accepted that top tier programmers can produce an order of magnitude more/better code than their more average peers. It's also generally accepted that the rate of errors made in code is relatively constant for programmers. Instead, it tends to be impacted by the processes used when writing the code and after the code is written . (As I understand it) Humans tend to make mistakes at a fairly constant rate - better programmers just notice more of them and are quicker to fix them. Note that both of above assertions come from Code Complete by Steve McConnell - so it's not a matter of differing perspectives. So I've started to see this recently in my code. I can hammer out about 4-5x the amount of code as many of my peers (measured by story points estimated by the team), with higher quality (based on performance metrics and number of changes made after check-in). But I still make mistakes. Between better unit tests, a better understanding of what the code is doing, and a better eye for issues when doing code reviews I'm not producing 4-5x the number of bugs. But I'm still producing about twice as many bugs found by QA as other developers on my team. As you might imagine, this causes some problems with non-technical folks doing metric measurements (read: my boss). I've tried to point out that I'm producing bugs at half the rate of my peers (and fix twice as many), but it's a hard sell when there's graphs saying I produce twice as many bugs. So, how to deal with the fact that increased productivity will lead to an increased number of bugs?
I think you're mixing your concerns. And there's nothing on your side that you need to change. Productivity is a hint at how quickly a project will be completed. Project managers and everybody else like to know when the project will deliver. Higher or faster productivity means we'll see the project deliver sooner. Rate of bugs isn't tied to productivity but rather to the size of the project. For example, you may have N bugs per Y lines of code. There is nothing within that metric that says (or cares!) how quickly those lines of code are written. To tie that together, if you have higher productivity, yes, you'll "see" the bugs being written more quickly. But you were going to have that number of bugs anyway since it's tied to the size of the project. If anything, higher productivity means you'll have more time at the end of the project to hunt those bugs down or the developer will be faster in finding the bugs they created. 1 To address the more personal aspects of your question. If your boss is looking strictly at the number of bugs you produce as opposed to the rate of bugs you produce, an educational session is in order. Number of bugs created is meaningless without a backing rate. To take that example to the extreme, please tell your boss I want double your salary. Why? I have created absolutely no bugs on your project and I am therefore a much superior programmer than you. What? He's going to have a problem that I haven't produced a single line of code to benefit your project? Ah. Now we have understanding of why rate is important. It sounds like your team has the metrics to evaluate bugs per story point. If nothing else, it's better than being measured by raw number of bugs created. Your best developers should be creating more bugs because they're writing more code. Have your boss throw out that graph or at least throw another series behind it showing how many story points (or whatever business value you measure) alongside the number of bugs. That graph will tell a more accurate story. 1 This particular comment has attracted far more attention than it was intended to. So let's be a bit pedantic (surprise, I know) and reset our focus on this question. The root of this question is about a manager looking at the wrong thing(s). They are looking at raw bug totals when they should be looking at generation rate versus number of tasks completed. Let's not obsess over measuring against "lines of code" or story points or complexity or whatever. That's not the question at hand and those worries distract us from the more important question. As laid out in the links by the OP, you can predict a certain number of bugs in a project purely by the size of the project alone. Yes, you can reduce this number of bugs through different development and testing techniques. Again, that wasn't the point of this question. To understand this question, we need to accept that for a given size project and development methodology, we'll see a given number of bugs once development is "complete." So let's finally get back to this comment that a few completely misunderstood. If you assign comparably sized tasks to two developers, the developer with a higher rate of productivity will complete their task before the other. The more productive developer will therefore have more time available at the end of the development window. That "extra time" (as compared to the other developer) can be used for other tasks such as working on the defects that will percolate through a standard development process. 
We have to take the OP at their word that they are more productive than other developers. Nothing within those claims implies that the OP or other more productive developers are being slipshod in their work. Pointing out that there would be less bugs if they spent more time on the feature or suggesting that debugging isn't part of this development time misses what has been asked. Some developers are faster than others and produce comparable or better quality work. Again, see the links that the OP lays out in their question.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/239158", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/51654/" ] }
240,454
C++11 features the new auto type declaration, allowing you to work with an object without ever knowing its actual type. I use only strongly typed languages - C++, Delphi (Object Pascal), Go, etc and I feel uncomfortable (guilty?) using auto , (or for example the short variable declaration - x:=1 - in Go.) It feels like a hack to me - I use strongly typed languages because they ensure that you know what type you're using. (With the exception of the abuse of untyped pointers.) Although certainly I appreciate the benefits: Proper type names involving iterators, templates, smart pointers etc can get very lengthy and a bit difficult to determine and declare explicitly, or to parse when reading. Granted, once you know the proper type name, you can "typedef it", but sometimes getting it right the first time is time consuming and not always so easy. Or, imagine you that inherit a large, complex project that you need to modify, and every variable possible in that project is declared using auto - you're going to have to do a lot more work to understand that code-base than if everything was declared using explicit types. So, what are some guidelines on when to use auto and when to sweat it out with full and proper type names? I am currently reading Stroustrup's A Tour of C++ and he himself there in Chapter 1 advocates using auto in situations when you know auto will "get it right": We use auto where we don’t have a specific reason to mention the type explicitly. “Specific reasons” include: • The definition is in a large scope where we want to make the type clearly visible to readers of our code. • We want to be explicit about a variable’s range or precision (e.g., double rather than float). In the Advice section of Chapter 1 there he also warns: -Prefer the {}-initializer syntax for declarations with a named type; -Prefer the = syntax for the initialization in declarations using auto; This, because default initialization {} could result in an incorrect type initialization. Still, I feel a bit uncomfortable using auto . Can anyone perhaps give me some additional guidelines about the use of auto , and/or debunk my impression that auto is a hack of sorts and really should be avoided in favor of determining the proper type and then using a typedef ?
No, you should not feel uncomfortable using auto . Just use it in situations where the type is obvious, or where no one is going to care about it. A classic example (IMO) of where auto is handy:

std::vector<sometype> vec;
...
//some code
...
for(auto iter = vec.begin(); iter != vec.end(); ++iter)
{
    //something here
}

Nobody really cares about the details of the iterator variable ( std::vector<sometype>::iterator iter ), only that it is an iterator. Explicitly specifying the type just adds visual noise. Often, without auto , people will create typedefs for commonly used iterators, which can obscure issues and lead to some bizarre error messages if you use the wrong typedef.
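A small sketch of that rule of thumb, using an assumed std::map: rely on auto where the spelled-out type is pure noise, and keep the explicit type where it documents a decision:

#include <map>
#include <string>

double sum(const std::map<std::string, int>& counts) {
    double total = 0.0;   // explicitly double: the precision is a deliberate choice
    // Spelling out std::map<std::string, int>::const_iterator would add nothing here.
    for (auto it = counts.begin(); it != counts.end(); ++it) {
        total += it->second;
    }
    return total;
}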
{ "source": [ "https://softwareengineering.stackexchange.com/questions/240454", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/26621/" ] }
240,512
I work at a mid-sized company (150ish employees, ~10 size engineering team), and most of my projects involve interfacing with lab equipment (oscilloscopes, optical spectrum analyzers, etc) for the purpose of semi-automated test applications. I have run into a few different scenarios where I am unable to efficiently troubleshoot or test new code because I no longer have, or never had, the hardware setup available to me. Example 1: A setup where 10-20 "burn-in" processes are run independently using a bench-top type sensor - I was able to obtain one such sensor for testing and could occasionally steal a second for simulating all of the facets of interfacing to multiple devices (searching, connecting, streaming, etc). Eventually a bug showed up (and ultimately ended up being in the device firmware & drivers) that was very difficult to reproduce accurately with only one unit, but hit near "show stopper" levels when 10-20 of these devices were in use simultaneously. This is still unsolved and is ongoing. Example 2: A test requiring an expensive optical spectrum analyzer as its core component. The device is pretty old, legacy according to the manufacturer (who was acquired by a larger company and basically dissolved), and its only documentation was a long-winded (and uninformative) document that seems poorly translated. During initial development I was able to keep the device at my desk, but now it's tied up, both physically and in schedule, during its 24/7 multi-week tests. When bugs show up, related or unrelated to the device, I often need to go through the trouble of testing code external to the application and fitting it in, or writing code blindly and attempting to squeeze in some testing time in between runs, as much of the program logic requires the OSA and the rest of the test hardware to be in place. I guess my question is how should I approach this? I could potentially spend time developing device simulators, but figuring that into the development estimate will balloon it more than most would probably appreciate. It may not accurately reproduce all issues either, and it's pretty rare to see the same equipment used twice around here. I could get better at unit testing...etc...I could also be loud about the issue and make others understand that temporary delays will be required, not much more than a headache for Research and Development but usually perceived as a joke when pitched to manufacturing.
Management understands it will take longer to develop and maintain software when you don't have full access to test hardware. You need to take this into account when doing your estimates. Part of the acceptance criteria for putting your software into production should be that you have a way to maintain the software under most circumstances without stopping manufacturing. If you're practicing TDD, this should happen pretty much naturally. I used to write software for $60 million aircraft. Obviously, there's a high degree of reliability required, and they are reluctant to give every developer one for their desk. We basically had 5 levels of test environments, with more of the real hardware at each level, up to a full aircraft. I estimate 95% of our software could be developed and debugged only with emulators and unit tests. 95% of the remaining features could be worked on the next level up, and so on. Try to set up similar levels of test environments for yourself. You can't expect to never need access to the real hardware, but if you've set it up so you can't work on your software's GUI without the hardware available, you're wasting valuable time on an expensive resource (not to mention you have some coupling issues with your architecture). Consider that other developers likely have the same issues as you. I would ask the hardware vendor if they already have emulators or other test resources available. You also need to change your mindset somewhat if you only have limited access to hardware. Rather than trying to debug your application in the normal serial manner, you often need to write code specifically for the purpose of gathering information as quickly as possible. For example, perhaps you have a bug and you can think of 10 possible causes. If the only time you can get on a machine is the 15 minutes while the operator is on break, write a Short, Self Contained, Correct (Compilable), Example that triggers the bug and write 10 automated tests using that SSCCE to test your theories and log a bunch of data. Afterward back at your desk you can take as long as you need to sift through the data for your next attempt. The idea is to maximize the utility of your limited time with the hardware.
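One way to build those lower-level test environments is to put a thin interface between your test logic and the instrument, with an emulator behind it. A rough Python sketch, with all class names and command strings invented for illustration:

class SpectrumAnalyzer:
    """The minimal interface the test logic needs (all names here are illustrative)."""
    def connect(self): raise NotImplementedError
    def read_trace(self): raise NotImplementedError

class RealAnalyzer(SpectrumAnalyzer):
    """Talks to the instrument, e.g. via a VISA/GPIB session object passed in."""
    def __init__(self, session):
        self.session = session
    def connect(self):
        self.session.open()
    def read_trace(self):
        return self.session.query("TRACE?")   # hypothetical command string

class FakeAnalyzer(SpectrumAnalyzer):
    """Emulator used when the real instrument is tied up on a multi-week run."""
    def __init__(self, canned_traces):
        self.canned_traces = list(canned_traces)
    def connect(self):
        pass
    def read_trace(self):
        return self.canned_traces.pop(0)

def run_burn_in(analyzer):
    analyzer.connect()
    return [analyzer.read_trace() for _ in range(3)]

# Most of the surrounding logic can now be exercised at your desk:
print(run_burn_in(FakeAnalyzer(["trace-1", "trace-2", "trace-3"])))

The canned traces can come from data you logged during your limited time on the real hardware, which also makes those 15-minute debugging sessions more repeatable.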
{ "source": [ "https://softwareengineering.stackexchange.com/questions/240512", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/115702/" ] }
240,568
From the Code Complete book comes the following quote: "Put the normal case after the if rather than after the else " Which means that exceptions/deviations from the standard path should be put in the else case. But The Pragmatic Programmer teaches us to "crash early" (p. 120). Which rule should I follow?
"Crash early" is not about which line of code comes earlier textually. It tells you to detect errors in the earliest possible step of processing , so that you don't inadvertently make decisions and computations based on already faulty state. In an if / else construct, only one of the blocks is executed, so neither can be said to constitute an "earlier" or "later" step. How to order them is therefore a question of readability, and "fail early" doesn't enter into the decision.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/240568", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/11072/" ] }
240,598
I've worked on several web application projects for the last 3 years, both personal and at work, and I can't seem to figure out whether it's possible to keep at least some business logic from ending up in the view layer of the application. In most cases there will be problems like "If the user has selected option x then the application must enable him to supply info for y, if not then s/he should supply info z". Or do some AJAX operation which should apply some changes to the model but NOT commit them until the user has explicitly requested it. These are some of the simplest problems I've encountered and I can't figure out how it's possible to avoid complex logic in the view. Most of the books I've read describing MVC usually showcase some very trivial examples, like CRUD operations that just update data on the server and display them, but CRUD is not the case in most rich applications. Is it possible to achieve having a view with no business logic at all?
Is it possible to achieve having a view with no business logic at all? I find this a deceptively hard question to answer. (Thought-provoking question!) Theoretically, yes, depending on what we define as business logic. In practice, strict separation becomes a lot harder, and maybe even undesirable. Separation of concerns is a great way to think about building software: it provides you with ideas about where to place code, and it gives maintainers a good idea about where to look for the code. I'll argue that it's basically impossible for humans to build working software without separation of concerns. We need this. But, as with all things, there are trade-offs. The best conceptual location may not be the best location for other reasons. Maybe there's too much load on your web server, so you add some javascript to your web pages to catch easy input errors before they hit your server; now you have some business logic in your view. The view itself, on its own, has no value without the business logic. And to be effective in use and display, implicitly or explicitly, the view will have some knowledge of the business processes going on behind it. We can limit that amount of knowledge, and we can cordon off parts of it, but practical considerations will often force us to 'break' separation of concerns.
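One practical compromise is to keep the decision in a presenter/view-model function and let the view only render what it is told; a rough sketch with invented names:

# The business rule lives outside the template, where it is easy to unit test.
def extra_fields_for(form):
    if form.delivery_option == "pickup":
        return []                                   # no address needed
    return ["street", "city", "postal_code"]

# The view just renders whatever it is handed.
def render_form(form):
    fields = ["name", "email"] + extra_fields_for(form)
    return "\n".join(f"<input name='{name}'>" for name in fields)

The view still "knows" there is such a thing as conditional fields, but the rule itself sits in one testable place.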
{ "source": [ "https://softwareengineering.stackexchange.com/questions/240598", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/111500/" ] }
241,104
Up until now I don't know the KEY differences between these three. When someone asks me about this, I only tell them that C# is a programming language, HTML and XML are Markup Languages, and JavaScript and VBScript are scripting languages. But what are the key differences that distinguish them from one another?
Let me attempt to find a dividing line between these three types of language. Of course, there will be numerous exceptions and counterexamples, since this is just my opinion. A markup language is used to control the presentation of data, like "represent these user names as a bullet list or as a table". A scripting language is used to mediate between programs in order to generate data. This is especially true of shell scripting languages like bash, but if you think about it, Python and Perl also came from the need to accomplish tasks in UNIX without writing a program in C. The program that you control most of the time in those languages is the interpreter of the language itself , which accomplishes general tasks for you. Other typical programs you interact with are database servers, or web servers. Going back to the user list metaphor, in a scripting language you ask the database "give me all user names", then ask the web server "send this user list to this requester". A programming language is used to transform data . It does so by creating CPU instructions that rewrite the input data into the output; hopefully, the desired output. Examples of transforming data are computing a sum out of a number of addends, or solving a system of differential equations from a set of conditions, or writing to and reading from a tree-like structure in a consistent manner given a sequence of possibly simultaneous queries. Going back to the user list metaphor, in a programming language you write how to traverse a table of records, extract from each record the "name" field, and return all of them to the requester. Note that scripting languages are a subset of programming languages, i.e. a language may be both "scripting" and "programming": Python is regularly used to "mediate between programs", and also to "transform data". There are other languages like Java which are seldom used to "mediate between programs", not because this is impossible but because they are not designed to make this easy. The key feature of a scripting language is that it can orchestrate other programs, just like a script gives the cue to an actor to start his part.
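To stretch the user-list metaphor into (entirely hypothetical) code, all three roles show up in a few lines:

# "Programming": transform data -- pull the name out of each record.
def user_names(records):
    return [record["name"] for record in records]

# "Markup": control presentation -- the same names as a bullet list.
def render(names):
    return "<ul>" + "".join(f"<li>{name}</li>" for name in names) + "</ul>"

# "Scripting": mediate between programs -- the database does the real work,
# the web server delivers the result; this code just glues them together.
def handle_request(db, respond):
    rows = db.execute("SELECT name FROM users")
    respond(render(user_names(rows)))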
{ "source": [ "https://softwareengineering.stackexchange.com/questions/241104", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/105867/" ] }
241,109
I am trying to improve the backup situation for my application. I have a Django application and MySQL database. I read an article suggesting backing up the database in Git. On the one hand I like it, as it will keep a copy of the data and the code in sync. But Git is designed for code, not for data. As such it will be doing a lot of extra work diffing the MySQL dump every commit, which is not really necessary. If I compress the file before storing it, will git still diff the files? (The dump file is currently 100MB uncompressed, 5.7MB when bzipped.) Edit: the code and database schema definitions are already in Git, it is really the data I am concerned about backing up now.
Before you lose any data, let me try to introduce a sysadmin perspective to this question. There is only one reason we create backups: to make it possible to restore when something goes wrong, as it invariably will. As such, a proper backup system has requirements that go far beyond what git can reasonably handle. Here are some of the issues I can foresee with trying to backup your database in git: The repository will grow dramatically with every "backup". Since git stores entire objects (albeit compressed) and then diffs them later (e.g. when you run git gc ) , and keeps history forever , you will have a very large amount of data stored that you don't actually need or even want. You might need to limit the amount or retention period of backups you do to save disk space or for legal reasons, but it's difficult to remove old revisions from a git repo without a lot of collateral damage. Restoring is limited to points in time that you have stored in the repository, and since the data is so large, going back more than a trivial amount of time may be slow. A backup system designed for the purpose limits the amount of data stored while potentially providing more granularity, and provides faster restores, reducing downtime in the event of a disaster. Database-aware backup solutions ( example ) can also provide continuous backup, ensuring that not a single transaction is lost. Commits are likely to be slow as well, and get slower as the database grows. Remember that git is essentially a key-value data store mapped onto a filesystem , and thus is subject to the performance characteristics of the underlying filesystem. It is possible for this length of time to eventually exceed the backup interval, and at that point you can no longer meet your SLA. Proper backup systems also take longer to backup as the data grows, but not nearly so dramatically, since they will automatically manage their own size based on the retention policy you will have configured. Despite the fact that there are apparently several interesting things you can do with a database dump if you put it into git, overall I can't recommend it for the purpose of keeping backups. Especially since backup systems are widely available (and many are even open source) and work much better at keeping your data safe and making it possible to recover as quickly as possible.
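If a full backup system feels like overkill right now, even a small scheduled dump-and-rotate script addresses the retention and restore concerns above better than a git repository would. A rough Python sketch, where the paths, database name and retention period are all placeholders:

import datetime
import pathlib
import subprocess

BACKUP_DIR = pathlib.Path("/var/backups/mysql")   # placeholder location
KEEP = 14                                         # retention: last 14 nightly dumps

def nightly_dump(database="appdb"):
    stamp = datetime.date.today().isoformat()
    target = BACKUP_DIR / f"{database}-{stamp}.sql.gz"
    with open(target, "wb") as out:
        dump = subprocess.Popen(
            ["mysqldump", "--single-transaction", database],
            stdout=subprocess.PIPE,
        )
        subprocess.run(["gzip", "-c"], stdin=dump.stdout, stdout=out, check=True)
        dump.wait()
    # Enforce the retention policy instead of keeping history forever.
    for old in sorted(BACKUP_DIR.glob(f"{database}-*.sql.gz"))[:-KEEP]:
        old.unlink()

A purpose-built tool still beats this for continuous backup and fast restores, but unlike git this at least bounds the stored data and keeps restore points easy to grab.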
{ "source": [ "https://softwareengineering.stackexchange.com/questions/241109", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/68808/" ] }
241,309
When implementing the Builder Pattern, I often find myself confused with when to let building fail and I even manage to take different stands on the matter every few days. First some explanation: With failing early I mean that building an object should fail as soon as an invalid parameter is passed in. So inside the SomeObjectBuilder . With failing late I mean that building an object can only fail on the build() call that implicitly calls a constructor of the object to be built. Then some arguments: In favor of failing late: A builder class should be no more than a class that simply holds values. Moreover, it leads to less code duplication. In favor of failing early: A general approach in software programming is that you want to detect issues as early as possible and therefore the most logical place to check would be in the builder class' constructor, 'setters' and ultimately in the build method. What is the general consensus about this?
Let's look at the options, where we can place the validation code: Inside the setters in builder. Inside the build() method. Inside the constructed entity: it will be invoked in build() method when the entity is being created. Option 1 allows us to detect problems earlier, but there can be complicated cases when we can validate input only having the full context, thus, doing at least part of validation in build() method. Thus, choosing option 1 will lead to inconsistent code with part of validation being done in one place and another part being done in other place. Option 2 isn't significantly worse than option 1, because, usually, setters in builder are invoked right before the build() , especially, in fluent interfaces. Thus, it's still possible to detect a problem early enough in most cases. However, if the builder is not the only way to create an object, it will lead to duplication of validation code, because you'll need to have it everywhere where you create an object. The most logical solution in this case will be to put validation as close to created object as possible, that is, inside of it. And this is the option 3 . From SOLID point of view, putting validation in builder also violates SRP: the builder class already has responsibility of aggregating the data to construct an object. Validation is establishing contracts on its own internal state, it's a new responsibility to check the state of another object. Thus, from my point of view, not only it's better to fail late from design perspective, but it's also better to fail inside the constructed entity, rather than in builder itself. UPD: this comment reminded me of one more possibility, when validation inside the builder (option 1 or 2) makes sense. It does make sense if the builder has its own contracts on the objects it is creating. For example, assume that we have a builder that constructs a string with specific content, say, list of number ranges 1-2,3-4,5-6 . This builder may have a method like addRange(int min, int max) . The resulting string does not know anything about these numbers, neither it should have to know. The builder itself defines the format of the string and constraints on the numbers. Thus, the method addRange(int,int) must validate the input numbers and throw an exception if max is less than min. That said, the general rule will be to validate only the contracts defined by the builder itself.
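A small Java sketch of option 3, where build() just delegates to a constructor that owns the entity's contracts (class and field names are made up for illustration):

final class Reservation {
    private final String guest;
    private final int nights;

    Reservation(String guest, int nights) {
        // The entity validates its own invariants, wherever it is created from.
        if (guest == null || guest.isEmpty())
            throw new IllegalArgumentException("guest is required");
        if (nights <= 0)
            throw new IllegalArgumentException("nights must be positive");
        this.guest = guest;
        this.nights = nights;
    }

    static final class Builder {
        private String guest;
        private int nights;

        Builder guest(String value) { this.guest = value; return this; }
        Builder nights(int value)   { this.nights = value; return this; }

        // The builder only aggregates data; failure happens "late", in one place.
        Reservation build()         { return new Reservation(guest, nights); }
    }
}

A builder-owned contract like the addRange(min, max) example above would, by contrast, be checked inside the builder method itself.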
{ "source": [ "https://softwareengineering.stackexchange.com/questions/241309", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/115134/" ] }
242,657
Say Alice and Peter each have a 4GB USB flash memory stick. They meet and save on both sticks two files named alice_to_peter.key (2GB) and peter_to_alice.key (2GB) which contain randomly generated bits. They never meet again, but communicate electronically. Alice also maintains a variable called alice_pointer and Peter maintains variable called peter_pointer , both of which are initially set to zero. When Alice needs to send a message to Peter, she does (where n is the nth byte of the message): encrypted_message_to_peter[n] = message_to_peter[n] XOR alice_to_peter.key[alice_pointer + n] encrypted_payload_to_peter = alice_pointer + encrypted_message_to_peter alice_pointer += length(encrypted_message_to_peter) (and for maximum security, the used part of the key can be erased) Peter receives encrypted_payload_to_peter , reads alice_pointer stored at the beginning of message and does: message_to_peter[n] = encrypted_message_to_peter[n] XOR alice_to_peter.key[alice_pointer + n] And for maximum security, after reading of message also erase the used part of the key. - EDIT: In fact this step with this simple algorithm (without integrity check and authentication) decreases security, see Paŭlo Ebermann post below. When Peter needs to send a message to Alice they do the reverse, this time with peter_to_alice.key and peter_pointer . With this trivial schema they can send each day for the next 50 years 2GB / (50 * 365) = ~115kB of encrypted data in both directions. If they need more data to send, they could use larger keys, for example with today's 2TB HDs (1TB keys) it would be possible to exchange 60MB/day for the next 50 years! That's a lot of data in practice; for example, using compression it's more than hour of high quality voice communication. It seems to me that there is no way for an attacker to read the encrypted messages without the keys, because even if they have an infinitely fast computer, with brute force they can get every possible message under the limit, but this is an astronomical number of messages and the attacker doesn't know which of them is the actual message. Am I right? Is this communication scheme really absolutely secure? And if it is secure, does it have its own name? XOR encryption is well-known, but I'm looking for the name of this concrete practical application using large keys on both sides? I am humbly expecting that this application has been invented someone before me. :-) Note: If it's absolutely secure then it's amazing, because with today's low cost large storage devices, it would be much cheaper to do secure communication than with expensive quantum cryptography, and this has equivalent security! EDIT: I think this will be more practical in the future as storage costs decrease. It can solve secure communication forever. Today you have no certainty if someone successfully attacks existing ciphers even a year later and makes its often expensive implementations insecure. In many cases before communication occurs, when both sides meet personally, that's the time to generate the keys. I think it's perfect for military communication, for example between submarines which can have HDs with large keys, and military central can have a HD for each submarine. It could also be practical in everyday life, for example to control your bank account, because when you create your account you meet with the bank etc.
Yes, this is a One-time pad . If the key material is never re-used, it is theoretically secure. The downsides are that you would need one key per communicating pair of principals and you would need a secure way of exchanging the key material in advance of communicating.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/242657", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/124070/" ] }
242,795
I've seen people talking about Free Monad with Interpreter , particularly in the context of data-access. What is this pattern? When might I want to use it? How does it work, and how would I implement it? I understand (from posts such as this ) that it's about separating model from data-access. How does it differ from the well-known Repository pattern? They appear to have the same motivation.
The actual pattern is actually significantly more general than just data access. It's a lightweight way of creating a domain-specific language that gives you an AST, and then having one or more interpreters to "execute" the AST however you like. The free monad part is just a handy way to get an AST that you can assemble using Haskell's standard monad facilities (like do-notation) without having to write lots of custom code. This also ensures that your DSL is composable : you can define it in parts and then put the parts together in a structured way, letting you take advantage of Haskell's normal abstractions like functions. Using a free monad gives you the structure of a composable DSL; all you have to do is specify the pieces. You just write a data type that encompasses all of the actions in your DSL. These actions could be doing anything, not just data access. However, if you specified all your data accesses as actions, you would get an AST that specifies all the queries and commands to the data store. You could then interpret this however you like: run it against a live database, run it against a mock, just log the commands for debugging or even try optimizing the queries. Lets look at a very simple example for, say, a key value store. For now, we'll just treat both keys and values as strings, but you could add types with a bit of effort. data DSL next = Get String (String -> next) | Set String String next | End The next parameter lets us combine actions. We can use this to write a program that gets "foo" and sets "bar" with that value: p1 = Get "foo" $ \ foo -> Set "bar" foo End Unfortunately, this is not enough for a meaningful DSL. Since we used next for composition, the type of p1 is the same length as our program (ie 3 commands): p1 :: DSL (DSL (DSL next)) In this particular example, using next like this seems a little odd, but it's important if we want our actions to have different type variables. We might want a typed get and set , for example. Note how the next field is different for each action. This hints that we can use it to make DSL a functor: instance Functor DSL where fmap f (Get name k) = Get name (f . k) fmap f (Set name value next) = Set name value (f next) fmap f End = End In fact, this is the only valid way to make it a Functor, so we can use deriving to create the instance automatically by enabling the DeriveFunctor extension. The next step is the Free type itself. That's what we use to represent our AST structure , build on top of the DSL type. You can think of it like a list at the type level, where "cons" is just nesting a functor like DSL : -- compare the two types: data Free f a = Free (f (Free f a)) | Return a data List a = Cons a (List a) | Nil So we can use Free DSL next to give programs of different sizes the same types: p2 = Free (Get "foo" $ \ foo -> Free (Set "bar" foo (Free End))) Which has the much nicer type: p2 :: Free DSL a However, the actual expression with all of its constructors is still very awkward to use! This is where the monad part comes in. As the name "free monad" implies, Free is a monad—as long as f (in this case DSL ) is a functor: instance Functor f => Monad (Free f) where return = Return Free a >>= f = Free (fmap (>>= f) a) Return a >>= f = f a Now we're getting somewhere: we can use do notation to make our DSL expressions nicer. The only question is what to put in for next ? 
Well, the idea is to use the Free structure for composition, so we will just put Return for each next field and let the do-notation do all the plumbing: p3 = do foo <- Free (Get "foo" Return) Free (Set "bar" foo (Return ())) Free End This is better, but it's still a bit awkward. We have Free and Return all over the place. Happily, there's a pattern we can exploit: the way we "lift" a DSL action into Free is always the same—we wrap it in Free and apply Return for next : liftFree :: Functor f => f a -> Free f a liftFree action = Free (fmap Return action) Now, using this, we can write nice versions of each of our commands and have a full DSL: get key = liftFree (Get key id) set key value = liftFree (Set key value ()) end = liftFree End Using this, here's how we can write our program: p4 :: Free DSL a p4 = do foo <- get "foo" set "bar" foo end The neat trick is that while p4 looks just like a little imperative program, it's actually an expression that has the value Free (Get "foo" $ \ foo -> Free (Set "bar" foo (Free End))) So, the free monad part of the pattern has gotten us a DSL that produces syntax trees with nice syntax. We can also write composable sub-trees by not using End ; for example, we could have follow which takes a key, gets its value and then uses that as a key itself: follow :: String -> Free DSL String follow key = do key' <- get key get key' Now follow can be used in our programs just like get or set : p5 = do foo <- follow "foo" set "bar" foo end So we get some nice composition and abstraction for our DSL as well. Now that we have a tree, we get to the second half of the pattern: the interpreter. We can interpret the tree however we like just by pattern-matching on it. This would let us write code against a real data store in IO , as well as other things. Here's an example against a hypothetical data store: runIO :: Free DSL a -> IO () runIO (Free (Get key k)) = do res <- getKey key runIO $ k res runIO (Free (Set key value next)) = do setKey key value runIO next runIO (Free End) = close runIO (Return _) = return () This will happily evaluate any DSL fragment, even one that isn't ended with end . Happily, we can make a "safe" version of the function that only accepts programs closed with end by setting the input type signature to (forall a. Free DSL a) -> IO () . While the old signature accepts a Free DSL a for any a (like Free DSL String , Free DSL Int and so on), this version only accepts a Free DSL a that works for every possible a —which we can only create with end . This guarantees we won't forget to close the connection when we're done. safeRunIO :: (forall a. Free DSL a) -> IO () safeRunIO = runIO (We can't just start by giving runIO this type because it won't work properly for our recursive call. However, we could move the definition of runIO into a where block in safeRunIO and get the same effect without exposing both versions of the function.) Running our code in IO is not the only thing we could do. For testing, we might want to run it against a pure State Map instead. Writing out that code is a good exercise. So this is the free monad + interpreter pattern. We make a DSL, taking advantage of the free monad structure to do all the plumbing. We can use do-notation and the standard monad functions with our DSL. Then, to actually use it, we have to interpret it somehow; since the tree is ultimately just a data structure, we can interpret it however we like for different purposes. 
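That exercise might come out roughly like the following, assuming the DSL, Free and p4 definitions above plus the containers and mtl packages (an untested sketch):

import qualified Data.Map as Map
import Control.Monad.State (State, gets, modify, execState)

-- A pure interpreter for testing: the "database" is just a Map in a State monad.
runPure :: Free DSL a -> State (Map.Map String String) ()
runPure (Free (Get key k)) = do
  value <- gets (Map.findWithDefault "" key)
  runPure (k value)
runPure (Free (Set key value next)) = do
  modify (Map.insert key value)
  runPure next
runPure (Free End)    = return ()
runPure (Return _)    = return ()

-- ghci> execState (runPure p4) (Map.fromList [("foo", "42")])
-- fromList [("bar","42"),("foo","42")]

The same program value p4 is interpreted twice in completely different ways, which is the whole point of keeping the AST separate from its interpreters.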
When we use this to manage accesses to an external data store, it is indeed similar to the Repository pattern. It intermediates between our data store and our code, separating the two out. In some ways, though, it's more specific: the "repository" is always a DSL with an explicit AST which we can then use however we like. However, the pattern itself is more general than that. It can be used for lots of things which do not necessarily involve external databases or storage. It makes sense wherever you want fine control of effects or multiple targets for a DSL.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/242795", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/103810/" ] }
242,816
Apple launched its new programming language Swift at WWDC14 . In the presentation, they made some performance comparisons between Objective-C and Python. The following is a picture of one of their slides, of a comparison of those three languages performing some complex object sort: There was an even more incredible graph about a performance comparison using the RC4 encryption algorithm. Obviously this is a marketing talk, and they didn't go into detail on how this was implemented in each. It leaves me wondering though: How can a new programming language be so much faster? Are the Objective-C results caused by a bad compiler or is there something less efficient in Objective-C than Swift? How would you explain a 40% performance increase? I understand that garbage collection/automatic reference counting might produce some additional overhead, but this much?
First, (IMO) comparing with Python is nearly meaningless. Only the comparison with Objective-C is meaningful. How can a new programming language be so much faster? Objective-C is a slow language. (Only the C part is fast, but that's because it's C.) It has never been extremely fast. It was just fast enough for Apple's purposes, and faster than their older versions. And it was slow because... Are the Objective-C results caused by a bad compiler or is there something less efficient in Objective-C than Swift? Objective-C guarantees that every method is dynamically dispatched. No static dispatch at all. That made it impossible to optimize an Objective-C program further. Well, maybe JIT technology could be some help, but AFAIK Apple really hates unpredictable performance characteristics and object lifetimes, and I don't think they have adopted any JIT stuff. Swift doesn't have that dynamic dispatch guarantee unless you put on a special attribute for Objective-C compatibility. How would you explain a 40% performance increase? I understand that garbage collection/automatic reference counting might produce some additional overhead, but this much? GC or RC doesn't matter here. Swift primarily employs RC as well. There is no GC, and there won't be unless there's some huge architectural leap in GC technology (IMO, never). I believe Swift simply has a lot more room for static optimization. Low-level encryption algorithms especially, as they usually rely on huge amounts of numeric calculation, and this is a huge win for statically dispatched languages. Actually I was surprised, because 40% seems too small; I expected far more. Anyway, this is the initial release, and I think optimization was not the primary concern. Swift is not even feature-complete! They will make it better. Update Some keep bugging me to argue that GC technology is superior. Though the stuff below can be argued with, and is just my very biased opinion, I think I have to say it to avoid this unnecessary argument. I know what conservative/tracing/generational/incremental/parallel/realtime GCs are and how they differ. I think most readers already know that too. I also agree that GC is very nice in some fields, and also shows high throughput in some cases. Anyway, I am suspicious of the claim that GC throughput is always better than RC. Most of the overhead of RC comes from the ref-counting operations and the locking to protect the ref-count variable. And RC implementations usually provide a way to avoid counting operations. In Objective-C there's __unsafe_unretained , and in Swift (though it's still somewhat unclear to me) there is unowned . If the ref-counting cost is not acceptable, you can try to opt out of it selectively using those mechanisms. Theoretically, we can simulate an almost unique-ownership scenario by using non-retaining references very aggressively to avoid RC overhead. I also expect the compiler to eliminate some obviously unnecessary RC operations automatically. Unlike with RC, AFAIK, partial opt-out of reference types is not an option on a GC system. I know there are many released graphics programs and games which use GC-based systems, and I also know most of them suffer from a lack of determinism, not only in performance characteristics but also in object lifetime management. Unity is mostly written in C++, but the tiny C# part causes all the weird performance issues. HTML hybrid apps are still suffering from unpredictable spikes on every system. Being used widely doesn't mean something is superior. It just means it's easy and popular for people who don't have many options.
Update 2 Again, to avoid unnecessary argument or discussion, I'll add some more details. @Asik provided an interesting opinion about GC spikes: that we can regard a value-type-everywhere approach as a way to opt out of GC. This is pretty attractive, and even doable on some systems (with a purely functional approach, for example). I agree that this is nice in theory. But in practice it has several issues. The biggest problem is that partial application of this trick does not provide true spike-free characteristics, because a latency issue is always an all-or-nothing problem. If you have one frame spike in 10 seconds (= 600 frames), then the whole system is obviously failing. This is not about better or worse; it's just pass or fail (or less than 0.0001%). Then where is the source of a GC spike? Bad distribution of GC load. And that's because the GC is fundamentally non-deterministic. If you make any garbage, it will activate the GC, and a spike will happen eventually. Of course, in an ideal world where the GC load is always ideal this won't happen, but I live in the real world rather than an imaginary ideal world. So if you want to avoid spikes, you have to remove all the ref-types from the whole system. But that's hard, insane, and even impossible due to unremovable parts such as the .NET core system and libraries. Just using a non-GC system is far easier . Unlike GC, RC is fundamentally deterministic, and you don't have to use this insane optimization (purely-value-type-only) just to avoid spikes. What you have to do is track down and optimize the part which causes the spike. In RC systems, a spike is a local algorithm issue, but in GC systems, spikes are always a global system issue. I think my answer has gone too far off-topic, and is mostly just repetition of existing discussions. If you really want to claim some superiority/inferiority/alternative or anything else about GC/RC, there are plenty of existing discussions on this site and Stack Overflow, and you can continue the fight there.
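To make the dispatch argument at the top of this answer concrete, here is a small Swift sketch (the @objc parts need an Apple platform, and the loop count is arbitrary):

import Foundation

// Statically dispatched: the compiler knows exactly which function runs,
// so it is free to inline and optimise the whole loop.
struct FastCounter {
    private(set) var value = 0
    mutating func bump() { value += 1 }
}

// Exposed to the Objective-C runtime and marked dynamic: every call goes
// through message dispatch, which blocks inlining and most optimisation.
class LegacyCounter: NSObject {
    @objc dynamic var value = 0
    @objc dynamic func bump() { value += 1 }
}

var fast = FastCounter()
let legacy = LegacyCounter()
for _ in 0..<1_000_000 {
    fast.bump()      // candidate for inlining, maybe even constant folding
    legacy.bump()    // looked up dynamically on every iteration
}
print(fast.value, legacy.value)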
{ "source": [ "https://softwareengineering.stackexchange.com/questions/242816", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/134334/" ] }
243,012
On the Wikipedia page on dependency injection, the disadvantages section tells us this: Dependency injection increases coupling by requiring the user of a subsystem to provide for the needs of that subsystem. with a link to an article against dependency injection . Dependency injection makes a class use the interface instead of the concrete implementation. That should result in decreased coupling , no? What am I missing? How is dependency injection increasing coupling between classes?
So, what am I missing? Dependency Injection decreases coupling between a class and its dependency. But it increases the coupling between a class and its consumer (since the consumer needs more info to create it) and the dependency and its consumer (since the consumer needs to know the dependency to use). Very often, this is a good trade off. The class shouldn't know the details about its dependencies beyond an interface, and it should be the responsibility of the application to tie specific bits of code together.
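As a small illustration (Java-flavoured, with invented names), compare what the consumer has to know in each case:

interface Formatter { String format(String body); }

class PdfFormatter implements Formatter {
    public String format(String body) { return "pdf:" + body; }
}

// No injection: Report is tightly coupled to PdfFormatter,
// but its consumers need to know nothing about formatting at all.
class Report {
    private final Formatter formatter = new PdfFormatter();
    String render() { return formatter.format("..."); }
}

// With injection: Report only depends on the interface,
// but now every consumer must know about Formatter and supply one.
class InjectedReport {
    private final Formatter formatter;
    InjectedReport(Formatter formatter) { this.formatter = formatter; }
    String render() { return formatter.format("..."); }
}

class CompositionRoot {
    public static void main(String[] args) {
        // The wiring knowledge lands here, in the application, by design.
        System.out.println(new InjectedReport(new PdfFormatter()).render());
    }
}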
{ "source": [ "https://softwareengineering.stackexchange.com/questions/243012", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/20065/" ] }
243,154
I've been trying to think of a way of declaring strongly typed typedefs, to catch a certain class of bugs in the compilation stage. It's often the case that I'll typedef an int into several types of ids, or a vector to position or velocity: typedef int EntityID; typedef int ModelID; typedef Vector3 Position; typedef Vector3 Velocity; This can make the intent of code more clear, but after a long night of coding one might make silly mistakes like comparing different kinds of ids, or adding a position to a velocity perhaps. EntityID eID; ModelID mID; if ( eID == mID ) // <- Compiler sees nothing wrong { /*bug*/ } Position p; Velocity v; Position newP = p + v; // bug, meant p + v*s but compiler sees nothing wrong Unfortunately, suggestions I've found for strongly typed typedefs include using boost, which at least for me isn't a possibility (I do have c++11 at least). So after a bit of thinking, I came upon this idea, and wanted to run it by someone. First, you declare the base type as a template. The template parameter isn't used for anything in the definition, however: template < typename T > class IDType { unsigned int m_id; public: IDType( unsigned int const& i_id ): m_id {i_id} {}; friend bool operator==<T>( IDType<T> const& i_lhs, IDType<T> const& i_rhs ); }; Friend functions actually need to be forward declared before the class definition, which requires a forward declaration of the template class. We then define all the members for the base type, just remembering that it's a template class. Finally, when we want to use it, we typedef it as: class EntityT; typedef IDType<EntityT> EntityID; class ModelT; typedef IDType<ModelT> ModelID; The types are now entirely separate. Functions that take an EntityID will throw a compiler error if you try to feed them a ModelID instead, for example. Aside from having to declare the base types as templates, with the issues that entails, it's also fairly compact. I was hoping anyone had comments or critiques about this idea? One issue that came to mind while writing this, in the case of positions and velocities for example, would be that I can't convert between types as freely as before. Where before multiplying a vector by a scalar would give another vector, so I could do: typedef float Time; typedef Vector3 Position; typedef Vector3 Velocity; Time t = 1.0f; Position p = { 0.0f }; Velocity v = { 1.0f, 0.0f, 0.0f }; Position newP = p + v*t; With my strongly typed typedef I'd have to tell the compiler that multypling a Velocity by a Time results in a Position. class TimeT; typedef Float<TimeT> Time; class PositionT; typedef Vector3<PositionT> Position; class VelocityT; typedef Vector3<VelocityT> Velocity; Time t = 1.0f; Position p = { 0.0f }; Velocity v = { 1.0f, 0.0f, 0.0f }; Position newP = p + v*t; // Compiler error To solve this, I think I'd have to specialize every conversion explicitly, which can be kind of a bother. On the other hand, this limitation can help prevent other kinds of errors (say, multiplying a Velocity by a Distance, perhaps, which wouldn't make sense in this domain). So I'm torn, and wondering if people have any opinions on my original issue, or my approach to solving it.
These are phantom type parameters , that is, parameters of a parameterised type that are used not for their representation, but to separate different “spaces” of types with the same representation. And speaking of spaces, that’s a useful application of phantom types: template<typename Space> struct Point { double x, y; }; struct WorldSpace; struct ScreenSpace; // Conversions between coordinate spaces are explicit. Point<ScreenSpace> project(Point<WorldSpace> p, const Camera& c) { … } As you’ve seen, though, there are some difficulties with unit types. One thing you can do is decompose units into a vector of integer exponents on the fundamental components: template<typename T, int Meters, int Seconds> struct Unit { Unit(const T& value) : value(value) {} T value; }; template<typename T, int MA, int MB, int SA, int SB> Unit<T, MA - MB, SA - SB> operator/(const Unit<T, MA, SA>& a, const Unit<T, MB, SB>& b) { return a.value / b.value; } Unit<double, 0, 0> one(1); Unit<double, 1, 0> one_meter(1); Unit<double, 0, 1> one_second(1); // Unit<double, 1, -1> auto one_meter_per_second = one_meter / one_second; Here we’re using phantom values to tag runtime values with compile-time information about the exponents on the units involved. This scales better than making separate structures for velocities, distances, and so on, and might be enough to cover your use case.
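Building on those definitions, you could add the remaining operators and let the compiler enforce the dimensions; a sketch (deliberately not exhaustive):

// Multiplication adds the exponents...
template<typename T, int MA, int MB, int SA, int SB>
Unit<T, MA + MB, SA + SB> operator*(const Unit<T, MA, SA>& a, const Unit<T, MB, SB>& b) {
    return a.value * b.value;
}

// ...while addition only exists for identical exponents.
template<typename T, int M, int S>
Unit<T, M, S> operator+(const Unit<T, M, S>& a, const Unit<T, M, S>& b) {
    return a.value + b.value;
}

// Position + velocity * time type-checks (metres + metres):
auto new_position = one_meter + one_meter_per_second * one_second;

// auto nonsense = one_meter + one_second;   // refuses to compile: mismatched units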
{ "source": [ "https://softwareengineering.stackexchange.com/questions/243154", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/134748/" ] }
243,269
As I understand it, the cause of the speed difference between compiled languages and Python is that the former compile code all the way down to the native machine's code, whereas Python compiles to Python bytecode, to be interpreted by the PVM. I see that this way Python code can be used on multiple operating systems (at least in most cases); however, I do not understand why there is not an additional (and optional) compiler for Python which compiles the same way as traditional compilers. This would leave it to the programmer to choose which is more important to them: multiplatform executability or performance on the native machine. In general: why are there no languages which could behave both as compiled and as interpreted?
No. The reason why there are speed differences between languages like Python and C++ is because statically-typed languages give the compiler tons of information about the structure of the program and its data which allows it to optimize both computations and memory access. Because C++ knows that a variable is of type int, it can determine the optimal way to manipulate that variable even before the program is run. In Python on the other hand, the runtime doesn't know what value is in a variable until the line is reached by the interpreter. This is extremely important for structures, where in C++, the compiler can easily tell the size of the structure and every location of its fields within memory during compilation. This gives it huge power in predicting how the data might be used and lets it optimize according to those predictions. No such thing is possible for languages like Python. To effectively compile languages like Python you would need to: Ensure that the structure of data is static during the execution of the program. This is problematic because Python has eval and metaclasses. Both of which make it possible to change the structure of the program based on the input of the program. This is one of the things that give Python such expressive power. Infer the types of all variables, structures and classes from the source code itself. While it is possible to some degree, the static type system and algorithm would be so complex it would be almost impossible to implement in a usable way. You could do it for a subset of the language, but definitely not for the whole set of language features.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/243269", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/134888/" ] }
244,354
(This is an extremely newbie-ish question). I've been studying a little about Virtual Machines. Turns out a lot of them are designed very similarly to physical or theoretical computers. I read that the JVM, for example, is a 'stack machine'. What that means (and correct me if I'm wrong) is that it stores all of its 'temporary memory' on a stack, and performs operations on this stack for all of its opcodes. For example, the source code 2 + 3 will be translated to bytecode similar to: push 2 push 3 add My question is this: JVMs are probably written using C/C++ and such. If so, why doesn't the JVM execute the following C code: 2 + 3 ..? I mean, why does it need a stack, or in other VMs 'registers' - like in a physical computer? The underlying physical CPU takes care of all of this. Why don't VM writers simply execute the interpreted bytecode with 'usual' instructions in the language the VM is programmed with? Why do VMs need to emulate hardware, when the actual hardware already does this for us? Again, very newbie-ish questions. Thanks for your help
A machine, virtual or not, needs a model of computation which describes how computation is carried out on it. By definition, as soon as it computes, it implements some model of computation. The question then is: What model should we choose for our VM? Physical machines are constrained by what can be effectively and efficiently done in hardware. But, as you note, virtual machines have no such constraints; they are defined in software using arbitrarily high level languages. There are, in fact, virtual machines that are high-level as you describe. They are called programming languages . The C standard for example dedicates the bulk of its pages to defining a model for the so-called "C abstract machine" which describes how C programs behave, and by extension (as-if rule) how a conforming C compiler (or interpreter) should behave. Of course, we usually don't call that a virtual machine. A VM is usually taken to mean something lower-level, closer to hardware, not intended to be directly programmed, designed to be executed efficiently. This selection bias means that something that accepts high-level composable code (like what you describe) wouldn't be considered a VM because it executes high-level code. But to get to the point, here are some reasons to make a VM (as in, something targeted by a bytecode compiler) stack- or register-based or the like. Stack and register machines are extremely simple. There's a sequence of instructions, some state, and semantics for each instruction (a function State -> State). No complex tree reductions, no operator precedence. Parsing, analysing and executing it is very simple, because it's a minimal language (syntactic sugar is compiled away) and designed to be machine-read rather than human-read. In contrast, parsing even the simplest C-like languages is quite hard, and executing them requires non-local analyses like checking and propagating types, resolving overloads, maintaining a symbol table, resolving string identifiers, turning linear text into a precedence-driven AST, and so on. It builds on concepts that come naturally to humans but have to be painstakingly reverse engineered by machines. JVM bytecode, for example, is emitted by javac . It virtually never needs to be read or written by humans, so it's natural to gear it towards consumption by machines. If you optimized it for humans, the JVM would just on every startup read the code, parse it, analyze it, and then convert it into an intermediate representation resembling such a simplified machine model anyway. Might as well cut out the middle man.
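To make the "extremely simple" point concrete, here is a toy sketch in Java of a stack-machine dispatcher. This is purely illustrative - it is not how the JVM is actually implemented, and it assumes a recent JDK (records and arrow-style switch) - but it shows that the whole execution model is one loop, one switch and one stack.

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.List;

    class ToyStackMachine {
        enum Op { PUSH, ADD }
        record Instr(Op op, int arg) {}

        static int run(List<Instr> program) {
            Deque<Integer> stack = new ArrayDeque<>();
            for (Instr i : program) {
                switch (i.op()) {
                    case PUSH -> stack.push(i.arg());
                    case ADD  -> stack.push(stack.pop() + stack.pop());
                }
            }
            return stack.pop();
        }

        public static void main(String[] args) {
            // push 2, push 3, add  ->  prints 5
            System.out.println(run(List.of(
                    new Instr(Op.PUSH, 2), new Instr(Op.PUSH, 3), new Instr(Op.ADD, 0))));
        }
    }

There is no symbol table, no precedence, no type inference here - just instruction dispatch, which is exactly what makes such a format cheap for a machine to consume.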
{ "source": [ "https://softwareengineering.stackexchange.com/questions/244354", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/121368/" ] }
244,373
I've learned how to program primarily from an OOP standpoint (like most of us, I'm sure), but I've spent a lot of time trying to learn how to solve problems the functional way. I have a good grasp on how to solve calculational problems with FP, but when it comes to more complicated problems I always find myself reverting to needing mutable objects. For example, if I'm writing a particle simulator, I will want particle "objects" with a mutable position to update. How are inherently "stateful" problems typically solved using functional programming techniques?
Functional programs handle state very well, but require a different way of looking at it. For your position example, one thing to consider is having your position be a function of time instead of a fixed value . This works well for particles following a fixed mathematical path, but you require a different strategy for handling a change in the path, such as after a collision. The basic strategy here is you create functions that take in a state and return the new state . So a particle simulator would be a function that takes a Set of particles as input and returns a new Set of particles after a time step. Then you just repeatedly call that function with its input set to its previous result.
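As a rough Java sketch of that last idea (the Vec/Particle record types and names are mine, not from the question, and it assumes a recent JDK for records and Stream.toList), a "step" is just a pure function from the old collection of particles to a new one:

    import java.util.List;

    record Vec(double x, double y) {
        Vec plus(Vec o) { return new Vec(x + o.x, y + o.y); }
        Vec scale(double s) { return new Vec(x * s, y * s); }
    }

    record Particle(Vec position, Vec velocity) {}

    class Simulator {
        // State -> new State: nothing is mutated, a fresh list is returned.
        static List<Particle> step(List<Particle> particles, double dt) {
            return particles.stream()
                    .map(p -> new Particle(p.position().plus(p.velocity().scale(dt)),
                                           p.velocity()))
                    .toList();
        }
    }

Calling step repeatedly, feeding each result back in as the next input, plays the role that the mutable update loop plays in the OOP version.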
{ "source": [ "https://softwareengineering.stackexchange.com/questions/244373", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/134341/" ] }
244,449
Everyone talks about legacy code in software development and I have heard the term over the last ten years used to paint any codebase as being bad. Where did this term, which has such powerful connotations for programmers, originate? I am sure there must be some book on software development that pioneered this term. I would love to locate the origin of the term "legacy code".
Legacy code is based on the phrase of a legacy system that specifically applies to code. According to Wikipedia it probably dates back to the 1970s and was in common usage in the 1980s. It took off with the tech explosion of the 1990s. This can be seen with Google's ngram viewer: legacy system,legacy code Digging into this further, you can find documented uses of the term 'legacy system' in the 1970s . The earliest example of 'legacy system' that google has is in a book on Proceedings of the Army Numerical Analysis and Computers Conference from 1978: ... well strutted and documented solution to a clearly defined problem is the legacy system operation needs to be understood and to change the existing system with confidence. There is also an example of 'legacy system' being used outside of the technology industry in Clout: Womanpower and Politics grin 1976: ... in addition, she holds a seat as the third-ranking Democrat on the powerful Banking and Currency Committee - positions of power she has built up on her own, not via the legacy system. Beyond these example which shows its use has extended beyond the pure software world, the specifics of where exactly the term originated are probably lost to the sands of time. Given the military and political references, it may have originated with them (primarily the military and its jargon migration ( "It seems likely that 'kluge' came to MIT via alumni of the many military electronics projects run in Cambridge during the war (many in MIT's venerable Building 20, which housed TMRC..." ))
{ "source": [ "https://softwareengineering.stackexchange.com/questions/244449", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/13364/" ] }
244,476
I recently noticed decoupling as a topic in a question, and want to know what it is and where it can apply. By "where can it apply", I mean: Is it only relevant where compiled languages like C and Java are involved? Should I know about / study it as a web developer?
'Coupling' is a term that describes the relationship between two entities in a software system (usually classes). When a class uses another class, or communicates with it, it's said to 'depend' on that other class, and so these classes are 'coupled'. At least one of them 'knows' about the other. The idea is that we should try to keep the coupling between classes in our systems as 'loose' as possible: hence 'loose coupling' or sometimes 'decoupling' (although in English 'decoupling' would mean 'no coupling at all', people often use it to imply 'loose coupling' between entities). So: what is loose coupling versus strong coupling in practice, and why should we make entities loosely coupled? Coupling describes the degree of dependency between one entity and another entity. Often classes or objects. When ClassA depends heavily on ClassB, the chances of ClassA being affected when ClassB is changed are high. This is strong coupling. However, if ClassA depends lightly on ClassB, then the chances of ClassA being affected in any way by a change in the code of ClassB are low. This is loose coupling, or a 'decoupled' relationship. Loose coupling is good because we don't want the components of our system to depend heavily on each other. We want to keep our system modular, where we can safely change one part without affecting the other. When two parts are loosely coupled, they are more independent of each other and are less likely to break when the other changes. For example, when building a car, you wouldn't want an internal change in the engine to break something in the steering wheel. While this would never happen by accident when building a car, similar things happen to programmers all the time. Loose coupling is meant to reduce the risk of such things happening. Strong coupling usually occurs when entity A knows too much about entity B. If entity A makes too many assumptions about how entity B operates or how it is built, then there is a high risk that a change in entity B will affect entity A. This is because one of its assumptions about entity B is now incorrect. For example, imagine that as a driver, you would make certain assumptions about how the engine of your car works. The day you buy a new car with an engine that works differently (or for some reason your engine was replaced), your previous assumptions would be incorrect. If you were code in a computer, you would now be incorrect code that doesn't work properly. However, if all the assumptions that you as a driver made about cars are that: A- they have steering wheels and B- they have brake and gas pedals, then changes in the car won't affect you, as long as your few assumptions stay correct. This is loose coupling. An important technique to achieve loose coupling is Encapsulation. The idea is that a class hides its internal details from other classes, and offers a strictly defined interface for other classes to communicate with it. So for example, if you were defining a class Car, its interface (public methods) would probably be drive() , stop() , steerLeft() , steerRight() , getSpeed() . These are the methods other objects can invoke on Car objects. All of the other details of the Car class: how the engine works, what kind of fuel it uses, etc. are hidden from other classes - to prevent them from knowing too much about Car. The moment class A knows too much about class B, we have a strongly coupled relationship, where class A is too dependent on class B and a change in class B is likely to affect class A. 
Making the system hard to expand and maintain. A relationship between two entities, where they know little about each other (only what's necessary) - is a loosely coupled, or decoupled relationship.
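To put the Car analogy into code (a small Java sketch of my own; the Driver class is invented, the method names are the ones mentioned above), the caller depends only on a narrow, well-defined interface, so the internals can change freely:

    // The only assumptions Driver makes are the ones written in this interface.
    interface Car {
        void drive();
        void stop();
        void steerLeft();
        void steerRight();
        double getSpeed();
    }

    class Driver {
        private final Car car;   // loosely coupled: any Car implementation will do

        Driver(Car car) {
            this.car = car;
        }

        void commute() {
            car.drive();
            car.stop();
        }
    }

Swapping in a different Car implementation (new engine, different fuel, different gearbox) never forces a change in Driver, as long as the interface's promises still hold - which is the loose coupling described above.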
{ "source": [ "https://softwareengineering.stackexchange.com/questions/244476", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/-1/" ] }
244,593
I am looking for a job and have applied to a number of positions. One employer responded. I had a pretty lengthy phone interview (perhaps an hour +) and they then set me up with a developer test. I was told that the test was estimated to take between 6 and 8 hours and that, provided the results met with their approval, I would be paid for my work. That gave me some pause, but I endeavored. The developer test took place on a VM accessed via RDP . The task was to implement a search page in a web project that requests data from the server, displays it on the screen in a table, has a pretty complicated search filtering scheme (there are about 15 statuses and when sending the search to the server you can search by these statuses) in addition to the string/field search. Additionally, they want SVG icons to change color on certain data values, and some data represented differently than how it's structured in the database. Loooong story short, this took a heck of a lot longer than 6-8 hours. Much of it was due to the very poor VM that I was running on (Visual Studio 2013 took 10 minutes to load, and another 15 minutes to open the 3 GB ginormous solution). I was told that after completing the test I should commit my changes to source control... Hmm, OK. I followed directions. And after committing the changes, I was emailed a response. The SVGs weren't colored right, there was a bug in this edge-case, there was an occasional problem with this other thing that I never experienced, etc. So I am 13-14 hours into this thing now, and I have to do bug fixes. I do them, and the employer comes back with more bug fix requests. All my work is apparently going into a production application. I noticed a few anomalies in the code where it looked like others had coded all of one functionality but hadn't touched anything else. Am I just being used for cheap labor? Even if they pay me the promised 50 dollars an hour for 6 hours, I have committed about 18 hours to this thing now. If I bug fix all of the stuff they keep coming up with, I will have worked at least 16 hours for free. I have taken a number of developer tests, but I have never taken one during which I worked on code destined for production. I have never taken a test where I implemented a feature that was in the pipeline for development, and I have never taken one that took 4 rounds and a total of 20+ hours. I get the impression they are using their developer test to field some of the functionality on the cheap. Do I have the wrong impression? And is this testing protocol appropriate?
I would never participate in a code test of this nature. I have taken many code tests and done many code projects. I certainly wouldn't check code into someone else's repository under any circumstance. If they don't know what they need to know after a 4 hour sample with some minor bug correction in a pair-programming session, then they won't ever know. Going into a test, you should know and make clear a few things up front: It should be agreed upon and known that any work produced during the test may not be used for any purpose other than determining your skill at the required tasks. A code test should not last more than 4 hours. You are not an employee of the company, so any suggestion that you might be paid for code produced is preposterous. Insist on a written contract of payment if there is even a hint of this. Set specific limits on the time you will spend on any given part of the test, and then stick to those limits. If you find yourself going over the limits for any reason, consider why you are going over that limit. Is it because of pressure from them? Is it because you've made mistakes? Is it because you've poorly estimated how long something should take to complete? Stand your ground if you feel you have covered a particular topic. If you've already fixed a bug, and they're asking you to fix a nearly identical bug, say "We've already covered that topic with bug x, perhaps we could move to something else that demonstrates something new." Under no circumstances should you check anything into a production pipeline. This includes into any kind of development branch that may ultimately lead to a production pipeline. When in doubt, check nothing in. For code tests that are not necessarily in person, I insist that the code be checked into my personal public repository first. This gives me at least some kind of protection from having my work used inappropriately. Judge them for their behavior every bit as much as they are judging you. If you feel they are not being up front with you, call them on it. If you feel you are being mistreated, speak up. The company you are interviewing with is also being interviewed by you. If this is how they are treating someone they are interviewing, is this a company you want to work for? I understand that often people have a need for a job and often this need will override some common sense concepts, but this should always be in the forefront of your mind. Don't be afraid to walk out. If it doesn't feel right, follow your instincts and vote with your feet.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/244593", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/28698/" ] }
244,608
Let's say that I am writing two different versions of the same software/program/app/script and storing them under version control. The first version is a free "Basic" version, while the second is a paid "Premium" version that takes the codebase of the free version and expands upon it with a few extra value-added features. Any new patches, fixes, or features need to find their way into both versions. I am currently considering using master and develop branches for the main codebase (free version) alongside master-premium and develop-premium branches for the paid version. When a change is made to the free version and merged to the master branch (after thorough testing on develop of course), it gets copied over to the develop-premium branch via the cherry-pick command for more testing and then merged into master-premium . Is this the best workflow to handle this situation? Are there any potential problems, caveats, or pitfalls to be aware of? Is there a better branching strategy than what I have already come up with? Your feedback is highly appreciated! P.S. This is for a PHP script stored in Git, but the answers should apply to any language or VCS.
Instead of having two code versions with a common base, you should design your application in a way that makes those premium features pluggable and driven by configuration rather than by different code bases. If you are afraid to ship those premium features (disabled by configuration) with the basic version, you can still remove that code in a final build/packaging step and just have two build profiles. With this design you can also ship 5 different flavors and stay very flexible, maybe even allowing third parties to contribute.
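A minimal sketch of the configuration-driven approach, written in Java for illustration (the file names, the premium.reports key and the registerPremiumReports hook are all invented; a real setup might use a DI container or build profiles instead of a raw properties file):

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Properties;

    class FeatureFlags {
        private final Properties props = new Properties();

        FeatureFlags(String path) throws IOException {
            try (FileInputStream in = new FileInputStream(path)) {
                props.load(in);
            }
        }

        boolean isEnabled(String feature) {
            return Boolean.parseBoolean(props.getProperty(feature, "false"));
        }
    }

    // basic.properties:    premium.reports=false
    // premium.properties:  premium.reports=true
    //
    // FeatureFlags flags = new FeatureFlags("premium.properties");
    // if (flags.isEnabled("premium.reports")) {
    //     registerPremiumReports();   // hypothetical hook that wires in the paid feature
    // }

The same single codebase then serves both editions; packaging decides which configuration (and, if you want, which classes) actually ships.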
{ "source": [ "https://softwareengineering.stackexchange.com/questions/244608", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/62454/" ] }
244,630
If there is a method bool DoStuff() { try { // doing stuff... return true; } catch (SomeSpecificException ex) { return false; } } should it rather be called IsStuffDone() ? Both names could be misinterpreted by the user: If the name is DoStuff() why does it return a boolean? If the name is IsStuffDone() it is not clear whether the method performs a task or only checks its result. Is there a convention for this case? Or an alternative approach, as this one is considered flawed? For example in languages that have output parameters, like C#, a boolean status variable could be passed to the method as one and the method's return type would be void . EDIT: In my particular problem exception handling cannot be directly delegated to the caller, because the method is a part of an interface implementation. Therefore, the caller can't be charged with dealing with all the exceptions of different implementations. It is not familiar with those exceptions. However, the caller can deal with a custom exception like StuffHasNotBeenDoneForSomeReasonException as was suggested in npinti's answer and comment .
In .NET, you often have pairs of methods where one of them might throw an exception ( DoStuff ), and the other returns a Boolean status and, on successful execution, the actual result via an out parameter ( TryDoStuff ). (Microsoft calls this the "Try-Parse Pattern" , since perhaps the most prominent example for it are the TryParse methods of various primitive types.) If the Try prefix is uncommon in your language, then you probably shouldn't use it.
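Java has no out parameters, so if you want the same "two flavours" shape there, a rough equivalent (just a sketch, not an official convention) is to pair a throwing method with a try-style method that returns an Optional:

    import java.util.Optional;

    class Parsing {
        // Throwing variant: a malformed string is treated as exceptional.
        static int parse(String s) {
            return Integer.parseInt(s);   // throws NumberFormatException on bad input
        }

        // "Try" variant: a malformed string is an expected, non-exceptional outcome.
        static Optional<Integer> tryParse(String s) {
            try {
                return Optional.of(Integer.parseInt(s));
            } catch (NumberFormatException e) {
                return Optional.empty();
            }
        }
    }

Callers that consider failure exceptional use parse; callers for which failure is a normal branch use tryParse and check the Optional.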
{ "source": [ "https://softwareengineering.stackexchange.com/questions/244630", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/95626/" ] }
244,684
One of the methods that I commonly use in our codebase is misspelled (and it predated me). This really irritates me, not simply because it is misspelled but more importantly because it makes me ALWAYS get the name wrong the first time I type it (and then I have to remember "Oh, right, it should be misspelled to this...") I'm making a few changes around the original method. Should I take the opportunity to just rename the freaking method?
Should I take the opportunity to just rename the freaking method? Absolutely. That said, if your code has been released as an API, you should also generally leave the misspelled method and have it forward to the correctly named method (marking it Obsolete if your language supports such things).
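In Java, for example, that forwarding shim could look something like this (the names are invented for illustration; @Deprecated is the Java counterpart of marking the member Obsolete in .NET):

    class ReportService {
        /** Correctly spelled replacement; all new code should call this. */
        public void calculateTotals() {
            // real implementation lives here now
        }

        /** @deprecated kept only so existing callers keep compiling; forwards to the new name. */
        @Deprecated
        public void calulateTotals() {   // the original, misspelled name
            calculateTotals();
        }
    }

Existing API consumers keep working, get a deprecation warning nudging them toward the new name, and you can delete the old method in a later major version.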
{ "source": [ "https://softwareengineering.stackexchange.com/questions/244684", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/81144/" ] }
244,750
To give you a little background: I work for a company with roughly twelve Ruby on Rails developers (+/- interns). Remote work is common. Our product is made out of two parts: a rather fat core, and customer projects ranging from thin to big built upon it. Customer projects usually expand the core. Overwriting of key features does not happen. I might add that the core has some rather bad parts that are in urgent need of refactoring. There are specs, but mostly for the customer projects. The worst parts of the core are untested (not as it should be...). The developers are split into two teams, working with one or two POs for each sprint. Usually, one customer project is strictly associated with one of the teams and POs. Now our problem: Rather frequently, we break each other's stuff. Someone from Team A expands or refactors the core feature Y, causing unexpected errors for one of Team B's customer projects. Mostly, the changes are not announced across the teams, so the bugs almost always hit unexpectedly. Team B, including the PO, believed feature Y to be stable and did not test it before releasing, unaware of the changes. How do we get rid of those problems? What kind of 'announcement technique' can you recommend?
I would recommend reading Working Effectively with Legacy Code by Michael C. Feathers . It explains that you really need automated tests, how you can easily add them, if you don't already have them, and what "code smells" to refactor in what way. Besides that, another core problem in your situation seems a lack of communication between the two teams. How big are these teams? Are they working on different backlogs? It's almost always bad practice to split up teams according to your architecture. E.g. a core team and a non-core team. Instead, I would create teams on functional domain, but cross-component.
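One concrete technique from that book which maps well onto your untested core is the characterization test: before anyone touches core feature Y, pin down what it currently does. A hedged sketch in Java/JUnit 5 follows - TaxCalculator, its inputs and the asserted value are placeholders, and on your Rails stack the real thing would be an RSpec spec, but the shape is identical.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class TaxCalculatorCharacterizationTest {

        @Test
        void documentsCurrentBehaviourBeforeRefactoring() {
            TaxCalculator calc = new TaxCalculator();
            // Not "what it should do" but "what it does today" - the expected value
            // is simply copied from an observed run, even if it looks odd.
            assertEquals(42.0, calc.calculate(200.0, "DK"), 0.001);
        }
    }

A net of such tests around the core is also a cheap "announcement technique": a cross-team break shows up as a red build on the change that caused it, not as a surprise in a customer project weeks later.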
{ "source": [ "https://softwareengineering.stackexchange.com/questions/244750", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/41758/" ] }
244,878
I'm implementing an IRC bot that receives a message and I'm checking that message to determine which functions to call. Is there a more clever way of doing this? It seems like it'd quickly get out of hand after I got up to like 20 commands. Perhaps there's a better way to abstract this? public void onMessage(String channel, String sender, String login, String hostname, String message){ if (message.equalsIgnoreCase(".np")){ // TODO: Use Last.fm API to find the now playing } else if (message.toLowerCase().startsWith(".register")) { cmd.registerLastNick(channel, sender, message); } else if (message.toLowerCase().startsWith("give us a countdown")) { cmd.countdown(channel, message); } else if (message.toLowerCase().startsWith("remember am routine")) { cmd.updateAmRoutine(channel, message, sender); } }
Use a dispatch table . This is a table containing pairs ("message part", pointer-to-function ). The dispatcher then will look like this (in pseudo code): for each (row in dispatchTable) { if(message.toLowerCase().startsWith(row.messagePart)) { row.theFunction(message); break; } } (the equalsIgnoreCase can be handled as a special case somewhere before, or if you have many of those tests, with a second dispatch table). Of course, what pointer-to-function has to look like depends on your programming language. Here is an example in C or C++. In Java or C# you will probably use lambda expressions for that purpose, or you simulate "pointer-to-functions" by using the command pattern. The free online book " Higher Order Perl " has a complete chapter about dispatch tables using Perl.
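Since the question is Java, here is one hedged way to write that table with lambdas (the handler signature is simplified compared to the original onMessage, and a LinkedHashMap is used so the rows keep their insertion order for prefix matching):

    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.function.BiConsumer;

    class CommandDispatcher {
        // row: "message part" -> handler(channel, message)
        private final Map<String, BiConsumer<String, String>> table = new LinkedHashMap<>();

        CommandDispatcher() {
            table.put(".np", (channel, msg) -> nowPlaying(channel));
            table.put(".register", (channel, msg) -> register(channel, msg));
            table.put("give us a countdown", (channel, msg) -> countdown(channel, msg));
        }

        void onMessage(String channel, String message) {
            String lower = message.toLowerCase();
            for (Map.Entry<String, BiConsumer<String, String>> row : table.entrySet()) {
                if (lower.startsWith(row.getKey())) {
                    row.getValue().accept(channel, message);
                    return;
                }
            }
        }

        private void nowPlaying(String channel) { /* look up Last.fm, etc. */ }
        private void register(String channel, String msg) { /* ... */ }
        private void countdown(String channel, String msg) { /* ... */ }
    }

Adding the twentieth command is then one more table.put line rather than one more else-if branch.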
{ "source": [ "https://softwareengineering.stackexchange.com/questions/244878", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/136509/" ] }
245,189
I understand the basics of what data races are, and how locks/mutexes/semaphores help prevent them. But what happens if you have a "race condition" on the lock itself? For example, two different threads, perhaps in the same application, but running on different processors, try to acquire a lock at the exact same time . What happens then? What is done to prevent that? Is it impossible, or just plain unlikely? Or is it a real race condition waiting to happen?
Is it impossible, or just plain unlikely? Impossible. It can be implemented in different ways, e.g., via Compare-and-swap, where the hardware guarantees sequential execution. It can get a bit complicated in the presence of multiple cores or even multiple sockets and needs a complicated protocol between the cores, but this is all taken care of.
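You can see the compare-and-swap guarantee from Java in a deliberately naive spinlock sketch (illustration only, not production code; Thread.onSpinWait needs Java 9+). compareAndSet is a single indivisible step, so if two threads hit it at the exact same time, exactly one of them sees the false -> true transition succeed and the other simply retries.

    import java.util.concurrent.atomic.AtomicBoolean;

    class SpinLock {
        private final AtomicBoolean locked = new AtomicBoolean(false);

        void lock() {
            // Atomically: "if the flag is currently false, set it to true and report success".
            while (!locked.compareAndSet(false, true)) {
                Thread.onSpinWait();   // lost the race, try again
            }
        }

        void unlock() {
            locked.set(false);
        }
    }

Real locks (and java.util.concurrent) are built on the same hardware primitive, plus queueing and blocking so losers don't burn CPU.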
{ "source": [ "https://softwareengineering.stackexchange.com/questions/245189", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/136829/" ] }
245,198
In order to handle several possible errors that shouldn't halt execution, I have an error variable that clients can check and use to throw exceptions. Is this an Anti-Pattern? Is there a better way to handle this? For an example of this in action you can see PHP's mysqli API. Assume that visibility problems (accessors, public and private scope, is the variable in a class or global?) are handled correctly.
If a language inherently supports exceptions, then it is preferred to throw exceptions, and the clients can catch the exception if they do not want it to result in a failure. In fact, the clients of your code expect exceptions and will run into many bugs because they will not be checking the return values. There are quite a few advantages to using exceptions if you have a choice. Messages Exceptions contain user readable error messages which can be used by the developers for debugging or even displayed to the users if so desired. If the consuming code cannot handle the exception, it can always log it so the developers can go through the logs without having to stop at every other trace to figure out what the return value was and map it in a table to figure out what the actual exception was. With return values, no additional information can be easily provided. Some languages will support making method calls to get the last error message, so this concern is allayed a bit, but that requires the caller to make extra calls and sometimes will require access to a 'special object' that carries this information. In the case of exception messages, I provide as much context as possible, such as: A policy of name "foo" could not be retrieved for user "bar", which was referenced in the user's profile. Compare this to a return code -85. Which one would you prefer? Call stacks Exceptions usually also have detailed call stacks which help debug code faster, and can also be logged by the calling code if so desired. This allows the developers to pinpoint the issue usually to the exact line, and thus is very powerful. Once again, compare this to a log file with return values (such as -85, 101, 0, etc.), which one would you prefer? Fail fast biased approach If a method is called somewhere that fails, it will throw an exception. The calling code has to either suppress the exception explicitly or it will fail. I have found this to be actually amazing because during development and testing (and even in production) the code fails quickly, forcing the developers to fix it. In the case of return values, if a check for a return value is missed, the error is silently ignored and the bug surfaces somewhere unexpected, usually with a much higher cost to debug and fix. Wrapping and Unwrapping Exceptions Exceptions can be wrapped inside other exceptions and then unwrapped if needed. For example, your code might throw ArgumentNullException which the calling code might wrap inside an UnableToRetrievePolicyException because that operation had failed in the calling code. While the user might be shown a message similar to the example I provided above, some diagnostic code might unwrap the exception and find that an ArgumentNullException had caused the issue, which means it is a coding error in your consumer's code. This could then fire an alert so the developer can fix the code. Such advanced scenarios are not easy to implement with return values. Simplicity of code This one is a bit harder to explain, but I learned it by coding with both return values and exceptions. The code that was written using return values would usually make a call and then have a series of checks on what the return value was. In some cases, it would make a call to another method, and would then have another series of checks for the return values from that method. With exceptions, the exception handling is far simpler in most if not all cases. 
You have try/catch/finally blocks, with the runtime trying its best to execute the code in the finally blocks for clean-up. Even nested try/catch/finally blocks are relatively easier to follow and maintain than nested if/else and the associated return values from multiple methods. Conclusion If the platform you are using supports exceptions (especially Java or .NET), then you should definitely assume that there is no other way except to throw exceptions, because these platforms have guidelines to throw exceptions and your clients are going to expect them. If I were using your library, I would not bother to check the return values because I expect exceptions to be thrown; that's how the world works on these platforms. However, if it were C++, then it would be a bit more challenging to determine, because a large codebase already exists with return codes and a large number of developers are tuned to return values as opposed to exceptions (e.g. Windows is rife with HRESULTs). Furthermore, in many applications, it can be a performance issue too (or at least perceived to be).
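The wrapping/unwrapping idea, expressed in Java terms as a self-contained sketch (the policy wording is borrowed from the message example above; everything else, including the stand-in lookup, is invented):

    class UnableToRetrievePolicyException extends RuntimeException {
        UnableToRetrievePolicyException(String message, Throwable cause) {
            super(message, cause);
        }
    }

    class PolicyService {
        String retrievePolicy(String name, String user) {
            try {
                if (name == null) {
                    throw new IllegalArgumentException("name must not be null");
                }
                return "policy:" + name;   // stand-in for the real lookup
            } catch (IllegalArgumentException e) {
                // Wrap the low-level failure in a domain-level exception.
                // Diagnostic code can still unwrap the original via getCause().
                throw new UnableToRetrievePolicyException(
                        "A policy of name \"" + name + "\" could not be retrieved for user \"" + user + "\"", e);
            }
        }
    }

The user sees the readable domain message; the log (or an alerting hook) still has the full cause chain and stack trace for the developers.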
{ "source": [ "https://softwareengineering.stackexchange.com/questions/245198", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/132717/" ] }
245,219
I work in C# and MSSQL and as you'd expect I store my passwords salted and hashed. When I look at the hash stored in an nvarchar column (for example the out the box aspnet membership provider). I've always been curious why the generated Salt and Hash values always seem to end in either one or two equals signs. I've seen similar things while working with encryption algorithms, is this coincidence or is there a reason for it?
These hashed strings are (usually?) encoded in the Base64 format, and the equals signs are padding: Base64 processes its input in groups of three bytes, so when the number of input bytes is not divisible by three, one or two '=' characters pad the encoded output up to a multiple of four characters. Wikipedia explains it pretty well: http://en.wikipedia.org/wiki/Base64 .
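You can see the pattern directly with java.util.Base64 (a small demo for illustration, unrelated to the ASP.NET membership provider itself):

    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    class PaddingDemo {
        public static void main(String[] args) {
            Base64.Encoder enc = Base64.getEncoder();
            // 3 input bytes -> no '=' , 2 bytes -> one '=' , 1 byte -> two '='
            for (String s : new String[] { "abc", "ab", "a" }) {
                byte[] bytes = s.getBytes(StandardCharsets.UTF_8);
                System.out.println(bytes.length + " byte(s) -> " + enc.encodeToString(bytes));
            }
            // prints: 3 byte(s) -> YWJj, 2 byte(s) -> YWI=, 1 byte(s) -> YQ==
        }
    }

Hash and salt values have fixed byte lengths that usually aren't multiples of three, which is why their Base64 form so often ends in one or two '=' characters.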
{ "source": [ "https://softwareengineering.stackexchange.com/questions/245219", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/29899/" ] }
245,255
Applications can always throw errors. If such an error occurs, the user should be notified, because what he asked the application to do has not succeeded. However, how much information should the user be given? I think most of us agree on not showing a stack trace ( Should a stack trace be in the error message presented to the user? ), but I can't find a question about the rest of the error contents or what to show to the user. For example, a language supporting exceptions (.NET, Java) has the exception type to share, where the exception occurred, and a somewhat clarifying message to go along with the exception. Should this also be hidden from the user? Or should we show this anyway? Or should we show a generic message? Or should we show one of a number of messages based on what the underlying exception is?
what to show to the user. Should this also be hidden from the user? You show the user what is actionable for them. For example, if you have an error which is caused by some null pointer exception and is more of a bug than a user error, you don't want a full explanation because they can't do anything different. Or should we show this anyway? Or should we show a generic message? Showing the exception as the primary error message content is pointless for most users . Perhaps if your target user base is developers you could show the information as the full error all the time (maybe you have an internal application for automated testing). But generally users cannot do anything different even with that knowledge. should we show one of a number of messages based on what the underlying exception is? The best strategy is to do the following: Interpret the error into text which is meaningful for the user. Part of this is "what can the user do differently?" If they can't do anything different, say something like "an unexpected error has occurred." Add an "optional" detailed error description Allow users to submit the error report (or do this automatically, depending on user base) Example It shows the "here's what happened" (unexpected error) Tells the user what to do (reopen Mail, even includes a shortcut to do this) Also has a "view details" if someone is curious to see the full technical error Provides notification that an error report is filed (see below) Note that in some cases you may wish to make the error report manual vs. automatic.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/245255", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/109489/" ] }
245,381
Does anyone have a good mental model or metaphor for functional programming which references something in the real world? Object Oriented programming intuitively makes sense to me. There are things that have properties and sometimes they also can do stuff or perform calculations on their properties (methods). (Ex: Car, Shape, Cat). I bear functional programming no ill will whatsoever and I am not interested in a debate about the virtues of the two. I just need a metaphor or mental model to work with as I have with Object Oriented programming. What are some good mental models or real world metaphors for programming in a functional paradigm? There is something about functions composed of functions processing functions which leaves one without a firm place to stand and cogitate.
Functional programming is all about gluing smaller functions together to achieve your results. A decent mental model (for me, at least) is an assembly line. Each function that gets composed is one more step in the assembly process. Consider this function here: smallest = head . sort In Haskell, this function will return the smallest element in a list. The assembly line first sorts the input, then returns the first element (assuming it's sorted least to greatest.) If we wanted to only get the smallest even value, then we can change the assembly line to look like the following: smallestEven = head . sort . filter even It's just one more step on the conveyor belt. In a nutshell, functions just describe the steps taken to convert the raw input (the parts) into the processed good (the output.)
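If it helps to see the same assembly line in a more OOP-flavoured language, here is roughly what it looks like with Java's function composition (just a sketch; it assumes a recent JDK for Stream.toList and reads more noisily than the Haskell, but the mental model is identical - each andThen is one more station on the conveyor belt):

    import java.util.List;
    import java.util.function.Function;

    class AssemblyLine {
        static final Function<List<Integer>, List<Integer>> keepEven =
                xs -> xs.stream().filter(x -> x % 2 == 0).toList();

        static final Function<List<Integer>, List<Integer>> sort =
                xs -> xs.stream().sorted().toList();

        static final Function<List<Integer>, Integer> head =
                xs -> xs.get(0);

        // smallestEven = head . sort . filter even
        static final Function<List<Integer>, Integer> smallestEven =
                keepEven.andThen(sort).andThen(head);

        public static void main(String[] args) {
            System.out.println(smallestEven.apply(List.of(7, 4, 9, 2)));   // prints 2
        }
    }
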
{ "source": [ "https://softwareengineering.stackexchange.com/questions/245381", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/26056/" ] }
245,393
In Java there are no virtual , new , override keywords for method definition. So the working of a method is easy to understand. Because if DerivedClass extends BaseClass and has a method with the same name and same signature as BaseClass, then the overriding will take place through run-time polymorphism (provided the method is not static ). BaseClass bcdc = new DerivedClass(); bcdc.doSomething() // will invoke DerivedClass's doSomething method. Now come to C#: there can be so much confusion, and it is hard to understand how new , virtual + override , or new + virtual works. I'm not able to understand why in the world I would add a method to my DerivedClass with the same name and same signature as in BaseClass and define a new behaviour, and yet at run-time polymorphism the BaseClass method will be invoked! (which is not overriding, but logically it should be). In the case of virtual + override, though the logical implementation is correct, the programmer has to think about which methods he should give the user permission to override at the time of coding. This has some pros and cons (let's not go there now). So why is there so much room in C# for illogical reasoning and confusion? So may I reframe my question as: in which real-world context should I think of using virtual + override instead of new , and of using new instead of virtual + override ? After some very good answers, especially from Omar , I get that the C# designers put more stress on programmers thinking before they make a method, which is good and handles some rookie mistakes from Java. Now I have a question in mind. Say in Java I had code like Vehicle vehicle = new Car(); vehicle.accelerate(); and later I make a new class SpaceShip derived from Vehicle . Then if I want to change all Car objects to SpaceShip objects, I just have to change a single line of code Vehicle vehicle = new SpaceShip(); vehicle.accelerate(); This will not break my logic at any point in the code. But in the case of C#, if SpaceShip does not override the Vehicle class' accelerate and uses new instead, then the logic of my code will be broken. Isn't that a disadvantage?
Since you asked why C# did it this way, it's best to ask the C# creators. Anders Hejlsberg, the lead architect for C#, answered why they chose not to go with virtual by default (as in Java) in an interview , pertinent snippets are below. Keep in mind that Java has virtual by default with the final keyword to mark a method as non-virtual. Still two concepts to learn, but many folks do not know about the final keyword or don't use it proactively. C# forces one to use virtual and new/override to consciously make those decisions. There are several reasons. One is performance . We can observe that as people write code in Java, they forget to mark their methods final. Therefore, those methods are virtual. Because they're virtual, they don't perform as well. There's just performance overhead associated with being a virtual method. That's one issue. A more important issue is versioning . There are two schools of thought about virtual methods. The academic school of thought says, "Everything should be virtual, because I might want to override it someday." The pragmatic school of thought, which comes from building real applications that run in the real world, says, "We've got to be real careful about what we make virtual." When we make something virtual in a platform, we're making an awful lot of promises about how it evolves in the future. For a non-virtual method, we promise that when you call this method, x and y will happen. When we publish a virtual method in an API, we not only promise that when you call this method, x and y will happen. We also promise that when you override this method, we will call it in this particular sequence with regard to these other ones and the state will be in this and that invariant. Every time you say virtual in an API, you are creating a call back hook. As an OS or API framework designer, you've got to be real careful about that. You don't want users overriding and hooking at any arbitrary point in an API, because you cannot necessarily make those promises. And people may not fully understand the promises they are making when they make something virtual. The interview has more discussion about how developers think about class inheritance design, and how that led to their decision. Now to the following question: I'm not able to understand why in the world I'm going to add a method in my DerivedClass with same name and same signature as BaseClass and define a new behaviour but at the run-time polymorphism, the BaseClass method will be invoked! (which is not overriding but logically it should be). This would be when a derived class wants to declare that it does not abide by the contract of the base class, but has a method with the same name. (For anyone who doesn't know the difference between new and override in C#, see this Microsoft Docs page ). A very practical scenario is this: You created an API, which has a class called Vehicle . I started using your API and derived Vehicle . Your Vehicle class did not have any method PerformEngineCheck() . In my Car class, I add a method PerformEngineCheck() . You released a new version of your API and added a PerformEngineCheck() . I cannot rename my method because my clients are dependent on my API, and it would break them. So when I recompile against your new API, C# warns me of this issue, e.g. If the base PerformEngineCheck() was not virtual : app2.cs(15,17): warning CS0108: 'Car.PerformEngineCheck()' hides inherited member 'Vehicle.PerformEngineCheck()'. Use the new keyword if hiding was intended. 
And if the base PerformEngineCheck() was virtual : app2.cs(15,17): warning CS0114: 'Car.PerformEngineCheck()' hides inherited member 'Vehicle.PerformEngineCheck()'. To make the current member override that implementation, add the override keyword. Otherwise add the new keyword. Now, I must explicitly make a decision whether my class is actually extending the base class' contract, or if it is a different contract but happens to be the same name. By making it new , I do not break my clients if the functionality of the base method was different from the derived method. Any code that referenced Vehicle will not see Car.PerformEngineCheck() called, but code that had a reference to Car will continue to see the same functionality that I had offered in PerformEngineCheck() . A similar example is when another method in the base class might be calling PerformEngineCheck() (esp. in the newer version), how does one prevent it from calling the PerformEngineCheck() of the derived class? In Java, that decision would rest with the base class, but it does not know anything about the derived class. In C#, that decision rests both on the base class (via the virtual keyword), and on the derived class (via the new and override keywords). Of course, the errors that the compiler throws also provide a useful tool for the programmers to not unexpectedly make errors (i.e. either override or provide new functionality without realizing so.) Like Anders said, real world forces us into such issues which, if we were to start from scratch, we would never want to get into. EDIT: Added an example of where new would have to be used for ensuring interface compatibility. EDIT: While going through the comments, I also came across a write-up by Eric Lippert (then one of the members of C# design committee) on other example scenarios (mentioned by Brian). PART 2: Based on updated question But in case of C# if SpaceShip does not override the Vehicle class' accelerate and use new then the logic of my code will be broken. Isn't that a disadvantage? Who decides whether SpaceShip is actually overriding the Vehicle.accelerate() or if it's different? It has to be the SpaceShip developer. So if SpaceShip developer decides that they are not keeping the contract of the base class, then your call to Vehicle.accelerate() should not go to SpaceShip.accelerate() , or should it? That is when they will mark it as new . However, if they decide that it does indeed keep the contract, then they will in fact mark it override . In either case, your code will behave correctly by calling the correct method based on the contract . How can your code decide whether SpaceShip.accelerate() is actually overriding Vehicle.accelerate() or if it is a name collision? (See my example above). However, in the case of implicit inheritance, even if SpaceShip.accelerate() did not keep the contract of Vehicle.accelerate() , the method call would still go to SpaceShip.accelerate() .
{ "source": [ "https://softwareengineering.stackexchange.com/questions/245393", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/95732/" ] }
245,455
I'm an Italian developer, but I have a good understanding of English. Sometimes, when developing an application targeted at an Italian audience, I wonder whether it is correct to use the Italian language in my code or not. By "Italian language in my code" I mean the names of methods, classes, comments, variables and so on. For example, when I write code like this: /* Attenzione: metodo esageratamente complicato */ public double calcolaImposteDeiServizi() { ... } Do I break any sacred law of programming because I didn't write that code as follows? /* Caution: overly complicated method */ public double calculateTaxesOfServices() { ... } I remember a project I worked on some time ago. It was about calculating VAT/taxes/bonuses. Some of this code was dealing with concepts that existed only in the Italian economy at that moment. I preferred to write that project using only Italian names for methods, otherwise it would clearly have become a mess to keep track of the fact that VAT was the Italian IVA and so on. Given this example, should there be some kind of rule for deciding when to use your own language in code and when not to? Did any highly authoritative programmer ever say something about this issue? How do you make this kind of decision in your projects?
This is a very good question. In general, I prefer to keep things in English, because it is more or less the de-facto standard for software development. However, I also believe in creating domain models that represent the actual business, and the domain model should be described in terms that make sense to the business stakeholders. And if the business is not natively English, then creating an English domain model violates this principle. What then happens in reality is that you, the developers, invent translations which may or may not be correct. And there are most likely terms in your business that do not translate into English. One such example is the concept of "Sygedagpenge" in Denmark. It's a system where people with a job can get public benefits if, over a long time period, they are not able to fulfill that job because of a medical condition. This word cannot be translated into any language because the system is purely Danish. Other countries probably have similar systems, but it's not the same system. So don't try to translate very country specific domain terms*. Whether you then write the entire domain model/business code in the native language of the business, or translate as much to English as possible, only keeping the untranslatable words in the native language of the business, is up to you**. But creating the entire domain model in the native language of the business will help you, the developers, to better speak the business language with the business stakeholders. Personally, though, I would keep all non-business code in English, e.g. infrastructure and GUI code. * Given that you say that you work with taxes, VAT, etc. I guess that you do have some country specific terms, as you are probably dealing with rules dictated by legislation in your country. ** In the system I am currently working on, most of the domain model is in English, but we have a few concepts that cannot be translated, so we have kept those in Danish. I think this approach is working pretty well in our case. But that doesn't mean that I would prefer that approach for the next project.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/245455", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/137097/" ] }
245,763
I recently had a job interview in which they gave me an hour to write some real code. It wasn't a huge amount, probably less than 100 lines. After about 45 minutes, I compiled, ran it, and got it to work. I may have spent 5-10 minutes working out compile errors and a couple of minor bugs, but overall it was very smooth. (Incidentally, I did get an offer from them.) However, what puzzled me was that after I handed over the completed code, the interviewer told me that the only thing I did wrong was "not compiling as I go along". I asked him what the difference is, and he said "what would you have done if you finished the code and it didn't compile in time". In my understanding that's an invalid argument, because "getting code to compile" for a given length of code generally involves fixing a constant number of compile errors and takes a fairly constant amount of time, which should be the same whether you do it after you finish writing the code, or if you interleave it with your coding time. If anything, interrupting your coding to search for missing semicolons would probably be detrimental to your efficiency. Except in extreme circumstances when I'm experimenting with obscurities around edge-cases on things like virtual functions in derived classes, etc., it seems reasonable to expect that code written by an experienced developer will compile, minus the occasional typing error, and even if it doesn't, it's not as if I would have to rewrite a portion of the code in order to fix the compile error. In another similar incident, I was given an incomplete codebase in an interview, and asked to finish it and make necessary modifications to get it running. I started by reading through the existing code, and then after a few minutes (even before I had finished looking at the code), the interviewer told me that's enough. When I asked him what he would have done (i.e. "what did I do wrong"), he told me that he would have started by immediately getting the code to compile. Why is that even relevant? In my opinion and in my experience, whether or not a piece of code compiles is essentially random, involving things like whether or not semicolons are missing, and has little to do with the correctness of the underlying program. (To me, focusing on compiling is like running an article through a spell-check without proofreading to check the grammar.) If you give me a piece of incomplete code, the first thing I do will be to read it. I won't even try to compile it until I know what the code is doing and I know the algorithm is correct. Anyway, these have been just a couple of recent incidents, but in general I've heard many developers talk about compiling their code as they go along, and yet nobody has been able to tell me the benefit of doing so. I understand the benefits of testing your code as you go along, but why compiling? So my question is this: Is there something I missed? Is there actually a benefit to compiling as you go along? Or is this some sort of myth propagated by the software community that you must compile your code frequently?
Is there actually a benefit to compiling as you go along? There is. It gives you a shorter feedback loop - which in general, when designing (UI, writing software, visual design etc...) is a good thing. A short feedback loop means you can quickly fix errors early on, before they become more expensive to fix. To borrow your example, say you were coding in a C-like language and forgot a } somewhere in the middle of the program. If you compile just after you finish writing the statement, you can be quite certain you have just introduced the compilation error and can fix it there and then, within seconds. If you don't, however, you would have to spend a good amount of time reading the code, looking for the exact position where the } should be, and making sure, once you have located the error, that the fix is indeed what was intended. This would take place a while after you left that bit of code. It wouldn't be as crystal clear as during the moment you wrote it. Now, yes, the end result is the same, but you wasted a good amount of time on syntactical issues that the compiler is there to help you with - an amount of time that could be significantly shorter if you compiled as you went along.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/245763", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/134375/" ] }
245,767
I am programming in Java, and I always make converters sort of like this: public OtherObject MyObject2OtherObject(MyObject mo){ ... Do the conversion return otherObject; } At the new workplace the pattern is: public void MyObject2OtherObject(MyObject mo, OtherObject oo){ ... Do the conversion } For me it is a little bit smelly, as I am used to not changing the incoming parameters. Is this incoming-parameter alteration an antipattern, or is it OK? Does it have any serious drawbacks?
It's not an antipattern, it's a bad practice. The difference between an antipattern and a mere bad practice is here: anti-pattern definition . The new workplace style you show is a bad practice, a vestige of pre-OOP times, according to Uncle Bob's Clean Code. Arguments are most naturally interpreted as inputs to a function. Anything that forces you to check the function signature is equivalent to a double-take. It's a cognitive break and should be avoided. In the days before object oriented programming it was sometimes necessary to have output arguments. However, much of the need for output arguments disappears in OO languages
{ "source": [ "https://softwareengineering.stackexchange.com/questions/245767", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/138451/" ] }
245,950
There is a source file in a rather large project with several functions that are extremely performance-sensitive (called millions of times per second). In fact, the previous maintainer decided to write 12 copies of a function each differing very slightly, in order to save the time that would be spent checking the conditionals in a single function. Unfortunately, this means the code is a PITA to maintain. I would like to remove all the duplicate code and write just one template. However, the language, Java, does not support templates, and I'm not sure that generics are suitable for this. My current plan is to write instead a file that generates the 12 copies of the function (a one-use-only template expander, practically). I would of course provide copious explanation for why the file must be generated programmatically. My concern is that this would lead to future maintainers' confusion, and perhaps introduce nasty bugs if they forget to regenerate the file after modifying it, or (even worse) if they modify instead the programmatically-generated file. Unfortunately, short of rewriting the whole thing in C++, I see no way to fix this. Do the benefits of this approach outweigh the disadvantages? Should I instead: Take the performance hit and use a single, maintainable function. Add explanations for why the function must be duplicated 12 times, and graciously take the maintenance burden. Attempt to use generics as templates (they probably don't work that way). Yell at the old maintainer for making code so performance-dependent on a single function. Other method to maintain performance and maintainability? P.S. Due to the poor design of the project, profiling the function is rather tricky... however, the former maintainer has convinced me that the performance hit is unacceptable. I assume by this he means more than 5%, though that is a complete guess on my part. Perhaps I should elaborate a bit. The 12 copies do a very similar task, but have minute differences. The differences are in various places throughout the function, so unfortunately there are many, many, conditional statements. There are effectively 6 "modes" of operation, and 2 "paradigms" of operation (words made up by myself). To use the function, one specifies the "mode" and "paradigm" of operation. This is never dynamic; each piece of code uses exactly one mode and paradigm. All 12 mode-paradigm pairs are used somewhere in the application. The functions are aptly named func1 to func12, with even numbers representing the second paradigm and odd numbers representing the first paradigm. I'm aware that this is just about the worst design ever if maintainability is the goal. But it seems to be "fast enough", and this code hasn't needed any changes for a while... It's also worth noting that the original function has not been deleted (although it is dead code as far as I can tell), so refactoring would be simple.
This is a very bad situation; you need to refactor this ASAP - this is technical debt at its worst - you don't even know how important the code really is, you can only speculate that it's important.

As to solutions ASAP: something that can be done is adding a custom compilation step. If you use Maven that is actually fairly simple to do, and other automated build systems are likely to cope with this as well. Write a file with a different extension than .java and add a custom step that searches your source for files like that and regenerates the actual .java. You may also want to add a huge disclaimer on the auto-generated file explaining not to modify it.

Pros vs using a once-generated file:

- Your developers will not get their changes to the .java working. If they actually run the code on their machine before committing they will find that their changes have no effect (hah). And then maybe they will read the disclaimer. You are absolutely right in not trusting your teammates and your future self with remembering that this particular file has to be changed in a different way.
- It allows automatic testing as well, as JUnit will compile your program before running tests (and regenerate the file as well).

EDIT Judging by the comments, the answer came off as if this is a way to make this work indefinitely and maybe OK to deploy to other performance-critical parts of your project. Simply put: it is not. The extra burden of creating your own mini-language, writing a code generator for it and maintaining it, not to mention teaching it to future maintainers, is hellish in the long run. The above only allows a safer way to handle the problem while you are working on a long-term solution. What that will take is beyond me.
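As a rough illustration of the custom-step idea mentioned above (all file names, placeholders and the template format below are invented for this sketch; a real setup would run it from Maven's generate-sources phase, e.g. via the exec plugin, which is not shown here):

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    // One-off generator: expands FuncTemplate.javat into Funcs.java.
    // ${N}, ${MODE} and ${PARADIGM} are made-up placeholders for this sketch.
    public class GenerateFuncs {
        public static void main(String[] args) throws IOException {
            String template = new String(
                    Files.readAllBytes(Paths.get("src/main/templates/FuncTemplate.javat")),
                    StandardCharsets.UTF_8);

            StringBuilder out = new StringBuilder();
            out.append("// GENERATED FILE - DO NOT EDIT. Edit FuncTemplate.javat instead.\n");
            out.append("public final class Funcs {\n");

            int n = 1;
            for (int mode = 1; mode <= 6; mode++) {
                for (int paradigm = 1; paradigm <= 2; paradigm++) {
                    // Odd n = first paradigm, even n = second, matching the existing naming.
                    out.append(template
                            .replace("${N}", String.valueOf(n++))
                            .replace("${MODE}", String.valueOf(mode))
                            .replace("${PARADIGM}", String.valueOf(paradigm)));
                    out.append('\n');
                }
            }
            out.append("}\n");

            Path outPath = Paths.get("target/generated-sources/Funcs.java");
            Files.createDirectories(outPath.getParent());
            Files.write(outPath, out.toString().getBytes(StandardCharsets.UTF_8));
        }
    }

Keeping the generated file under target/ (rather than committing it) makes it obvious that the template is the thing to edit.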
{ "source": [ "https://softwareengineering.stackexchange.com/questions/245950", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/138608/" ] }
246,070
According to this post, we should never rely on the finalize method being called. So why did Java include it in the programming language at all? It seems like a terrible decision to include in any programming language a function that might never get called.
Finalizers are important for the management of native resources. For example, your object might need to allocate a WidgetHandle from the operating system using a non-Java API. If you don't release that WidgetHandle when your object is GC'd, you're going to be leaking WidgetHandles.

What's important is that the "finalizer is never called" cases break down rather simply:

1. The program shuts down quickly
2. The object "lives forever" during the lifetime of the program
3. The computer turns off / your process is killed by the OS / etc

In all three of these cases, you either don't have a native leak (by virtue of the fact that your program is not running anymore), or you already have a non-native leak (if you keep allocating managed objects without them being GC'd).

The "don't rely on the finalizer being called" warning is really about not using finalizers for program logic. For example, you don't want to keep track of how many of your objects exist across all instances of your program by incrementing a counter in a file somewhere during construction and decrementing it in a finalizer -- because there's no guarantee that your objects will be finalized, this file counter will probably not ever go back to 0. This is really a special case of the more general principle that you shouldn't depend on your program terminating normally (power failures, etc).

For management of native resources, though, the cases where the finalizer doesn't run correspond to cases where you don't care if it doesn't run.
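A minimal sketch of that native-resource pattern (the NativeWidgets calls are made-up stand-ins for JNI bindings; in real code close()/AutoCloseable would be the primary release path and the finalizer purely a safety net):

    public class Widget implements AutoCloseable {
        // Handle returned by some non-Java API; allocate/release are assumed JNI bindings.
        private final long widgetHandle;
        private boolean released;

        public Widget() {
            this.widgetHandle = NativeWidgets.allocate(); // hypothetical native call
        }

        @Override
        public void close() {
            if (!released) {
                NativeWidgets.release(widgetHandle);      // hypothetical native call
                released = true;
            }
        }

        // Safety net only: never put program logic here, it may run late or not at all.
        @Override
        protected void finalize() throws Throwable {
            try {
                close();
            } finally {
                super.finalize();
            }
        }
    }

With that shape, normal code paths release the handle deterministically via try-with-resources, and the finalizer only covers the case where someone forgot to call close().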
{ "source": [ "https://softwareengineering.stackexchange.com/questions/246070", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/133064/" ] }
246,167
I'm trying to understand why the output file sizes are significantly different when using a C and a C++ compiler. I was writing a small hello world program in C and C++, and I noticed that in the C version, the size of the executable was 93.7KB, while in C++, the size of the same hello world program was 1.33MB. I am not sure why that is. I think it may be because C++ has more libraries and namespaces to use, so I removed the using namespace std line and simply used std::cout, but the result was still the same.

C

    #include <stdio.h>

    int main() {
        printf("hello world");
        return 0;
    }
    // size 93.7KB

C++

    #include <iostream>

    int main() {
        std::cout<<"Hello world";
        return 0;
    }
    // size 1.33MB

There doesn't seem to be much difference in the code above. Is there some sort of compiler difference that creates the differing file sizes?
Most of the C++ standard library, including all the streams which cout is part of, consists of inline template classes. Whenever you #include one of these inline library components, the compiler will copy and paste all that code to the source file that is including it. This will help the code to run faster, but will also add a lot of bytes to the final executable. This is likely the reason for the results you got.

Doing a similar test with the clang compiler on OSX (Apple LLVM version 5.1), using default flags, I got comparable results:

hello_cpp_cout:

    #include <iostream>

    int main() {
        std::cout << "Hello world" << std::endl;
        return 0;
    }

Size: 14,924 bytes

hello_c:

    #include <stdio.h>

    int main() {
        printf("hello world\n");
        return 0;
    }

Size: 8,456 bytes

And, as a bonus, I tried to compile a .cpp file with the exact same code as hello_c, i.e. using printf instead of cout:

hello_cpp_printf:

    #include <stdio.h>

    int main() {
        printf("hello world\n");
        return 0;
    }

Size: 8,464 bytes

As you can see, the executable size is hardly related to the language, but to the libraries you include in your project.

Update: As was noted by several comments and other replies, the choice of compiler flags will also influence the size of the compiled executable. A program compiled with debug flags will be a lot larger than one compiled with release flags, for example.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/246167", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/138830/" ] }
246,211
Suppose there are n lines for a hotline. Whenever a customer calls the hotline, the call is forwarded to one of the n lines, and I want to assign a percentage of the calls to each of the n lines. Suppose there are two lines, one line is assigned 60% and the other 40%, and the total number of calls is 10; then the first line would receive 6 calls and the second would get 4 calls. I know the percentage of calls for each line in advance, but the problem is that I don't know the number of calls that will be received in a day. How can I distribute the calls without knowing the total number of calls?
Do some bookkeeping about the already taken calls and calculate their distribution over the n lines. This gives you n percentage values (your already achieved distribution), which can be compared to the n percentages you want to achieve. Whenever a new call comes in, assign that call to the line with the highest deviation from the target value (note that as long as you don't hit the given distribution exactly, there is always a line which has too few calls so far, when compared to the target distribution).

For example, after assigning the first call to line 1:

    calls line 1 | calls line 2 | % line 1 | % line 2 | decision
    -------------+--------------+----------+----------+---------------------------------------------
          1      |      0       |   100%   |    0%    | above 60% / below 40% -> next call to line 2
          1      |      1       |    50%   |   50%    | below 60% / above 40% -> next call to line 1
          2      |      1       |    66%   |   33%    | above 60% / below 40% -> next call to line 2
          2      |      2       |    50%   |   50%    | below 60% / above 40% -> next call to line 1
          3      |      2       |    60%   |   40%    | both hit the mark: next call arbitrary
          4      |      2       |    66%   |   33%    | above 60% / below 40% -> next call to line 2
          4      |      3       |  57.1%   |  42.85%  | below 60% / above 40% -> next call to line 1
        ...

EDIT: This approach could be further improved by not using the absolute difference, but choosing the line which minimizes the sum-of-squares of all deviations. That would also give you a better result in case you reach the target values exactly.
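A direct translation of that bookkeeping into code could look roughly like this (class and method names are made up; it picks the line that is furthest below its target share):

    public class CallDistributor {
        private final double[] target;   // desired fractions, e.g. {0.6, 0.4}
        private final long[] taken;      // calls already routed to each line

        public CallDistributor(double[] target) {
            this.target = target.clone();
            this.taken = new long[target.length];
        }

        /** Returns the index of the line that should take the next call. */
        public int nextLine() {
            long total = 0;
            for (long t : taken) total += t;

            int best = 0;
            double bestDeficit = Double.NEGATIVE_INFINITY;
            for (int i = 0; i < target.length; i++) {
                double achieved = total == 0 ? 0.0 : (double) taken[i] / total;
                double deficit = target[i] - achieved;   // how far below its target this line is
                if (deficit > bestDeficit) {
                    bestDeficit = deficit;
                    best = i;
                }
            }
            taken[best]++;
            return best;
        }
    }

Usage would be something like new CallDistributor(new double[] {0.6, 0.4}) and one nextLine() call per incoming call; when the deviations are equal it simply takes the first such line, which matches the "arbitrary" case in the table above.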
{ "source": [ "https://softwareengineering.stackexchange.com/questions/246211", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/138876/" ] }
246,277
Lately I've been in the habit of "masking" Java collections with human-friendly class names. Some simple examples:

    // Facade class that makes code more readable and understandable.
    public class WidgetCache extends Map<String, Widget> {
    }

Or:

    // If you saw an ArrayList<ArrayList<?>> being passed around in the code, would you
    // run away screaming, or would you actually understand what it is and what
    // it represents?
    public class Changelist extends ArrayList<ArrayList<SomePOJO>> {
    }

A colleague pointed out to me that this is bad practice, and introduces lag/latency, as well as being an OO anti-pattern. I can understand it introducing a very tiny degree of performance overhead, but can't imagine it's at all significant. So I ask: is this good or bad to do, and why?
Lag/Latency? I call BS on that. There should be exactly zero overhead from this practice. ( Edit: It has been pointed out in the comments that this can, in fact, inhibit optimizations performed by the HotSpot VM. I don't know enough about VM implementation to confirm or deny this. I was basing my comment off of the C++ implementation of virtual functions.) There is some code overhead. You have to create all the constructors from the base class that you want, forwarding their parameters. I also don't see it as an anti-pattern, per se. However, I do see it as a missed opportunity. Instead of creating a class that derives the base class just for the sake of renaming, how about you instead create a class that contains the collection and offers a case-specific, improved interface? Should your widget cache really offer the full interface of a map? Or should it instead offer a specialized interface? Furthermore, in the case of collections, the pattern simply doesn't work together with the general rule of using interfaces, not implementations - that is, in plain collection code, you would create a HashMap<String, Widget> , and then assign it to a variable of type Map<String, Widget> . Your WidgetCache cannot extend Map<String, Widget> , because that's an interface. It cannot be an interface that extends the base interface, because HashMap<String, Widget> doesn't implement that interface, and neither does any other standard collection. And while you can make it a class that extends HashMap<String, Widget> , you then have to declare the variables as WidgetCache or Map<String, Widget> , and the first loses you the flexibility to substitute a different collection (maybe some ORM's lazy loading collection), while the second kind of defeats the point of having the class. Some of these counterpoints also apply to my proposed specialized class. These are all points to consider. It may or may not be the right choice. In either case, your colleague's offered arguments are not valid. If he thinks it's an anti-pattern, he should name it.
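To make the "specialized interface instead of renaming" suggestion concrete, a sketch could look like this (the method names and Widget.getId() are assumptions for the example, not a definitive design):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Optional;

    // Wraps a Map instead of extending it, exposing only cache-specific operations.
    public class WidgetCache {
        private final Map<String, Widget> widgetsById = new HashMap<>();

        public void put(Widget widget) {
            widgetsById.put(widget.getId(), widget);   // assumes Widget exposes an id
        }

        public Optional<Widget> findById(String id) {
            return Optional.ofNullable(widgetsById.get(id));
        }

        public void invalidate(String id) {
            widgetsById.remove(id);
        }
    }

Because callers only see the cache-specific methods, the internal HashMap could later be swapped for a different map implementation (or something with eviction) without touching any call site.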
{ "source": [ "https://softwareengineering.stackexchange.com/questions/246277", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/37136/" ] }
246,380
I am working on an MVC project that also has a mobile application, so one thing is clear: we have to provide a Web API so it can be used by the mobile application. After creating the API, when we started developing the web site we were confused and had a discussion on whether to use the API or to access the business objects directly. In the end, following the opinion of a more experienced developer, we decided to consume the Web API instead of using the business objects directly. I'm confused about this solution structure.

1) Why should we use the Web API and make an HTTP request (which is time consuming) to get or put data, instead of using the business objects directly, which are in the same solution?

2) In the discussion it was argued: what if the client wants to host the API and the web site on different cloud servers and apply scaling only to the API, or maybe he wants a different URL for accessing the API and the web site (which is somewhat logical)? So in that case, should we call the Web API from the MVC application in the same solution?

3) If we host the API and the web site separately, it means our web site will use a WebClient and make an HTTP call on each navigation. Is that right?

4) If we use the business objects from both the API and the web site, hosted on different servers, then if something changes in the business layer we will need to update the build on both servers.

5) Or should we create only one project for the API and add views or HTML pages to build the web interface, so that we can call the API directly from Ajax?

As per my knowledge, #5 is the best solution, or else the API is only for third-party access. If we have the DB, EF, data layer and business layer in the same solution, then we should not use the API to make HTTP calls and should access the business objects directly. (Correct me if I'm wrong.) An API is needed when a mobile application, a desktop application, or anyone else wants to access the application, so that we can share the same repository and data layer. In my scenario I have to create an API because we also have a mobile application; on the API side we call the business layer (a separate project) and the business layer communicates with the data access layer (another separate project). So my question is: if we host our API and web site on different servers, then calling the API, which is an HTTP request, may take longer than calling a method from the business layer directly, since it is in the same solution and we have the .dll of the business layer. In the API controller we just convert the output of our business layer to JSON format. I've searched on the internet but didn't get a convincing answer. I found a blog http://odetocode.com/blogs/scott/archive/2013/07/01/on-the-coexistence-of-asp-net-mvc-and-webapi.aspx discussing the same point, but again my question about that blog is: why do we need to consider scenario #3?

Update: We can have a separate API project and MVC project, and we can call the API from the web using JavaScript, or we can use the MVVM pattern.
Great question! I'm always looking for a better way to structure my projects.. Each point you raise has merit and having explored a variety of solution structures I have to say that I agree the majority of the comments here: there is no perfect solution. A few things to ask yourself when faced with this kind of problem: How complex is this application? With how many systems will I need to integrate -- or how many systems will need to integrate with this system? How much testing do I plan on doing? Is there a separate design/UI team? Will we need to scale? What constitutes a session? Let's look at a couple of scenarios and ways to use a little clever engineering to make things really bang (and some tricks to make things a bit easier).. Hosting Both API and Website in the Same Project In this case, you may have a single solution with zero or more business layer projects and a single hybrid MVC/WebAPI project (as well as other projects - utility, etc). Pro's Everything is in one place.. No need to shoe-horn in complicated messaging (HttpClient calls), you can have shared session state (client and server via cookies, InProc/OutOfProc session, etc), connection pooling, shared logic, etc. Deployment could not be more simple. Con's Everything is in one place.. This is probably the most monolithic structure possible. There are no clearly defined interfaces between your layers.. You end up with high-cohesion . Lazy developers will avoid interfaces when dealing with this type of architecture which makes testing a huge pain. Scaling/co-locating the application will be difficult. Uses I would use this project structure for a one-off, internal, or simple application. Building a quick system for tracking basketball camp sign-up at the local Y? This is your architecture! WebAPI and Website in Different Projects I tend to prefer this case.. You have a single solution with one (or more) MVC project(s) and one WebAPI project. Pro's Modularization! Loose Coupling! Each project can stand alone, be tested separately, and can be managed differently. This allows you to more easily implement different caching strategies depending on your needs. By keeping solid boundaries between your different systems, you can more easily establish contracts which allow you to enforce specific usage patterns and cut-down on possible friction (read: fewer bugs with less opportunity to abuse the API). Scaling is a bit easier as you only need to scale the bits that are seeing high load. Integration becomes a bit easier to handle as well because you will need to have an idea about what your API is going to look like from the start. Con's Maintenance is a bit more difficult. Multiple projects means you will need project/feature owners to keep track of merges, contracts (interfaces), deployments, etc. Code upkeep, technical debt , error tracking, state management -- all become concerns as they might need to be implemented differently based upon your needs. These kinds of applications also require the most planning and curating as they grow. Uses Building an application that could have 100 users today and 100,000 next week/month? Does the application have to send notifications, manage complex workflows, and have multiple interfaces (web + mobile app + SharePoint)? Have lots of time on your hands and love solving 5000+ piece puzzles over the weekend? This is the architecture for you! Tips Having outlined the above, I can understand how your next project might look a bit daunting. 
No worries, here are a few tricks I've learned over the years.. Try to use stateless sessions. On smaller systems, this might mean storing an encrypted cookie containing at least the current user's internal id and a timeout. Larger systems might mean storing an encrypted cookie with a simple session id which might be fetched from a datastore (redis, table storage, DHT , etc).. If you can store enough information so that you don't have to hit the main database on every request then you will be in a good place - but try to keep cookies under 1k. Be aware that there will probably be more than one model. Try to think in terms of models and projections (the links I found here were.. not good.. think: one man's inventory item is another man's order line item - same basic underlying structure, but different views). Some projects have a different model for each logical/conceptual boundary (i.e. using a specific model for communcation with a specific API. API's Everywhere! Anytime an object/class/structure exposes any data or behavior, you are establishing an API. Be mindful of how other entities or dependencies will be using this API. Think about how you might test this API. Consider what might be talking to this API (other objects via code? Other systems via a protocol?) and how that data is exposed (strongly typed? JSON? * cough * XML?). Build for what you have, not what you imagine that you will have two years from now. Another answer references YAGNI - they're absolutely correct! Solving imaginary problems makes your deadline imaginary. Set solid goals for your iterations and meet them. Deploy! A project in development is a project with only one user - you! YMMV (Your Mileage May Vary). There is only one absolute here: there is a problem, you are building a solution. Everything else is completely up in the air. Both solutions above can be made a wild success -- and a sucking failure. It is all up to you, your tools, and how you use them. Tread lightly, fellow developer!
{ "source": [ "https://softwareengineering.stackexchange.com/questions/246380", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/139047/" ] }
246,718
Abstract: So, as I understand it (although I have a very limited understanding), there are three dimensions that we (usually) work with physically:

- The 1st would be represented by a line.
- The 2nd would be represented by a square.
- The 3rd would be represented by a cube.

Simple enough until we get to the 4th -- it is kinda hard to draw in a 3D space, if you know what I mean... Some people say that it has something to do with time.

The Question: Now, though that doesn't all make much sense, that is all great with me. My question isn't about this, or I'd be asking it on MathSO or PhysicsSO. My question is: How does the computer handle this with arrays? I know that you can create 4D, 5D, 6D, etc... arrays in many different programming languages, but I want to know how that works.
Fortunately, programs aren't limited by the physical constraints of the real world. Arrays aren't stored in physical space, so the number of dimensions of the array doesn't matter. They are flattened out into linear memory. For example, a single dimensional array with two elements might be laid out as: (0) (1) A 2x2 dimensional array might then be: (0,0) (0,1) (1,0) (1,1) A three dimensional 2x2x2 array might be: (0,0,0) (0,0,1) (0,1,0) (0,1,1) (1,0,0) (1,0,1) (1,1,0) (1,1,1) You can hopefully see where this is going. Four dimensions might be: (0,0,0,0) (0,0,0,1) (0,0,1,0) (0,0,1,1) (0,1,0,0) (0,1,0,1) (0,1,1,0) (0,1,1,1) (1,0,0,0) (1,0,0,1) (1,0,1,0) (1,0,1,1) (1,1,0,0) (1,1,0,1) (1,1,1,0) (1,1,1,1)
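To make the flattening explicit, here is a small illustrative sketch that stores a 4-dimensional array in one flat block and computes each offset the way row-major storage works (note, as an aside, that Java's own int[][][][] is really implemented as nested arrays of references rather than one flat block; the flat layout below is what the tuples above describe and what languages like C use):

    public class Flat4D {
        private final int d1, d2, d3, d4;
        private final double[] data;

        public Flat4D(int d1, int d2, int d3, int d4) {
            this.d1 = d1; this.d2 = d2; this.d3 = d3; this.d4 = d4;
            this.data = new double[d1 * d2 * d3 * d4];
        }

        // Row-major offset: the last index varies fastest, just like the tuples above.
        private int offset(int i, int j, int k, int l) {
            return ((i * d2 + j) * d3 + k) * d4 + l;
        }

        public double get(int i, int j, int k, int l) {
            return data[offset(i, j, k, l)];
        }

        public void set(int i, int j, int k, int l, double value) {
            data[offset(i, j, k, l)] = value;
        }
    }

For a 2x2x2x2 array, offset(1,1,1,1) evaluates to 15, the last of the 16 slots, which matches the last tuple (1,1,1,1) in the listing above.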
{ "source": [ "https://softwareengineering.stackexchange.com/questions/246718", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/139421/" ] }