Columns: source_id (int64, 1 to 4.64M), question (string, 0 to 28.4k chars), response (string, 0 to 28.8k chars), metadata (dict)
171,201
I'm about to start writing a process for saving some data structure from code in to a file of some proprietary, as-yet-undefined type. However, I've never designed a file type or structure before. Are there any things, generally speaking, that I should consider before starting my design? Are there any accepted good practices here? Bad practices I should avoid? Any absolute do's and don'ts?
First, try to find a format that is close enough to what you are about to build. In general, it is better to use someone else's format than to invent your own, even if the format appears to be slightly more complex than what you need [1]. If you cannot find a suitable ready-made format, see if you can build your own on top of an existing general-purpose format, such as XML or Binary XML. This should be possible in nearly all cases when you are about to start a new file format. Text-based XML takes more space, but gives humans some measure of readability. However, if you find yourself using Base-64 encoding inside an XML file, that's a clear indication that you should have used a binary encoding instead.

As far as good and bad practices go, make sure that you do not "bake in" the hardware features of your initial target platform into the design of your file format. Specifically, make sure that your numbers are stored in a format that can be read correctly on platforms whose endianness differs from that of the writer, and that your user-facing strings are stored in Unicode.

Another good practice is to include a header from which it is possible to determine the type of your file in case its extension is missing or incorrect. It is a good idea to include a version of your file format in the header. This lets you change the format later and stay backward-compatible.

If possible, do not make your format dependent on the specifics of the default serialization mechanism built into your platform. For example, binary-serialized Java objects do not make a good file format [2].

Finally, decide if your files need to be streamable. This introduces additional complexity, because one should be able to interpret individual "frames" of your file in isolation. In cases where you need streamability, however, you should almost always be able to locate a suitable file format that already exists.

[1] On the other hand, you should avoid formats that require extraordinary effort to support the complexity that your application requires.

[2] This does not mean, however, that you should not attempt to custom-integrate reading and writing of your new format with the serialization scheme of your platform, only that you should not rely on the default mechanisms of serialization.
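To make the header, versioning, and endianness advice concrete, here is a minimal sketch in Python; the magic bytes and field layout are invented for illustration. The header carries a signature, a format version, and an explicitly little-endian payload length, so readers on any platform interpret the bytes the same way:

import struct

MAGIC = b"MYFT"                  # hypothetical 4-byte file signature
VERSION = 1                      # bump when the layout changes
HEADER = struct.Struct("<4sHI")  # little-endian: magic, version (uint16), payload length (uint32)

def write_file(path, payload: bytes):
    with open(path, "wb") as f:
        f.write(HEADER.pack(MAGIC, VERSION, len(payload)))
        f.write(payload)

def read_file(path) -> bytes:
    with open(path, "rb") as f:
        magic, version, length = HEADER.unpack(f.read(HEADER.size))
        if magic != MAGIC:
            raise ValueError("not a MYFT file")
        if version > VERSION:
            raise ValueError(f"unsupported format version {version}")
        return f.read(length)

Because the header is read before anything else, a future version 2 reader can branch on the version field and still open version 1 files.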
{ "source": [ "https://softwareengineering.stackexchange.com/questions/171201", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/31673/" ] }
171,203
I've seen questions (mainly on Stack Overflow) which lack this basic knowledge. The point of this question is to provide good information for those seeking it, and for those who reference it. In the context of web programming, what are the differences between server-side programming and client-side programming? Which languages belong to which, and when do you use each of them?
Background

Web development is all about communication. In this case, communication between two (2) parties, over the HTTP protocol:

- The Server: this party is responsible for serving pages.
- The Client: this party requests pages from the Server and displays them to the user. In most cases, the client is a web browser.
- The User: the user uses the Client in order to surf the web, fill in forms, watch videos online, etc.

Each side's programming refers to code which runs on the specific machine, the server's or the client's.

Basic Example

1. The User opens his web browser (the Client).
2. The User browses to http://google.com.
3. The Client (on behalf of the User) sends a request to http://google.com (the Server) for its home page.
4. The Server acknowledges the request and replies to the Client with some meta-data (called headers), followed by the page's source.
5. The Client receives the page's source and renders it into a human-viewable website.
6. The User types "Stack Overflow" into the search bar and presses Enter.
7. The Client submits that data to the Server.
8. The Server processes that data and replies with a page matching the search results.
9. The Client, once again, renders that page for the User to view.

Server-side programming

Server-side programming is the general name for the kinds of programs which are run on the Server.

Uses:
- Process user input.
- Compile pages.
- Structure web applications.
- Interact with permanent storage (SQL, files).

Example languages:
- PHP
- Python
- ASP.Net in C#, C++, or Visual Basic.
- Nearly any language (C++, C#, Java). These were not designed specifically for the task, but are now often used for application-level web services.

Client-side programming

Much like the server side, client-side programming is the name for all of the programs which are run on the Client.

Uses:
- Make interactive webpages.
- Make stuff happen dynamically on the web page.
- Interact with temporary storage and local storage (cookies, localStorage).
- Send requests to the server and retrieve data from it.
- Provide a remote service for client-side applications, such as software registration, content delivery, or remote multi-player gaming.

Example languages:
- JavaScript (primarily)
- HTML*
- CSS*
- Any language running on a client device that interacts with a remote service is a client-side language.

*HTML and CSS aren't really "programming languages" per se. They are markup syntax by which the Client renders the page for the User.
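As a small, hedged sketch of the server side of this exchange in Python (one of the server-side languages listed above; the handler name and port are arbitrary choices for the example): the server receives a request, processes the user's query string, and replies with headers followed by a page for the Client to render.

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class SearchHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Process user input taken from the query string, e.g. /search?q=Stack+Overflow
        query = parse_qs(urlparse(self.path).query).get("q", [""])[0]
        body = f"<html><body><h1>Results for: {query}</h1></body></html>".encode()

        # Reply with headers, then the page's source (which the Client will render)
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8000), SearchHandler).serve_forever()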
{ "source": [ "https://softwareengineering.stackexchange.com/questions/171203", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/38223/" ] }
171,211
I need to create a web service that executes every hour. It will be used to review data in a database and add alerts to a table in the same database if certain conditions are met/not met. What we currently have is: We have end devices that use Python to report to an Amazon Web Services (AWS) virtual server. The AWS server takes that information and stores it in a MySQL database. The AWS server is Linux running Django and Apache. I need to be able to have some python code run every hour that verifies the data that has been stored by the end devices. If certain conditions are not met then a record will be added to the alerts table in the database. We originally contracted to have the above setup created. I am new to Python, Django, and Apache. However, I have already made several changes to the Python code that sends and also receives the data from the end devices. I am a coder that is breaking into web programming. Does anyone have any recommendations on how I can do this?
How about making a cronjob, assuming you have shell access? The cron daemon exists on virtually any UNIX-like system and schedules commands to run based on a description in a file called the crontab. Each line of the file contains a set of fields to indicate the times when a command shall be executed. Your task could be either a standalone program that does what you wish to accomplish or, as another answer suggests, an invocation of an HTTP client like wget, curl, or fetch to access a web resource that will perform the action. If you've got limits for how long a request may take to serve, you might have to move the task into an offline script or program that doesn't run inside your web framework/server.
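For instance, a minimal sketch of the standalone-script route (the paths, table names, and alert condition here are assumptions, not the asker's actual schema) pairs an hourly crontab entry with a small Python script that checks the database and inserts alert rows:

# Hypothetical crontab entry (edit with `crontab -e`), running at minute 0 of every hour:
#   0 * * * * /usr/bin/python3 /opt/myproject/check_alerts.py
import sqlite3  # stand-in for the real MySQL driver in this sketch

def run_checks():
    conn = sqlite3.connect("/opt/myproject/devices.db")  # assumed path
    try:
        cur = conn.execute(
            "SELECT device_id FROM readings WHERE battery_level < 10"  # hypothetical condition
        )
        for (device_id,) in cur.fetchall():
            # Condition not met: record an alert in the alerts table
            conn.execute(
                "INSERT INTO alerts (device_id, message) VALUES (?, ?)",
                (device_id, "battery below threshold"),
            )
        conn.commit()
    finally:
        conn.close()

if __name__ == "__main__":
    run_checks()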
{ "source": [ "https://softwareengineering.stackexchange.com/questions/171211", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/54239/" ] }
171,216
In Java (and many other programming languages), there are often structures to deal with graphic elements: Colour, Shape, etc. Those are most often in a UI toolkit and thus have a relatively strong coupling with UI elements. Now, in the domain of my application, we often deal with colour, shape, etc., to display statistical information on an element. Right now all we do with them is display/save those elements with little or no behaviour. Would it make sense to avoid "reinventing the wheel" and directly use the structures in java.awt.*, or should I make my own elements and avoid a coupling to this toolkit? It's not like those elements are going away anytime soon (they are part of the core Java library after all), but at the same time it feels weird to import java.awt.* server side. I have no problem using java.util.List everywhere. Should I feel differently about those classes? What would be the "recommended" practice in that case?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/171216", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/69624/" ] }
171,253
Part 1

Clearly immutability minimizes the need for locks in multi-processor programming, but does it eliminate that need, or are there instances where immutability alone is not enough? It seems to me that you can only defer processing and encapsulate state so long before most programs have to actually DO something (update a data store, produce a report, throw an exception, etc.). Can such actions always be done without locks? Does the mere action of throwing out each object and creating a new one instead of changing the original (a crude view of immutability) provide absolute protection from inter-process contention, or are there corner cases which still require locking?

I know a lot of functional programmers and mathematicians like to talk about "no side effects", but in the "real world" everything has a side effect, even if it's the time it takes to execute a machine instruction. I'm interested in both the theoretical/academic answer and the practical/real-world answer. If immutability is safe, given certain bounds or assumptions, I want to know what the borders of the "safety zone" are exactly. Some examples of possible boundaries:

- I/O
- Exceptions/errors
- Interactions with programs written in other languages
- Interactions with other machines (physical, virtual, or theoretical)

Special thanks to @JimmaHoffa for his comment which started this question!

Part 2

Multi-processor programming is often used as an optimization technique, to make some code run faster. When is it faster to use locks vs. immutable objects? Given the limits set out in Amdahl's Law, when can you achieve better overall performance (with or without the garbage collector taken into account) with immutable objects vs. locking mutable ones?

Summary

I'm combining these two questions into one to try to get at where the bounding box is for immutability as a solution to threading problems.
This is an oddly phrased question that is really, really broad if answered fully. I'm going to focus on clearing up some of the specifics that you're asking about.

Immutability is a design trade off. It makes some operations harder (modifying state in large objects quickly, building objects piecemeal, keeping a running state, etc.) in favor of others (easier debugging, easier reasoning about program behavior, not having to worry about things changing underneath you when working concurrently, etc.). It's this last one we care about with this question, but I want to emphasize that it is a tool. A good tool that often solves more problems than it causes (in most modern programs), but not a silver bullet... Not something that changes the intrinsic behavior of programs.

Now, what does it get you? Immutability gets you one thing: you can read the immutable object freely, without worrying about its state changing underneath you (assuming it is truly deeply immutable... Having an immutable object with mutable members is usually a deal breaker). That's it. It frees you from having to manage concurrency (via locks, snapshots, data partitioning or other mechanisms; the original question's focus on locks is... Incorrect given the scope of the question).

It turns out though that lots of things read objects. IO does, but IO itself tends to not handle concurrent use itself well. Almost all processing does, but other objects may be mutable, or the processing itself might use state that is not friendly to concurrency. Copying an object is a big hidden trouble point in some languages since a full copy is (almost) never an atomic operation. This is where immutable objects help you.

As for performance, it depends on your app. Locks are (usually) heavy. Other concurrency management mechanisms are faster but have a high impact on your design. In general, a highly concurrent design that makes use of immutable objects (and avoids their weaknesses) will perform better than a highly concurrent design that locks mutable objects. If your program is lightly concurrent then it depends and/or doesn't matter.

But performance should not be your highest concern. Writing concurrent programs is hard. Debugging concurrent programs is hard. Immutable objects help improve your program's quality by eliminating opportunities for error implementing concurrency management manually. They make debugging easier because you're not trying to track state in a concurrent program. They make your design simpler and thus remove bugs there.

So to sum up: immutability helps but will not eliminate challenges needed to handle concurrency properly. That help tends to be pervasive, but the biggest gains are from a quality perspective rather than performance. And no, immutability does not magically excuse you from managing concurrency in your app, sorry.
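A tiny, hedged sketch of the "replace rather than mutate" idea in Python (class and field names are invented; this illustrates the shape of the pattern, not any language's memory model): readers that hold a reference to a deeply immutable snapshot can keep using it, while "mutation" publishes a new object instead of changing the one being read.

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Config:
    timeout_s: int
    retries: int

current = Config(timeout_s=30, retries=3)

def reader(snapshot: Config) -> None:
    # Works on its own snapshot; nothing can change underneath it.
    print(snapshot.timeout_s, snapshot.retries)

# "Mutation" produces a new object instead of changing the original.
current = replace(current, retries=5)
reader(current)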
{ "source": [ "https://softwareengineering.stackexchange.com/questions/171253", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/62323/" ] }
171,296
This is not about @staticmethod and @classmethod! I know how staticmethod works. What I want to know is the proper use cases for @staticmethod vs. a module-level function. I've googled this question, and it seems there's some general agreement that module-level functions are preferred over static methods because they're more pythonic. Static methods have the advantage of being bound to their class, which may make sense if only that class uses them. However, in Python functionality is usually organized by module, not class, so usually making it a module function makes sense too. Static methods can also be overridden by subclasses, which is an advantage or disadvantage depending on how you look at it. Although static methods are usually "functionally pure", so overriding them may not be smart, it may be convenient sometimes (though this may be one of those "convenient, but NEVER DO IT" kinds of things only experience can teach you). Are there any general rules of thumb for using either staticmethod or module-level functions? What concrete advantages or disadvantages do they have (e.g. future extension, external extension, readability)? If possible, also provide a case example.
Technically a static method and a module function behave pretty much identically: in both cases they work like standard functions, the difference being the namespace where they are placed. So the decision is more one of maintainability/readability. Generally I would use a static method if some or all of these criteria are met:

- There is already an existing class with normal methods.
- The function relates to objects of the class generally, but not one specific instance.
- You want to be able to call the method as if it's a non-static method, possibly because you may want to make the method non-static in the future without breaking client code.
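A small sketch of the namespace difference and the call sites (the module, class, and function names here are invented for illustration):

# geometry.py

def area_of_square(side: float) -> float:
    """Module-level function: lives in the module namespace."""
    return side * side

class Circle:
    def __init__(self, radius: float):
        self.radius = radius

    @staticmethod
    def area_of(radius: float) -> float:
        """Static method: grouped with the class it relates to."""
        return 3.14159 * radius * radius

# Usage:
# import geometry
# geometry.area_of_square(2.0)        # module namespace
# geometry.Circle.area_of(1.0)        # class namespace
# geometry.Circle(1.0).area_of(1.0)   # also callable on an instance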
{ "source": [ "https://softwareengineering.stackexchange.com/questions/171296", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/69944/" ] }
171,332
Is there a reason why the source code of software mentioned in research papers is not released? I understand that research papers are more about the general idea of accomplishing something than about implementation details, but I don't get why they don't release the code. For example, this paper ends with:

Results: The human line drawing system is implemented through the Qt framework in C++ using OpenGL, and runs on a 2.00 GHz Intel dual core processor workstation without any additional hardware assistance. We can interactively draw lines while the system synthesizes the new path and texture.

Do they keep the source code closed intentionally because they intend to monetize it, or because of copyright?
Several reasons come to mind.

- Code is too big for the article. For a short period of time, interesting projects were short enough to be published with the paper that described them. This can still happen, but many projects of sufficiently large size to be interesting have grown too big to be published with the papers that describe them.
- Public hosts not free or durable. Until recently, cheap, durable, easy-to-access public hosts were not available.
- Publishing a paper is easier than publishing a project. Some people have time to publish a paper or a project, but not both.
- Incentives tied to role. Many years ago I asked a colleague about product development and patents and got the word that most people there pretty much did one or the other. As with paper writers (think academia) and open source developers, rewards are geared toward one work product or the other.
- Self motivation. The desire to describe ideas or to implement code is not always present in equal parts in the same person. Many of my professors openly admitted that they either never coded very much, or were many years away from having coded fluently. Similarly, many developers barely want to write comments in their code or when they commit to source control.
- Durability of project hosting and work product is also an issue. Who wants to link somewhere that might be gone a few years from now and, as a result, diminish the value of the paper?
- Tradition. Publishers are oriented toward reviewing and publishing papers, but might not be ready to take on the same evaluation for projects. Also, traditional views on what is a sensible level of reproducibility vary among fields. A chemist publishing a paper about a new synthesis method is expected to write down enough detail for another chemist to perform the synthesis. She'd not be expected to ship the educts and product to the journal. Readers who want to use/reproduce the paper are expected to buy their own educts and do the synthesis themselves in their lab (though they may ask to come and visit the lab to see how it is done in practice). Neither would a biologist be expected to attach his new transgenic mice to the paper. This view on reproducibility corresponds to, e.g., giving a (pseudo-code) description of the algorithm as opposed to shipping the actual implementation.
- Naked code can be shocking. It takes a lot less polishing to proof-read a paper-length document than to code inspect, code review, and quality assure a project. I have a lot of code I would be more comfortable telling you about than showing you. Hopefully things are moving forward to a point where we will all write beautiful code, but if your code was rushed, barely works, or doesn't completely work, you might be more comfortable not sharing the executables or the source.
- Closed source. Not everyone has embraced open source. Many papers are written about work for DoD, commercial projects, or privately funded projects where there are benefits from exposure of the project to the public, but there are still trade secrets or first-to-market advantages that could be eroded by open sourcing the code or other work products.
- Publish further work based on this code. If the code is not published it may give the author an advantage in publishing follow-up work. Other competing researchers may need to reimplement the work, which may take precious time.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/171332", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/59263/" ] }
171,457
What is the point of using a DTO, and is it an outdated concept? I use POJOs in the view layer to transfer and persist data. Can these POJOs be considered an alternative to DTOs?
DTO is a pattern and it is implementation (POJO/POCO) independent. DTO says: since each call to any remote interface is expensive, the response to each call should bring as much data as possible. So, if multiple requests are required to bring data for a particular task, the data to be brought can be combined in a DTO so that only one request can bring all the required data. The Catalog of Patterns of Enterprise Application Architecture has more details. DTOs are a fundamental concept, not outdated.
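A minimal sketch of the idea (class and field names are invented): instead of making separate remote calls for a customer, their orders, and their balance, the server assembles one DTO so a single round trip carries everything the caller needs.

from dataclasses import dataclass
from typing import List

@dataclass
class OrderLineDTO:
    product: str
    quantity: int

@dataclass
class CustomerSummaryDTO:
    """One response object combining data that would otherwise need several calls."""
    customer_id: int
    name: str
    balance: float
    recent_orders: List[OrderLineDTO]

def get_customer_summary(customer_id: int) -> CustomerSummaryDTO:
    # In a real service these values would come from repositories/DB queries.
    return CustomerSummaryDTO(
        customer_id=customer_id,
        name="Example Customer",
        balance=120.50,
        recent_orders=[OrderLineDTO("widget", 2)],
    )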
{ "source": [ "https://softwareengineering.stackexchange.com/questions/171457", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/17887/" ] }
171,481
I'm having some problems debugging an encoded JavaScript. The script I'm referring to is given in this link over here. The encoding here is simple and works by shifting the Unicode values by whatever codekey was used during encoding. The code that does the decoding is given below:

<script language="javascript">
function dF(s){
    var s1=unescape(s.substr(0,s.length-1));
    var t='';
    for(i=0;i<s1.length;i++)
        t+=String.fromCharCode(s1.charCodeAt(i)-s.substr(s.length-1,1));
    document.write(unescape(t));
}
</script>

I'm interested in knowing or understanding the values (e.g. s1, t). For example, when the value of i=0, what values would the following hold: s1.charCodeAt(i) and s.substr(s.length-1,1)?

The reason I'm doing this is to understand how the codekey really works. I don't see anything in the code above which tells it to decode on the basis of a codekey value. The only thing I can point to in the encoded text is the last character, which is set to 1, 2, 3 or 4 depending upon the codekey selected during encoding. One can verify this using the link I have given above. To debug, I'm using the Firebug addon with the script running as localhost on my WAMP server. I'm able to put a breakpoint on the JS using Firebug, but I'm unable to retrieve any of the user-defined parameters or functions I mentioned above. I want to know, in this context, what would be the best way to debug this encoded JS.

EDIT

@blueberryfields: Thanks for the neat code review. However, to clarify, this is no homework; it's something I picked up from a website about encoding and JavaScript. The material just looked interesting and I decided to give it a go. I don't see the point of using the intermediate variables, as I was hoping to make use of those already defined (s1, t, i). Usually these variable types are seen in Firebug way too often, like the enumerable types. Besides, using a good breakpoint at the right place I can always step over these values in the loop. I changed my focus as someone on Stack Exchange told me to use Dragonfly (Opera), and as I did I was able to retrieve the variables and their values with the breakpoint statement. For other values I just did document.write to get the desired results. Here is the link to the screenshot: link. I was more interested in understanding the part of the code that actually tells the program to shift back the Unicode character based upon the codekey value. That part is s.substr(s.length-1,1): it just extracts the last character, which is the codekey number, and then uses it in calculating the matching charcode value. If you unescape this shift-1 code

%264DTDSJQU%2631MBOHVBHF%264E%2633kbwbtdsjqu%2633%264F%261Bbmfsu%2639%2638Ifmmp%2631Xpsme%2638%263%3A%264C%261B%264D0TDSJQU%264F%261B%261%3A%261%3A%261%3A1

you would get

&4DTDSJQU&31MBOHVBHF&4E&33kbwbtdsjqu&33&4F&1Bbmfsu&39&38Ifmmp&31Xpsme&38&3:&4C&1B&4D0TDSJQU&4F&1B&1:&1:&1:1

Although those last chars are not required, they were intentionally added so they help in decoding.
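A rough Python sketch of the same decoding steps can help when tracing the values by hand; note that Python's unquote is only an approximate stand-in for JavaScript's unescape here, so this is for understanding, not a drop-in replacement:

from urllib.parse import unquote

def decode(s: str) -> str:
    """Mirror of dF(s): strip the trailing key digit, unescape, shift each char back."""
    key = int(s[-1])                  # same role as s.substr(s.length-1, 1)
    s1 = unquote(s[:-1])              # same role as unescape(s.substr(0, s.length-1))
    t = "".join(chr(ord(c) - key) for c in s1)   # charCodeAt(i) - key
    return unquote(t)                 # the final unescape(t)

For the shift-1 sample above, the first unescape turns %264D... into &4D..., shifting each character back by 1 turns &4D into %3C, and the second unescape turns %3C into <, recovering the original markup.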
{ "source": [ "https://softwareengineering.stackexchange.com/questions/171481", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/70066/" ] }
171,529
I've been studying recursive functions, and apparently they're functions that call themselves and don't use iterations/loops (otherwise it wouldn't be a recursive function). However, while surfing the web for examples (the recursive 8-queens problem), I found this function:

private boolean placeQueen(int row, int[] queens, int n) {
    boolean result = false;
    if (row < n) {
        while ((queens[row] < n - 1) && !result) {
            queens[row]++;
            if (verify(row, queens, n)) {
                result = placeQueen(row + 1, queens, n);
            }
        }
        if (!result) {
            queens[row] = -1;
        }
    } else {
        result = true;
    }
    return result;
}

There is a while loop involved... so I'm a bit lost now. Can I use loops or not?
You misunderstood recursion: although it can be used to replace iteration, there is absolutely no requirement for the recursive function not to have iterations internal to itself. The only requirement for a function to be considered recursive is the existence of a code path through which it calls itself, directly or indirectly. All correct recursive functions also have a conditional of some sort, preventing them from "recursing down" forever. Your recursive function is ideal to illustrate the structure of recursive search with backtracking. It starts with the check of the exit condition row < n , and proceeds to making search decisions on its level of recursion (i.e. picking a possible position for queen number row ). After each iteration, a recursive call is made to build upon the configuration that the function has found so far; eventually, it "bottoms out" when row reaches n in the recursive call that is n levels deep.
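As an additional, hedged sketch of the same structure in Python (not the code from the question): the function recurses once per row while iterating over candidate columns inside each call, which is exactly the "loop inside a recursive function" pattern described above.

def place_queens(row, cols, n):
    """Return a list of column choices per row, or None if no placement exists."""
    if row == n:                      # exit condition: every row has a queen
        return cols
    for col in range(n):              # iteration inside the recursive call
        if all(c != col and abs(c - col) != row - r for r, c in enumerate(cols)):
            solution = place_queens(row + 1, cols + [col], n)  # recurse on the next row
            if solution is not None:
                return solution
    return None                       # backtrack: no column worked for this row

print(place_queens(0, [], 8))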
{ "source": [ "https://softwareengineering.stackexchange.com/questions/171529", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/13833/" ] }
171,536
Possible Duplicate: I’m a Subversion geek, why should I consider or not consider Mercurial or Git or any other DVCS? Every once in a while, you hear someone saying that distributed version control (Git, HG) is inherently better than centralized version control (like SVN) because merging is difficult and painful in SVN. The thing is, I've never had any trouble with merging in SVN, and since you only ever hear that claim being made by DVCS advocates, and not by actual SVN users, it tends to remind me of those obnoxious commercials on TV where they try to sell you something you don't need by having bumbling actors pretend that the thing you already have and works just fine is incredibly difficult to use. And the use case that's invariably brought up is re-merging a branch, which again reminds me of those strawman product advertisements; if you know what you're doing, you shouldn't (and shouldn't ever have to) re-merge a branch in the first place. (Of course it's difficult to do when you're doing something fundamentally wrong and silly!) So, discounting the ridiculous strawman use case, what is there in SVN merging that is inherently more difficult than merging in a DVCS system?
"if you know what you're doing, you shouldn't (and shouldn't ever have to) re-merge a branch in the first place. (Of course it's difficult to do when you're doing something fundamentally wrong and silly!)"

And therein lies the source of your confusion and the whole problem in general. You say that merging branches is "fundamentally wrong and silly". Well, that's exactly the problem: you're thinking of branches as things that shouldn't be merged. Why? Because you're an SVN user who knows that merging branches is hard. Therefore, you never do it, and you encourage others not to do it. You have been trained to avoid merging; you've developed techniques that you use to avoid merging.

I'm a Mercurial user. Even on my own projects, where I'm the only developer, I merge branches all the time. I have a release branch, which I put a fix into. Well, I merge that back into the main line so that the fix goes there. If I were using SVN, I would adopt a completely different structure of the codebase. Why? Because SVN makes merges hard, and therefore you develop idioms and techniques to avoid doing complex merges.

DVCSs make complex merges easy because they are the default state. Everything is a branch, more or less, in a DVCS. So the entire structure of them is built from the ground up to make merging easier. This allows you to develop a workflow that uses merging on a daily basis, rather than the SVN workflow where you never use merging.

The simple fact is this: you should approach a DVCS in a different way than SVN. You should use the proper idioms for these very different kinds of version control systems. In SVN, you adopt idioms that don't involve merging because merges are hard. In DVCSs, you adopt idioms that frequently use merges because they're no big deal. Right tool for the right job.

The thing is, the merge-focused workflow is a lot nicer and easier to use than the SVN-style workflow where you don't merge things. It's easier to see when something from the release branch was brought into the dev branch. It's easier to see the various interplay between branches. It's easy to create test branches for things, then clip them off if the test doesn't work. And so on.

Really, Joel explains this a lot better than I can. You should have a good read of that.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/171536", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/935/" ] }
171,671
I recently found a framework named ecto. In this framework, there is a basic component named "plasm", which is the ecto directed acyclic graph (DAG). In ecto, a plasm can be operated on by the ecto scheduler. I am wondering what the advantage of this mechanism is, and in what other situations we can exploit the concept of a DAG.
Nice question. Some places where directed acyclic graphs (DAGs) show up:

- Code may be represented by a DAG describing the inputs and outputs of each of the arithmetic operations performed within the code; this representation allows the compiler to perform common subexpression elimination efficiently.
- Most source control management systems implement the revisions as a DAG.
- Several programming languages describe systems of values that are related to each other by a directed acyclic graph. When one value changes, its successors are recalculated; each value is evaluated as a function of its predecessors in the DAG.
- DAGs are handy in detecting deadlocks, as they illustrate the dependencies amongst a set of processes and resources.
- In many randomized algorithms in computational geometry, the algorithm maintains a history DAG representing features of some geometric construction that have been replaced by later finer-scale features; point location queries may be answered, as for the above two data structures, by following paths in this DAG.
- Once we have the DAG in memory, we can write algorithms to calculate the maximum execution time of the entire set.
- While programming spreadsheet systems, the dependency graph that connects one cell to another if the first cell stores a formula that uses the value in the second cell must be a directed acyclic graph. Cycles of dependencies are disallowed because they cause the cells involved in the cycle to not have a well-defined value. Additionally, requiring the dependencies to be acyclic allows a topological order to be used to schedule the recalculations of cell values when the spreadsheet is changed. Using a DAG we can write algorithms to evaluate the computations in the correct order (see the sketch after this list).

EDIT: Ordering of formula cell evaluation when recomputing formula values in spreadsheets can be done using DAGs.

- Git uses DAGs for content storage, reference pointers for heads, object model representation, and remote protocol.
- DAGs are used in trace scheduling: the first practical approach for global scheduling, trace scheduling tries to optimize the control flow path that is executed most often.
- Ecto is a processing framework and it uses a DAG to model processing graphs so that the graphs do ordered synchronous execution. The plasm in Ecto is the DAG and the scheduler operates on it.
- DAGs are used in software pipelining, which is a technique used to optimize loops, in a manner that parallels hardware pipelining.

Good resources:
- http://www.biomedcentral.com/1471-2288/8/70
- http://www.ncbi.nlm.nih.gov/pubmed/12453109
- http://www.ericsink.com/vcbe/html/directed_acyclic_graphs.html
- http://xlinux.nist.gov/dads/HTML/directAcycGraph.html
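As a hedged sketch of the spreadsheet point (the cell names are invented), a topological order over the dependency DAG gives a safe recalculation order: every cell is computed only after the cells it depends on.

from collections import deque

def topological_order(deps):
    """deps maps each node to the nodes it depends on; returns a valid evaluation order."""
    indegree = {n: 0 for n in deps}
    dependents = {n: [] for n in deps}
    for node, prerequisites in deps.items():
        for p in prerequisites:
            indegree[node] += 1
            dependents[p].append(node)

    ready = deque(n for n, d in indegree.items() if d == 0)
    order = []
    while ready:
        node = ready.popleft()
        order.append(node)
        for d in dependents[node]:
            indegree[d] -= 1
            if indegree[d] == 0:
                ready.append(d)

    if len(order) != len(deps):
        raise ValueError("cycle detected: not a DAG")
    return order

# A3 = A1 + A2, A4 = A3 * 2
print(topological_order({"A1": [], "A2": [], "A3": ["A1", "A2"], "A4": ["A3"]}))
# -> e.g. ['A1', 'A2', 'A3', 'A4']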
{ "source": [ "https://softwareengineering.stackexchange.com/questions/171671", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/63024/" ] }
171,710
I am trying to decode a piece of code from a book:

List<Person> people = new List<Person>()
{
    new Person {FirstName="Homer", LastName="Simpson", Age=47},
    new Person {FirstName="Marge", LastName="Simpson", Age=45}
};

Person is just a simple class they made, with a bunch of fields: Name, Last Name, etc. What I don't understand is: don't we send parameters to a constructor of Person in non-curly brackets? I tried replicating this code, but it doesn't seem to fly. Any takers? Thanks for the input.
C# allows you to specify property parameters in curly braces when the object is initialized. This allows you to pick and choose which items to initialize and which to leave as defaults. A constructor, on the other hand, runs one single block of code with a fixed set of parameters. So to get the same effect you'd have to create multiple constructors, all with the various combinations of properties you might want to initialize, which could be tedious.

var x = new Person {FirstName="Homer", LastName="Simpson", Age=47};

is exactly equivalent to this:

var x = new Person();
x.FirstName = "Homer";
x.LastName = "Simpson";
x.Age = 47;

Except that it's shorter and arguably easier on the eyes. It also allows for constructs like you demonstrated in your question, which would be very tedious if you had to create temporary variables and initialize them as I did here before adding them to the list. (Which is how you used to have to do it.) All without requiring an explicitly-defined constructor that takes your desired list of parameters, which may or may not be available.

Also, note that while a constructor can initialize properties with a private setter, this technique (as should be obvious from the provided example) will only work if you have a public setter for the property. Also note that my shortened example implicitly called the default (parameterless) constructor, which would therefore have to be present.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/171710", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/65973/" ] }
171,734
Could someone please explain clearly the difference between a port and a socket? I know that a port serves as a door into the network for an application process, and that the application process uses a socket connection to the given port number to handle network communication. But when you have multiple processes listening on a single port number, I find it difficult to understand the difference between the socket and the port and how they all fit together.
S is a server program: let's say it's an HTTP server, so it'll use the well-known port number for HTTP, which is 80. I run it on a host with IP address 10.0.0.4, so it will listen for connections on 10.0.0.4:80 (because that's where everyone will expect to find it).

Inside S, I'm going to create a socket and bind it to that address: now the OS knows that connections coming into 10.0.0.4:80 should be routed to my S process via that particular socket. netstat output once the socket is bound:

$ netstat --tcp -lan
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address    Foreign Address   State
tcp   0      0      0.0.0.0:80       0.0.0.0:*         LISTEN

NB: the local address is all zeroes because S doesn't care how its clients reach it.

Once S has this socket bound, it will accept connections; each time a new client connects, accept returns a new socket, which is specific to that client. netstat output once a connection is accepted:

$ netstat --tcp -lan
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address    Foreign Address   State
tcp   0      0      0.0.0.0:80       0.0.0.0:*         LISTEN
tcp   0      0      10.0.0.4:80      10.0.0.5:55715    ESTABLISHED

- 10.0.0.4:80 represents S's end of the connection, and is associated with the socket returned by accept.
- 10.0.0.5:55715 is the client's end of the connection, and is associated with the socket the client passed to connect. The client's port isn't used for anything except routing packets on this TCP connection to the right process: it's assigned randomly by the client's kernel from the ephemeral port range.

Now, S can happily go on accepting more client connections... each one will get its own socket, each socket will be associated with a unique TCP connection, and each connection will have a unique remote address. S will track client state (if there is any) by associating it with the socket.

So, roughly:

- the IP address is for routing between hosts on the network
- the port is for routing to the correct socket on the host
- I nearly said correct process, but it's actually possible to have multiple (usually child) processes all accepting on the same socket... however, each time one of the concurrent accept calls returns, it does so in only one process; each incoming connection's socket is unique to one instance of the server
- the socket is the object a process uses to talk to the OS about a particular connection, much like a file descriptor
- as mentioned in comments, there are plenty of other uses for sockets that don't use ports at all: for example socketpair creates a pair of sockets connected together that have no addressing scheme at all; the only way to use that pipe is by being the process which called socketpair, being a child of that process and inheriting one, or being explicitly passed one of the sockets from that process
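A hedged sketch of the same sequence using Python's socket module (the port is changed to 8080 to avoid needing root, and addresses are whatever the OS assigns): one listening socket is bound to the well-known port, and each accept call hands back a distinct per-client socket.

import socket

# The listening socket: bound to the server's address and the well-known port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 8080))
server.listen()

while True:
    # accept() returns a NEW socket for this one client, plus the client's
    # (address, ephemeral port) pair, e.g. ('10.0.0.5', 55715).
    conn, peer = server.accept()
    with conn:
        request = conn.recv(1024)   # read from this client's own socket
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")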
{ "source": [ "https://softwareengineering.stackexchange.com/questions/171734", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/55045/" ] }
172,879
Given that it is widely accepted that a language has to be Turing complete to be any good, is it actually possible to have a 'useful' programming language that isn't Turing complete? I should clarify that this is quite specifically about 'programming' languages in the traditional sense, and not markup or query languages.
Coq , Agda , HOL and ACL2 are very useful and extremely powerful languages, although they're not Turing-complete. A common feature that renders them non-Turing-complete is the fact that it is always possible to prove termination. A very simple limitation is enough: recursive calls are only allowed on provably structurally smaller terms. Therefore while it is not possible to implement an interpreter for a Turing-complete language or even for the language itself many other useful things are still possible, like a certified C compiler .
{ "source": [ "https://softwareengineering.stackexchange.com/questions/172879", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/69733/" ] }
173,039
I hear about modern C++'s popularity and some talk about migrating back to C++ from C# or other C-like languages. I know about C++11 features, but I would like to hear your experiences, especially from developers who migrated from C# to C++. More importantly, does Microsoft push developers to use C++? If yes, why?
Yes, your suspicions are correct. Microsoft is pushing C++ to come back and become more popular.

I can't find it now, but a while ago I saw a presentation by one of Microsoft's big guys, and the whole thing was geared towards developers and was about the roll-out of Windows 8 and especially WinRT (the replacement for the .NET framework as well as the Win32 API). He had a timeline along which he explained how various pressures affected what technology was popular at certain times. So at first people wanted speed, so they all coded in C/C++ (two separate languages). As the hardware got faster, the focus moved away from speed of execution and more towards speed of development, so higher-level languages became much more popular.

However, now the focus is moving more towards mobile and ARM-based computers (Windows 8 is the first Windows release to be compiled for ARM) and many believe they will become much more popular and for some will completely replace the desktop. So the focus (at least in Microsoft's eyes) is back on C++, because now we care about battery life. Higher-level code = more instructions = more juice required.

To support this transition back to C++, they've introduced a completely new Windows 8 programming API, called WinRT (last I checked, that was the name anyway). This API follows the theme of the .NET Framework in the scope of functionality it provides, but it will be available to anyone coding in C++ (via COM interfaces), in C#, or even in JavaScript for those who wish to write HTML 5/JavaScript apps. They are also bringing XAML (the technology used in WPF, their newest UI framework) to C++ as well.

So to me that kind of indicates that there's definitely more focus on C++ at Microsoft than there was in the past.

UPDATE #1: Since I just got a 'nice answer' badge for this, I thought maybe I should come back and a) clarify a few things and b) make the fact-checking police happy, because as we all know, on technology forums anything inaccurate could result in wars that last for years.

WinRT is not a replacement for the .NET framework; it is yet another alternative that MS Windows developers now have, and MS is strongly pushing people to go in that direction. It appears (please hold your flames if this is not 100% accurate) that WinRT was primarily targeted at Modern UI apps, although regular desktop apps should be able to take advantage of it as well. Having said this, MS is strongly pushing for people to a) write Modern UI apps and b) start using WinRT, so as the balance shifts, the percentage of people using the .NET framework will most likely go down.

C++ will NEVER replace higher-level languages such as C# or Python, just like those languages will NEVER replace C++. This was probably the most controversial part of the OP's question. But it is all about the balance, and the facts are that:

- The C++ community (with MS being a large part of it) is pushing for a strong comeback to position C++ as a good language for low-powered devices, whose market share has been going up like crazy lately. If you do not believe me, search for the "GoingNative" series of talks that began last year.
- With all the effort and influence from Microsoft, C++ usage will definitely go up, while C# might drop somewhat. This is what MS is pushing for and, as I said in the comments above, when MS puts their capital behind an idea, they do shift a large portion of the industry. I will probably get a response from some guy who will argue, "what industry, I've always been on Linux", and to that my only response is: wake up! Yes, there are other OSs out there, but the majority of the desktop market, both consumer and business, at the moment is Windows, and any serious developer who wishes to maximize the value of his time would be very silly not to target that chunk of the desktop market.

So in conclusion: Yes, MS is pushing for C++ to come back, so most likely its popularity will increase. No, C++ will never replace C#.

UPDATE #2: I don't know why, but the technical community tends to see things in very absolute black/white terms when the reality is full of shades of gray. This is a response to several new comments that were added to this post:

- The .NET framework will not go away any time soon (or ever). Just about every technology that Windows has had since the 90s is still around in some form or fashion. So for those who are so attached to the .NET framework: a) don't worry about it disappearing and b) stop arguing in its favor as if your life depended on it; your API is safe.
- WinRT does reimplement a lot of functionality that in the past was provided by the Win32 and .NET framework APIs. People who want that functionality will have a choice of whether to use WinRT, the .NET framework, or continue with the Win32 API (that's not dead either).
- If WinRT doesn't support easy creation of web applications today, there's a very good chance it will support them in the future. The position that Microsoft announced is that WinRT is a large framework which gave Microsoft a chance to start with a clean slate and build an API using lessons learned in the Win32 API and the .NET framework itself. I did try looking for that video, and still can't find it, but one of the things the speaker mentioned is that there are certain areas of the .NET framework which could have been defined better/simpler/cleaner, and WinRT exposes that same functionality through that new cleaner interface.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/173039", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/37408/" ] }
173,086
Are flag variables evil? Are the following kinds of variables profoundly immoral, and is it wicked to use them? "Boolean or integer variables that you assign a value in certain places, then down below you check them in order to do something or not; for example, using newItem = true and then some lines below if (newItem) then..."

I remember doing a couple of projects where I totally neglected using flags and ended up with better architecture/code; however, it is a common practice in other projects I work on, and when code grows and flags are added, IMHO code-spaghetti also grows.

Would you say there are any cases where using flags is a good practice or even necessary? Or would you agree that using flags in code is... a red flag and should be avoided/refactored? Me, I just get by with functions/methods that check for states in real time instead.
The issue I have seen when maintaining code that makes use of flags is that the number of states grows quickly, and there are almost always unhandled states. One example from my own experience: I was working on some code that had these three flags:

bool capturing, processing, sending;

These three created eight states (actually, there were two other flags as well). Not all the possible value combinations were covered by the code, and users were seeing bugs:

if (capturing && sending) {
    // we must be processing as well
    ...
}

It turned out there were situations where the assumption in the if statement above was false. Flags tend to compound over time, and they hide the actual state of a class. That is why they should be avoided.
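One common alternative is to collapse the flag combinations into a single explicit state, so impossible combinations simply cannot be represented; here is a hedged sketch in Python with invented state names:

from enum import Enum, auto

class TransferState(Enum):
    IDLE = auto()
    CAPTURING = auto()
    PROCESSING = auto()
    SENDING = auto()

def handle(state: TransferState) -> None:
    # Every reachable state is named; there is no "capturing and sending
    # but not processing" combination to forget about.
    if state is TransferState.SENDING:
        pass  # sending logic here
    elif state is TransferState.PROCESSING:
        pass  # processing logic here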
{ "source": [ "https://softwareengineering.stackexchange.com/questions/173086", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/10869/" ] }
173,118
The often provocative Chuck Moore (inventor of the Forth language) gave the following advice [1] : Use comments sparingly! (I bet that's welcome.) Remember that program you looked through - the one with all the comments? How helpful were all those comments? How soon did you quit reading them? Programs are self-documenting, even assembler programs, with a modicum of help from mnemonics. It does no good to say: LA B . Load A with B In fact it does positive bad: if I see comments like that I'll quit reading them - and miss the helpful ones. What comments should say is what the program is doing. I have to figure out how it's doing it from the instructions anyway. A comment like this is welcome: COMMENT SEARCH FOR DAMAGED SHIPMENTS Should comments say why the program is doing what it is doing? In addition to the answers below, these two Programmers posts provide additional insight: Beginner's guide to writing comments? An answer to Why would a company develop an atmosphere which discourage code comments? References 1. Programming a problem-oriented-language , end of section 2.4. Charles H. Moore. Written ~June 1970.
Should comments say WHY the program is doing what it is doing? Unequivocally yes. There don't necessarily need to be many comments, mind you, but if you have them, WHY is the only question worth answering outside of a few bizarre fringe scenarios. The reasoning is simple. If I read your code, good or bad, I can see what the program is doing. I have no idea why . HOW seldom has anything I don't know. WHY is frequently based on history, weird portions of the problem domain, or hacks around third party dependencies.
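A tiny, invented illustration of the difference (the numbers and the stated reason are hypothetical):

# WHAT (adds nothing the code doesn't already say):
# increase the timeout by 5 seconds
timeout += 5

# WHY (records something you cannot recover from the code alone):
# The upstream gateway drops idle connections after 30s, so we stay just under it.
timeout = 25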
{ "source": [ "https://softwareengineering.stackexchange.com/questions/173118", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/62203/" ] }
173,297
Given that GitHub provides GUI apps for both Mac and Windows , what are the benefits of learning to use git from the command line? Currently I'm using their mac app to update my repositories, and so far it seems to cover my needs. What might I be missing out on?
I think this question is just a special case of "Why should I learn any CLI for which a GUI alternative exists?". I suspect the latter question is about as old as GUIs, and I assume there were many attempts to answer it over the years.

I could try to bumble my way through my own answer to this question, but Neal Stephenson articulated what I agree with as the 'ultimate answer' more than ten years ago in his remarkable essay In the Beginning... Was the Command Line. While the essay touches on many aspects of computing, and while even Stephenson himself thinks that a lot of it is now obsolete, the essay explains in what ways CLIs are better than GUIs in an extremely compelling manner that literally changed my life. It's a long read (~40 pages), but I can't recommend it enough to anyone who asks questions like you asked here.

Finally, though I'd answer any CLI vs GUI sort of question in a similar vein, I think my answer holds especially true for your specific question, since of all computer things you chose to ask about git. git is arguably the latest tool in a not-so-long list of computer tools that are truly worthy of the hole-hawg metaphor as described in Stephenson's essay. git, like several other Unix-ish things, is a reason to know CLIs all in itself. Sometimes in spite of its erratic 'porcelain'; sometimes because of it.

So yes, you can definitely be productive with GitHub's GUI, either for OSX or even just on their website. Yes, it's actually quite sleek; I use the features of the site often. But no, you will never have that Godly feeling as your right pinky hangs above an insane git filter-branch command for an aeon or two.

If I had to keep just one thing from my experience with computing - the mental challenges, the close friendships formed in a datacenter at 2AM, the infinite ladder of competence to climb, touching users' lives and reigning over PBs of precious data, the cushy jobs and comfortable life - keep just one thing - it'd be that Godly feeling.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/173297", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/69426/" ] }
173,441
In the last few years anonymous functions (AKA lambda functions) have become a very popular language construct and almost every major / mainstream programming language has introduced them or is planned to introduce them in an upcoming revision of the standard. Yet, anonymous functions are a very old and very well-known concept in Mathematics and Computer Science (invented by the mathematician Alonzo Church around 1936, and used by the Lisp programming language since 1958, see e.g. here ). So why didn't today's mainstream programming languages (many of which originated 15 to 20 years ago) support lambda functions from the very beginning and only introduced them later? And what triggered the massive adoption of anonymous functions in the last few years? Is there some specific event, new requirement or programming technique that started this phenomenon? IMPORTANT NOTE The focus of this question is the introduction of anonymous functions in modern, main-stream (and therefore, maybe with a few exceptions, non functional) languages. Also, note that anonymous functions (blocks) are present in Smalltalk, which is not a functional language, and that normal named functions have been present even in procedural languages like C and Pascal for a long time. Please do not overgeneralize your answers by speaking about "the adoption of the functional paradigm and its benefits", because this is not the topic of the question.
There's certainly a noticeable trend towards functional programming, or at least certain aspects of it. Some of the popular languages that at some point adopted anonymous functions are C++ ( C++11 ), PHP ( PHP 5.3.0 ), C# ( C# v2.0 ), Delphi (since 2009), Objective C ( blocks ) while Java 8 will bring support for lambdas to the language . And there are popular languages that are generally not considered functional but supported anonymous functions from the start, or at least early on, the shining example being JavaScript. As with all trends, trying to look for a single event that sparked them is probably a waste of time, it's usually a combination of factors, most of which aren't quantifiable. Practical Common Lisp , published in 2005, may have played an important role in bringing new attention to Lisp as a practical language, as for quite some time Lisp was mostly a language you'd meet in an academic setting, or very specific niche markets. JavaScript's popularity may have also played an important role in bringing new attention to anonymous functions, as munificent explains in his answer . Other than the adoption of functional concepts from multi-purpose languages, there's also a noticeable shift towards functional (or mostly functional) languages. Languages like Erlang (1986), Haskell (1990), OCaml (1996), Scala (2003), F# (2005), Clojure (2007), and even domain specific languages like R (1993) seem to have gained a strong following strongly after they were introduced. The general trend has brought new attention to older functional languages, like Scheme (1975), and obviously Common Lisp. I think the single more important event is the adoption of functional programming in the industry. I have absolutely no idea why that didn't use to be the case, but it seems to me that at some point during the early and mid 90s functional programming started to find it's place in the industry, starting (perhaps) with Erlang's proliferation in telecommunications and Haskell's adoption in aerospace and hardware design . Joel Spolsky has written a very interesting blog post, The Perils of JavaSchools , where he argues against the (then) trend of universities to favour Java over other, perhaps more difficult to learn languages. Although the blog post has little to do with functional programming, it identifies a key issue: Therein lies the debate. Years of whinging by lazy CS undergrads like me, combined with complaints from industry about how few CS majors are graduating from American universities, have taken a toll, and in the last decade a large number of otherwise perfectly good schools have gone 100% Java. It's hip, the recruiters who use "grep" to evaluate resumes seem to like it, and, best of all, there's nothing hard enough about Java to really weed out the programmers without the part of the brain that does pointers or recursion, so the drop-out rates are lower, and the computer science departments have more students, and bigger budgets, and all is well. I still remember how much I hated Lisp, when I first met her during my college years. It's definitely a harsh mistress, and it's not a language where you can be immediately productive (well, at least I couldn't). Compared to Lisp, Haskell (for example) is a lot friendlier, you can be productive without that much effort and without feeling like a complete idiot, and that might also be an important factor in the shift towards functional programming. All in all, this is a good thing. 
Several multi-purpose languages are adopting concepts of a paradigm that might have seemed arcane to most of their users before, and the gap between the main paradigms is narrowing. Related questions: Functional Programming on the rise? Why isn't functional programming more popular in the industry? Does it catch on now? Why hasn't functional programming taken over yet? Further reading: Why are functional programming languages, especially Common Lisp and Lisp dialect Clojure, making a comeback in popularity? Functional Programming: What caused the rise in popularity of functional programming? Can Your Programming Language Do This? Why functional programming? Why Haskell?
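As an aside for readers who have not met the construct under discussion, here is a minimal sketch of an anonymous function in C++11 (one of the adopters listed above). The variable names and values are invented purely for illustration:

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> prices = {30, 10, 20};
    int threshold = 15; // local state captured by the lambda below

    // An anonymous (lambda) function defined right where it is needed,
    // capturing local state instead of requiring a separate named functor.
    auto isCheap = [threshold](int p) { return p < threshold; };

    std::cout << std::count_if(prices.begin(), prices.end(), isCheap) << '\n'; // prints 1
    return 0;
}
```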
{ "source": [ "https://softwareengineering.stackexchange.com/questions/173441", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/29020/" ] }
173,478
It seems that most source control systems still use files as the means of storing the version data. Vault and TFS use SQL Server as their data store, which I would think would be better for data consistency as well as speed. So why is it that SVN, I believe Git, CVS, etc. still use the file system as essentially a database (I ask this question as we had our SVN server just corrupt itself during a normal commit) instead of using actual database software (MSSQL, Oracle, Postgres, etc.)? EDIT: I think another way of asking my question is "why do VCS developers roll their own structured data storage system instead of using an existing one?"
TL;DR: Few version control systems use a database because it isn't necessary. To answer a question with a question: why wouldn't they? What benefits do "real" database systems offer over a file system in this context? Consider that revision control is mostly keeping track of a little metadata and a lot of text diffs. Text is not stored in databases more efficiently, and indexability of the contents isn't going to be a factor. Let's presume that Git (for argument's sake) used a BDB or SQLite DB for its back-end to store data. What would be more reliable about that? Anything that could corrupt simple files can also corrupt the database (since that's also a simple file with a more complex encoding). From the programmer paradigm of not optimizing unless it's necessary, if the revision control system is fast enough and works reliably enough, why change the entire design to use a more complex system?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/173478", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/24081/" ] }
173,518
Recently I have started to wrap my head around OOP, and I am now to the point where the more I read about the differences between abstract classes and interfaces the more confused I become. So far, neither can be instantiated. Interfaces are more or less structural blueprints that determine the skeleton and abstracts are different by being able to partially implement code. I would like to learn more about these through my specific situation. Here is a link to my first question if you would like a little more background information: What is a good design model for my new class? Here are two classes I created: class Ad { public $title; public $description; public $price; function get_data($website){ } function validate_price(){ } } class calendar_event { public $title; public $description; public $start_date; function get_data($website){ //guts } function validate_dates(){ //guts } } So, as you can see these classes are almost identical. Not shown here, but there are other functions, like get_zip() and save_to_database(), that are common across my classes. I have also added other classes Cars and Pets which have all the common methods and of course properties specific to those classes (mileage, weight, for example). Now I have violated the DRY principle and I am managing and changing the same code across multiple files. I intend on having more classes like boats, horses, or whatever. So is this where I would use an interface or an abstract class? From what I understand about abstract classes, I would use a super class as a template with all of the common elements built into the abstract class, and then add only the items specifically needed in future classes. For example: abstract class content { public $title; public $description; function get_data($website){ } function common_function2() { } function common_function3() { } } class calendar_event extends content { public $start_date; function validate_dates(){ } } Or would I use an interface and, because these are so similar, create a structure that each of the subclasses is forced to use for integrity reasons, and leave it up to the end developer who fleshes out that class to be responsible for each of the details of even the common functions? My thinking there is that some 'common' functions may need to be tweaked in the future for the needs of their specific class. Despite all that above, if you believe I am misunderstanding the what and the why of abstract classes and interfaces altogether, by all means let a valid answer be to stop thinking in this direction and to suggest the proper way to move forward! Thanks!
In layman's terms: Interfaces are for "can do/can be treated as" type of relationships. Abstract ( as well as concrete ) classes are for "is a" kind of relationship. Look at these examples: class Bird extends Animal implements Flight; class Plane extends Vehicle implements Flight, AccountableAsset; class Mosquito extends Animal implements Flight; class Horse extends Animal; class RaceHorse extends Horse implements AccountableAsset; class Pegasus extends Horse implements Flight; Bird , Mosquito and Horse are Animals . They are related. They inherit common methods from Animal like eat(), metabolize() and reproduce() . Maybe they override these methods, adding a little extra to them, but they take advantage of the default behavior implemented in Animal like metabolizeGlucose(). Plane is not related to Bird , Mosquito or Horse . Flight is implemented by dissimilar, unrelated classes, like Bird and Plane . AccountableAsset is also implemented by dissimilar, unrelated classes, like Plane and RaceHorse . Horse doesn't implement Flight. As you can see classes (abstract or concrete) help you build hierarchies , letting you inherit code from the upper levels to the lower levels of the hierarchy. In theory the lower you are in the hierarchy, the more specialized your behavior is, but you don't have to worry about a lot of things that are already taken care of. Interfaces , on the other hand, create no hierarchy, but they can help homogenize certain behaviors across hierarchies so you can abstract them from the hierarchy in certain contexts. For example you can have a program sum the value of a group of AccountableAssets regardless of their being RaceHorses or Planes .
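The examples above are Java-flavored pseudocode (the question itself is about PHP). As a rough, hedged transliteration into C++, used here only for illustration, the "interface" becomes a class with nothing but pure virtual methods, while the abstract class carries shared state and default behavior; all class and method names are invented for the example:

```cpp
#include <iostream>
#include <string>
#include <utility>

// "Interface": no state, only pure virtual methods (a capability).
class Flight {
public:
    virtual ~Flight() = default;
    virtual void fly() = 0;
};

// Abstract class: shared state and default behavior for the "is a" hierarchy.
class Animal {
public:
    virtual ~Animal() = default;
    void eat() { std::cout << name_ << " eats\n"; }   // common, inherited behavior
    virtual std::string sound() const = 0;            // must be specialized
protected:
    explicit Animal(std::string name) : name_(std::move(name)) {}
    std::string name_;
};

class Horse : public Animal {
public:
    Horse() : Animal("horse") {}
    std::string sound() const override { return "neigh"; }
};

class Bird : public Animal, public Flight {
public:
    Bird() : Animal("bird") {}
    std::string sound() const override { return "tweet"; }
    void fly() override { std::cout << name_ << " flies\n"; }
};

int main() {
    Bird b;
    b.eat();                              // default behavior inherited from Animal
    b.fly();                              // interface-style capability
    Horse h;
    h.eat();                              // a Horse is an Animal...
    std::cout << h.sound() << '\n';       // ...but it is not a Flight
    return 0;
}
```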
{ "source": [ "https://softwareengineering.stackexchange.com/questions/173518", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/66662/" ] }
173,575
How would one implement a threadpool? I've been reading on Wikipedia about "threadpool" but I still can't figure out what one should do to solve this question (possibly because I didn't quite understand what a threadpool is in simple terms). Can someone explain to me in plain English what a threadpool is and how one would answer this question?
A thread pool is a group of pre-instantiated, idle threads which stand ready to be given work. These are preferred over instantiating new threads for each task when there is a large number of short tasks to be done rather than a small number of long ones. This prevents having to incur the overhead of creating a thread a large number of times. Implementation will vary by environment, but in simplified terms, you need the following: A way to create threads and hold them in an idle state. This can be accomplished by having each thread wait at a barrier until the pool hands it work. (This could be done with mutexes as well.) A container to store the created threads, such as a queue or any other structure that has a way to add a thread to the pool and pull one out. A standard interface or abstract class for the threads to use in doing work. This might be an abstract class called Task with an execute() method that does the work and then returns. When the thread pool is created, it will either instantiate a certain number of threads to make available or create new ones as needed depending on the needs of the implementation. When the pool is handed a Task , it takes a thread from the container (or waits for one to become available if the container is empty), hands it a Task , and meets the barrier. This causes the idle thread to resume execution, invoking the execute() method of the Task it was given. Once execution is complete, the thread hands itself back to the pool to be put into the container for re-use and then meets its barrier, putting itself to sleep until the cycle repeats.
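The answer above is deliberately language-neutral. As a concrete illustration, here is a minimal C++11 sketch of a thread pool built on a mutex, a condition variable and a task queue (a common variant of the barrier-based scheme described above). The class and member names are made up for the example:

```cpp
#include <condition_variable>
#include <cstddef>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class ThreadPool {
public:
    explicit ThreadPool(std::size_t count) {
        for (std::size_t i = 0; i < count; ++i) {
            workers_.emplace_back([this] { workerLoop(); });
        }
    }

    ~ThreadPool() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            stopping_ = true;
        }
        wake_.notify_all();
        for (auto& w : workers_) w.join();
    }

    // Hand a task to the pool; an idle worker will pick it up.
    void submit(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            tasks_.push(std::move(task));
        }
        wake_.notify_one();
    }

private:
    void workerLoop() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                // Sleep until there is work or the pool is shutting down.
                wake_.wait(lock, [this] { return stopping_ || !tasks_.empty(); });
                if (stopping_ && tasks_.empty()) return;
                task = std::move(tasks_.front());
                tasks_.pop();
            }
            task(); // run the task outside the lock
        }
    }

    std::vector<std::thread> workers_;
    std::queue<std::function<void()>> tasks_;
    std::mutex mutex_;
    std::condition_variable wake_;
    bool stopping_ = false;
};
```

Usage might look like `ThreadPool pool(4); pool.submit([]{ /* short task */ });`; the destructor drains remaining tasks and joins the workers.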
{ "source": [ "https://softwareengineering.stackexchange.com/questions/173575", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/42742/" ] }
173,594
Many times I have heard people say that a particular piece of hardware is running a thin-client web browser. But from the definition of "thin client", doesn't every browser qualify as a thin client, as all they do is render the information sent from a remote server, minimizing the work at the browser end?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/173594", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/71834/" ] }
173,605
I am a veteran Delphi 6 programmer transitioning to C# development. My first project is an open source library that will have a minimal user interface since it is meant to be used as a Component primarily on desktop PCs running Visual Studio. My next project is going to be a Windows 8 phone app and I intend for that platform to be my primary focus for future C# development, not the desktop. My concern is that I waste as little time as possible learning a presentation framework that will benefit or distract me from writing Windows 8 phone apps. The plethora of framework names I have already encountered includes WinForms, WPF (Windows Presentation Framework), Silverlight, Silverlight Mobile, Metro and there may be others. Given my goal outlined in the first paragraph above, which is the correct framework to study for developing Windows 8 phone apps? I read about the Portable Library Tools on this Stack Overflow thread: https://stackoverflow.com/questions/5522355/windows-phone-7-wpf-sharing-a-codebase But the reply by Simon Guindon seemed to indicate to me that it's not the best solution for writing a competitive Windows 8 phone app.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/173605", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/18916/" ] }
174,751
I know that a Java web browser is possible, but is it practical? I've seen the Lobo project and must admit I am impressed, but from what I've gathered it seems that development stopped in 2009. Would a browser coded in pure Java (no WebKit java bindings of any type) be able to compete with those among the ranks of Chrome or Firefox, or would it be inherently slower, hindering the user?
The programming language is, most likely, not going to be the stumbling block. The JVM's mandatory memory management may be a disadvantage in some performance-critical parts (e.g. memory hunger; but then, Java's GC might actually be better at preventing memory leaks than anything you could roll yourself), and there are a few extra security concerns, but other than that, I see no obvious show stoppers. However. A web browser on the scale of Firefox or Chromium is a massive undertaking, and both projects have a huge body of experience behind them - Mozilla builds upon decades of browser building (and some famous failures), and Chrome/Chromium has both Google and Apple (a major force in the development of WebKit) behind it and absorbed a lot of knowledge and experience from KDE and other large rock-solid Open Source projects. Both additionally make use of dozens of battle-proven libraries, not only render engines, but all sorts of things. Vector graphics, font rendering, parsing, XML DOM Tree manipulation, networking, caching, cryptography, the list goes on and on, and you don't want to reinvent all those wheels yourself, because they are hard to do and easy to get wrong. In short, building an industry-strength web browser is pretty damn hard, and that's the reason there is only a handful of success stories in this arena. The programming language has relatively little to do with it, although C and C++ are at an advantage, both technically and socially.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/174751", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/71946/" ] }
174,789
Bigger companies usually have the problem that it is not possible to write all programs employees want (to save time and to optimize processes) due to a lack of staff and money. Then hidden programs will be created by some people having (at least some) coding experience (or by cheap students/interns...). Under some circumstances these applications will rise in importance and spread from one user to a whole department. Then there is the critical point: Who will maintain the application, add new features, ...? And this app is critical. It IS needed. But the intern has left the company. No one knows how it works. You only have a bunch of sources and some sort of documentation. Does it make sense to try and control or forbid application development done ad-hoc outside of the IT department (with the exception of minor stuff like Excel macros)?
I used to work for a company where every app we gave them led to the question: Can we export this data to Excel? After a while, I decided I had to know why they were obsessed with Excel exports for everything. It turned out that a lot of departments had one person who was an expert in Excel and could write a useful data-analysis app in no time. These apps would spread around the department like wildfire and we, the techies, had no idea they even existed. Why didn't they come to us first? Because there was a reputation that the technical team had far too much to do and, if they did ask for it, they might (if they were lucky) get it queued up six months later. That wasn't an unfair accusation and they never asked us to support their Excel apps, so no one really thought it was a problem. When these Excel developers left, they always managed to find someone else to pick it up. You could argue that it meant we were prioritising incorrectly, that important work wasn't getting done. But I would argue that it freed up the more-highly-paid developers to do more difficult work. What can it hurt? Now I would forbid software that updates the database being written outside of the development team. And I would refuse to support apps written outside of the development team. But I wouldn't try to forbid all software from being written by the business itself, and I would happily write data-exports to empower them to do so (as long as that doesn't expose data that they shouldn't see, obviously).
{ "source": [ "https://softwareengineering.stackexchange.com/questions/174789", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/56779/" ] }
174,900
I was asked by a colleague to explain clearly the difference between ordinary development and research and development (R&D) and was unable to do it. After reading Wikipedia, I still don't have the precise answer. According to Wikipedia (slightly modified): There are two primary models: In one model, the primary function is to develop new products ; in the other model, the primary function is to discover and create new knowledge about scientific and technological topics for the purpose of uncovering and enabling development of valuable new products, processes, and services. The first model is confusing. Does it mean that development (not R&D) consists exclusively in adding new features to a product, solving bugs and doing maintenance? What if something which was previously developed as a new feature becomes a separate product? The second model is less confusing, but still, how to qualify whether something is new knowledge or existent knowledge which is just rediscovered? Later, Wikipedia adds that ordinary development is different from R&D because of its: nearly immediate profit or immediate improvement. It's still not clear enough. How to qualify "nearly immediate profit"? What if a task has an immediate profit but requires heavy research? Or if it is basic but has uncertain profit, like the enforcement of a common style over the codebase? For example, does it belong to development or R&D to: Develop an engine which abstracts the access to the database, simplifying and shortening enormously the code of other applications (existent or ones which will be written in future) which should access to the database? Establish a new service-oriented architecture for the entire organization of company resources, in order to move from a bunch of separate and autonomous applications to a set of well-organized, interconnected web services, like what is used by Amazon? Design a new communication protocol to allow faster replication of data between two data centers of the company? Conceive a new type of software testing while working on a specific product, knowing that this type of testing will improve/simplify the testing process? Prove that Functional programming is more appropriate than OOP for a specific application, based on evidence, logic and previous experience? Enhance the existent application by adding gestures on tactile screens, after doing studies and testing that shows that those gestures improve the productivity of the users by a ratio of at least 1.4 for a precise set of tasks? Find a way to strongly enhance the Power usage effectiveness (PUE) of a data center? Create a Domain-Specific Language (DSL)? In short, how could I determine whether I'm doing R&D while working on something?
Great Question. It is important to distinguish between 'Development' and 'R&D.' Point 1 R&D = experimenting with ideas/technology that may never actually become a product. Software Development = working on a product/service desired by a real customer. Point 2 R&D is all about developing new solutions for a specific problem domain. The end result of this endeavor is something that I call "research toys". To be a software product, the research toy has to be completely re-implemented. Failure to do so will result in a product that appeals to an increasingly elite and erudite user base. The problem here is that this elite and erudite user base typically has no money to spend. To be successful, the software product must be a faithful re-implementation of the research toy, accessible and loved by the commodity user. To be truly remarkable, the software product must simultaneously appeal to the elite and erudite user. Point 3 Research implies scholarly or scientific inquiry and tends to be aimed at the greater good of an industry or society at large. Product development has different motivations and outcomes: it is driven by the potential for profit. The state of product development is healthy. The state of lighting research is not. We need a collective commitment to the greater good to answer such questions. But this is not just philanthropy; the answer would address a practical goal. Light sources that are spectrally tuned to the visual system will be more sustainable. They will use less energy by generating their output in regions of the spectrum where the visual system responds most strongly, resulting in better seeing for building users. This example reinforces the difference between research and product development. Point 4 I consider all development of new products to be R&D. I think some of you are confusing pure, abstract science with R&D. They aren't the same. R&D can be very product oriented. Scientists may be looking for a vaccine to cure AIDS. That is a very specific task to create a product to sell and it is certainly R&D and not just guys sitting around messing about with whatever they feel like. Point 5 R&D in the technical world = finding ways to do something interesting or important, using known techniques and technology as a starting point. Software development = finding ways to do something interesting or important, using known techniques and technology as a starting point. Point 6 Virtually all software development is the D part of R&D. Sometimes there is very little R in Software 'R&D'. Sometimes there is quite a lot of R in Software 'R&D'. It depends on several measurements. For example, managing software development for various-sized companies, R&D takes on different meanings depending on the size of the company, customer base, etc. In a small software company, with only a handful of employees, the line between R&D software and Production software is usually very small. What one day is a software R&D project may the next day be shipping as production software to customers. As software companies grow, and they have one or more production software lines, they tend to create greater separation between R&D software projects and Production software products (for obvious reasons). This R&D gap is typically created to allow greater diversification in their software products for tomorrow, while allowing the production software development to continue to produce today. This is not to say that the production software products won't get innovative new features.
The production software developers are typically just as "sharp" as the R&D developers. In fact, at one company, we had an enrichment program that allowed production software developers to rotate in and out of R&D projects. This not only added fresh brain power to the R&D teams, but in many cases, the production developers came back with new ideas on producing better production-level software. Point 7 D = "knowing where you want to be at the end", and R is because "at the beginning of the project, you don't know what will be required to get there". Point 8 R&D are the lucky folks who get to do anything they want without accountability. Good research/resources on this topic: http://www.econ.upf.edu/~albertbanal/Commercial%20Incentives.pdf http://www.csiic.ca/pdf/Gegenworte.pdf http://web.cecs.pdx.edu/~york/York-CS-Summit-Beijing2006.pdf
{ "source": [ "https://softwareengineering.stackexchange.com/questions/174900", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/6605/" ] }
174,947
I would imagine the reason was fast, array-like access to the character at an index, but some characters won't fit into 16 bits, so it wouldn't work... So if you have to handle special cases anyway, why not just use UTF-8?
Because it used to be UCS-2 , which was a nice fixed-length 16 bits. Of course, 16 bits turned out not to be enough. They retrofitted UTF-16 on top of it.
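To make the "16 bits turned out not to be enough" point concrete, here is a small C++11 illustration (C++ rather than Java, purely for demonstration): a character outside the Basic Multilingual Plane occupies two 16-bit code units (a surrogate pair), so indexing by code unit no longer maps one-to-one to characters.

```cpp
#include <iostream>
#include <string>

int main() {
    // U+1D11E (musical G clef) does not fit in 16 bits.
    std::u16string s = u"\U0001D11E";

    // It is stored as the surrogate pair 0xD834 0xDD1E: two 16-bit code units.
    std::cout << "code units: " << s.size() << '\n';        // prints 2
    std::cout << std::hex << (unsigned)s[0] << ' '
              << (unsigned)s[1] << '\n';                    // prints d834 dd1e
    return 0;
}
```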
{ "source": [ "https://softwareengineering.stackexchange.com/questions/174947", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/33996/" ] }
174,971
I've programmed in both C# and VB.NET for years, but primarily in VB. I'm making a career shift toward C# and, overall, I like C# better. One issue I'm having, though, is curly brace soup. In VB, each structure keyword has a matching close keyword, for example: Namespace ... Class ... Function ... For ... Using ... If ... ... End If If ... ... End If End Using Next End Function End Class End Namespace The same code written in C# ends up very hard to read: namespace ... { class ... { function ... { for ... { using ... { if ... { ... } if ... { ... } } } // wait... what level is this? } } } Being so used to VB, I'm wondering if there's a technique employed by c-style programmers to improve readability and to ensure that your code ends up in the correct "block". The above example is relatively easy to read, but sometimes at the end of a piece of code I'll have 8 or more levels of curly braces, requiring me to scroll up several pages to figure out which brace ends the block I'm interested in.
Put your starting curly brace in the same "rank" as your ending one, like this: namespace ... { class ... { function ... { for ... { using ... { if ... { ... } if ... { ... } } } // It's the `function` level! } } }
{ "source": [ "https://softwareengineering.stackexchange.com/questions/174971", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/57863/" ] }
174,997
As I've been developing my position on how software should be developed at the company I work for, I've come to a certain conclusion that I'm not entirely sure of. It seems to me that if you are programming in C++, you should not use C style anything if it can be helped and you don't absolutely need the performance improvement. This way people are kept from doing things like pointer arithmetic or creating resources with new without any RAII, etc. If this idea was enforced, seeing a char* would possibly be a thing of the past. I'm wondering if this is a conclusion others have made? Or am I being too puritanical about this?
It is basically the way standard C++ is intended and encouraged (by the committee and the community) to be used: use C++ language idioms (mostly based on RAII , like smart pointers); don't use C language idioms unless you can't avoid it (which still happens regularly, when interfacing with C interfaces). This is what we have been calling "modern C++" for almost 10 years now. But most C++ developers are only now starting to realize it makes code look like there is not much need for raw pointers, writing new/delete, and other error-prone constructs. Now these constructs (C or not) are still there both for retrocompatibility and for allowing you to write libraries that, again, free the other developers from having to play with them. Also, C should be used to interface with C libraries, for low-level constructs that require C-style code. In any other case, you can avoid using C idioms when you have C++ available. For clarification: using C style doesn't improve performance (assuming you understand C++ constructs and RAII). In fact, a lot of algorithms written in C++ are faster than the same in C, just because C++ gives more info to the compiler, letting it optimize for the calling context (I'm thinking about template algorithms/types, for example). So performance is not necessarily a valid reason to use C idioms when you write C++.
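As a small, hedged illustration of the contrast (the function names and buffer sizes are invented): the C-style version manages memory by hand and leaks on early return or exception, while the modern C++ version lets RAII types release their resources automatically.

```cpp
#include <memory>
#include <string>
#include <vector>

// C-style habits in C++: manual lifetime management, easy to leak.
void cStyle() {
    char* name = new char[64];
    int* values = new int[100];
    // ... if anything throws or returns early here, both allocations leak ...
    delete[] values;
    delete[] name;
}

// Modern C++: RAII objects own their resources and clean up on scope exit.
void modernStyle() {
    std::string name;                          // owns its buffer
    std::vector<int> values(100);              // owns its buffer
    std::unique_ptr<int> widget(new int(42));  // single ownership, freed automatically
    // ... early returns and exceptions are safe; nothing to clean up by hand ...
    (void)widget;
}

int main() {
    cStyle();
    modernStyle();
    return 0;
}
```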
{ "source": [ "https://softwareengineering.stackexchange.com/questions/174997", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/65474/" ] }
175,020
We are doing some refactoring of a 20-year-old legacy codebase, and I'm having a discussion with my colleague about the comments format in the code (PL/SQL, Java). There is no default format for comments, but in most cases people do something like this in the comment: // date (year, year-month, yyyy-mm-dd, dd/mm/yyyy), (author id, author name, author nickname) and comment The proposed format for future and past comments that I want is: // {yyyy-mm-dd}, unique_author_company_id, comment My colleague says that we only need the comment, and must reformat all past and future comments to this format: // comment My arguments: I say for maintenance reasons, it's important to know when a change was made and who made it (even though this information is in the SCM). The code is living, and for that reason has a history. Because without the change dates it's impossible to know when a change was introduced without opening the SCM tool and searching through the long object history. Because the author is very important, a change by author X is more credible than a change by author Y. Agility reasons, no need to open and navigate through the SCM tool. People would be more afraid to change something that someone did 15 years ago, than something that was recently created or changed. etc. My colleague's arguments: The history is in the SCM. Developers must not be aware of the history of the code directly in the code. Packages get 15k lines long and unstructured comments make these packages harder to understand. What do you think is the best approach? Or do you have a better approach to solve this problem?
General Comments I am a great believer in comments are for why (not how) . When you start adding comments about how , you fall into the problem that nothing is enforcing that comments be maintained in relation to the code (the why will usually not change (the why explanation may be enhanced some over time)). In the same way, date/author info does not gain you anything in terms of why the code was done this way; just like the how , it can degenerate over time because there is no enforcement by any tools. Also, the same information is already stored in the source control system (so you are duplicating effort (but in a less reliable way)). Going through the arguments: I say for maintenance reasons, it's important to know when a change was made and who made it (even though this information is in the SCM). Why? Neither of these things strikes me as important to maintaining the code. If you need to talk to the author it is relatively simple to find this information from source control. The code is living, and for that reason has a history. History is stored in source control. Also, do you trust that the comment was written by that person? How comments tend to degrade over time, so this kind of history becomes unreliable. Source control systems, on the other hand, will maintain a very accurate history and you can accurately see when comments were added/removed. Because without the change dates it's impossible to know when a change was introduced without opening the SCM tool and searching through the long object history. Only if you trust the data in a comment. One of the problems with this kind of thing is that the comments become incorrect in relation to the code. Back to the correct tool for the job: the source control system will do this correctly without need for intervention from the user. If your source control system is a pain then maybe you need to either learn how to use it more appropriately (as that functionality is usually easy) or, if it does not support it, find a better source control system. Because the author is very important, a change by author X is more credible than a change by author Y. All authors (apart from yourself) are equally credible. Agility reasons, no need to open and navigate through the SCM tool. If your source control tools are that burdensome you are either using them incorrectly or (more likely) you are using the wrong set of tools to access the source control system. People would be more afraid to change something that someone did 15 years ago than something that was recently created ... If code has lasted 15 years then it is likely to be more solid than code that has only lasted 6 months without needing review. Stable code tends to stay stable; buggy code tends to get more complex over time (as the reason it is buggy is that the problem is not as simple as first thought). Even more reason to use source control to get the information. The history is in the SCM. Yes. Best reason yet. Developers must not be aware of the history of the code directly in the code. If I really need this information I will look it up in source control. Otherwise it is not relevant. Packages get 15k lines long and unstructured comments make these packages harder to understand. Comments should be a description of why you are doing something anyway. Comments should NOT be describing how the code works (unless the algorithm is not obvious).
{ "source": [ "https://softwareengineering.stackexchange.com/questions/175020", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/72085/" ] }
175,062
Should I put unit testing stuff in a separate repository, not in the same repository as the programming library? I would then reference the programming library as a submodule. But most open source projects that I have seen do not organize their projects the way I mention above. Can anyone explain which approach is better?
You should put the unit tests in the same repository because otherwise someone has to answer to the question "Where are the tests?" every time the project is handed over from one person to another. References to other repositories tend to get invalid over time when repositories are relocated and people change from one version control system to another. Just keep the tests close to the code.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/175062", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/51966/" ] }
175,070
Let's say I have a procedure that does stuff : void doStuff(initialParams) { ... } Now I discover that "doing stuff" is quite a complex operation. The procedure becomes large, I split it up into multiple smaller procedures and soon I realize that having some kind of state would be useful while doing stuff, so that I need to pass fewer parameters between the small procedures. So, I factor it out into its own class: class StuffDoer { private someInternalState; public Start(initialParams) { ... } // some private helper procedures here ... } And then I call it like this: new StuffDoer().Start(initialParams); or like this: new StuffDoer(initialParams).Start(); And this is what feels wrong. When using the .NET or Java API, I almost never call new SomeApiClass().Start(...); , which makes me suspect that I'm doing it wrong. Sure, I could make StuffDoer's constructor private and add a static helper method: public static DoStuff(initialParams) { new StuffDoer().Start(initialParams); } But then I'd have a class whose external interface consists of only one static method, which also feels weird. Hence my question: Is there a well-established pattern for this type of class that has only one entry point and has no "externally recognizable" state, i.e., instance state is only required during execution of that one entry point?
There's a pattern called Method Object where you factor out a single, large method with a lot of temporary variables / arguments into a separate class. You do this rather than just extracting parts of the method out into separate methods because they need access to the local state (the parameters and temporary variables) and you can't share the local state using instance variables because they would be local to that method (and the methods extracted from it) only and would go unused by the rest of the object. So instead, the method becomes its own class, the parameters and temporary variables become instance variables of this new class, and then the method gets broken up into smaller methods of the new class. The resulting class usually has just a single public instance method that performs the task the class encapsulates.
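Here is a minimal sketch of the Method Object refactoring in C++ (the idea carries over directly to C# or Java, which the question is closer to); all names and the toy computation are invented for illustration. The long procedure's parameters and temporaries become fields of a small class, and its phases become private methods that share that state:

```cpp
#include <utility>
#include <vector>

// Before: one long function juggling parameters and temporaries.
// After: the "method object" below carries that state in fields.
class StuffDoer {
public:
    explicit StuffDoer(std::vector<int> params) : params_(std::move(params)) {}

    int run() {                 // the single public entry point
        prepare();
        process();
        return result_;
    }

private:
    void prepare() {            // was the first half of the long function
        scratch_.reserve(params_.size());
        for (int p : params_) scratch_.push_back(p * 2);
    }

    void process() {            // was the second half; shares state via fields
        result_ = 0;
        for (int s : scratch_) result_ += s;
    }

    std::vector<int> params_;   // former parameters
    std::vector<int> scratch_;  // former local temporaries
    int result_ = 0;
};

int main() {
    return StuffDoer({1, 2, 3}).run() == 12 ? 0 : 1;  // 2 + 4 + 6
}
```

A common companion is a small free or static helper such as `int doStuff(std::vector<int> p) { return StuffDoer(std::move(p)).run(); }`, so callers never see the class at all, which also addresses the asker's discomfort with a one-method public interface.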
{ "source": [ "https://softwareengineering.stackexchange.com/questions/175070", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/33843/" ] }
175,075
I saw that Java 1.2 is also known as Java 2. Do "Java 1.x" and "Java x" (for example "Java 1.6" and "Java 6") refer to the same version of Java? And if yes, why the need for this duality?
Sun Microsystems had back then a bad habit of going through numerous naming changes for its products, and of using confusing names to start with in the first place. What's in a Name, or what does "Java" Mean? Originally the term "Java" was being used to describe indiscriminately: the language , the platform , and some others happened to refer to the JVM and the Java Class Lib as just Java as well. Internal and External Number With regard to version numbers, as others pointed out it was partly because of marketing, and partly because they simply felt like Java had made such progress that a major version upgrade was better suited (for Java 1.2 / Java 2, and when Java 1.5 / Java 5). So, what we call Java 2 was actually Java 1.2 to 1.4. Java 3 and Java 4 never existed, and we skipped directly to Java 5, as they tried to explain here (from this page on naming and versioning ). It was probably also done so as not to mess with their internal numbering used in their tracking and development systems. I tend to think of it as 1.x = language version and X = language and product version. It makes sense, as normal users do not need to worry about minor increments and updates, as they encourage automatic updates, and these shouldn't introduce any changes to the language or the bytecode. Note also that it gets weirder when you look at minor versions and update numbers. For instance, we had a Java 1.3.1 and later Java 1.4.1 and 1.4.2. Updates However, there were no 1.x.y ever since, it's always been 1.x.0_UPDATE: Java 5 had updates up to 1.5.0_22 , Java 6 is - so far - up to 1.6.0_37 , Java 7 is - so far - up to 1.7.0_09 . Note also that you will also see updates referred to with either of these naming conventions: like 1.5.0_22 or 1.7.0_04 (note the padding 0s on the update version, aligning on 2 digits), but also 1.5.0u22 or 1.7.0u4 (note the padding 0s are gone, yay!). And for extra confusion and to keep up with Sun Microsystems' tradition, Oracle now drops announcements like this. Product Names Note also that not only did they get into the habit of changing the numbering, but also the names of the product offerings. So, Java 2 had J2SE (Java 2 Standard Edition), J2EE (Java 2 Enterprise Edition) and J2ME (Java 2 Micro Edition). Since the introduction of Java 5, these conventions and acronyms have been dropped (though some have an incredibly long resilience, mostly thanks to dumb recruitment agencies who have no clue what they are talking about) in favor of Java SE, Java EE and Java ME, which are sometimes abbreviated - but shouldn't officially be - to JSE, JEE (fairly common) and JME. You can read this page for more information about the current versioning system used by Sun Microsystems and now Oracle since Java 1.3.1 . (For bonus points, try to look up the history of NetBeans now to see that it was also renamed a few times). They Don't Know Themselves They're also struggling to decide whether they should back-update existing references or not. For instance, this page about older releases uses the old numbering system for old releases, but the newer product names (Java SE 1.1, 1.2, 1.3 and 1.4 were not released with this naming convention). Maybe they're just screwing with us for fun. Who knows? What About the JVMS and JLS? They had fun with naming and numbering there too, of course. The Java Language Specification used to be released with a book "edition" versioning system, following product numbers. So: Java 1 had its "Java Language Specification", Java 2 had its "Java Language Specification, 2nd edition".
So far, so good. Except the "Java Language Specification" was later only re-edited when language changes were introduced in Java 5 (which kind of makes sense, you won't publish new book editions just for fun). As this was the 3rd time they were editing it, they called it (tada!) "Java Language Specification, 3rd edition". But obviously, people then started to wonder why 1.3 and 1.4 didn't have a language spec, or why Java 5 was matched with a 3rd edition of the language spec. So, Java 7 is now just calling it the "Java Language Specification, Java SE 7 Edition", though we often refer to this one as JLS7 for shorthand and to avoid RSI. They're Not the Only Ones... For additional fun, look at Microsoft Windows RT's naming fail, or Apple's naming and numbering madness with iPhones, iPads or even iPods.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/175075", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/4514/" ] }
175,309
We have been asked to add comments with start tags, end tags, description, solution, etc. for each change that we make to the code as part of fixing a bug / implementing a CR. My concern is, does this provide any added value? As it is, we have all the details in the version control history, which will help us to track each and every change. But my leads are insisting on having the comments as a "good" programming practice. One of their arguments is that when a CR has to be de-scoped/changed, it would be cumbersome if comments are not there. Considering that the changes would be largely in between code, would it really help to add comments for each and every change we make? Shouldn't we leave it to the version control?
Use the best tool for the job. Your version control system should be the best tool for recording when bugfixes and CRs are made: it automatically records the date and who made the change; it never forgets to add a message (if you've configured it to require commit messages); it never annotates the wrong line of code or accidentally deletes a comment. And if your version control system is already doing a better job than your comments, it's silly to duplicate work by adding comments. Readability of source code is paramount. A codebase that's cluttered with comments giving the full history of every bugfix and CR made is going to not be very readable at all. But don't skip comments completely: Good comments (not slavishly documenting every start / stop / description / solution of every bugfix and CR) enhance the readability of the code. For example, for a tricky or unclear bit of code that you add to fix a bug, a comment of the form // fix ISSUE#413 telling people where to find more information in your issue tracker is an excellent idea.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/175309", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/51206/" ] }
175,542
Recently I was asked: Why is NoSQL faster than SQL? I didn't agree with the premise of the question... it's just nonsense for me personally. I can't see any performance boost by using NoSQL instead of SQL. Maybe SQL over NoSQL, yes but not in that way. Am I missing something about NoSQL?
There are many NoSQL solutions around, each one with its own strengths and weaknesses, so the following must be taken with a grain of salt. But essentially, what many NoSQL databases do is rely on denormalization and try to optimize for the denormalized case. For instance, say you are reading a blog post together with its comments in a document-oriented database. Often, the comments will be saved together with the post itself. This means that it will be faster to retrieve all of them together, as they are stored in the same place and you do not have to perform a join. Of course, you can do the same in SQL, and denormalizing is a common practice when one needs performance. It is just that many NoSQL solutions are engineered from the start to be always used this way. You then get the usual tradeoffs: for instance, adding a comment in the above example will be slower because you have to save the whole document with it. And once you have denormalized, you have to take care of preserving data integrity in your application. Moreover, in many NoSQL solutions, it is impossible to do arbitrary joins, hence arbitrary queries. Some databases, like CouchDB, require you to think ahead of the queries you will need and prepare them inside the DB. All in all, it boils down to expecting a denormalized schema and optimizing reads for that situation, and this works well for data that is not highly relational and that requires much more reads than writes.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/175542", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/12033/" ] }
175,584
A client has asked me to do a redesign of their website, an ASP.NET Webforms application that was developed by another consultant. It seemed like a relatively straightforward job, but after looking at the code, it's clear that's not the case. This application was not written well. At all. It is extremely vulnerable to SQL injection attacks, business logic is spread throughout the entire application, there is a lot of duplication, and there is dead-end code that does nothing. On top of that, it keeps throwing exceptions that are being smothered, so the site appears to run smoothly. My job is to simply update the HTML and CSS, but much of the HTML is being generated in business logic and would be a nightmare to sort out. My estimate on the redesign is longer than the client was aiming for. They are asking why so long. How can I explain to my client just how bad this code is? In their mind, the application is running great and the redesign should be a quick one-off. It's my word against the previous consultant's. How can I give simple, concrete examples that a non-technical client will understand? Update Thanks for all the responses. The SQL injection attack demonstration makes sense and I will demo this in a test environment. That is just one part of many problems in this application. I was looking for ways to explain why other parts (such as HTML being generated in the data layer) would need to be replaced with better practices in order for the HTML and CSS update to take place. There are many good suggestions here which I'll piece together when I talk with my client.
Non-techies aren't idiots (for the most part). They can understand a technical argument if you keep it high-level enough. Pick a task you thought should be simple, and walk them through why it's not. I expected this change to be one word in one file. The most likely place to change it seemed to be here, but when I changed it there, it only worked in one place, and it broke these 7 other places. When I fixed one, it broke two more places, causing a domino effect, so a change I thought should have taken 10 minutes ended up taking 2 hours. That's just one example. There are a lot more unexpected 2 hour tasks in there.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/175584", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/72436/" ] }
175,586
I'm currently working in a geographically distributed team in a big company. Everybody is just focused on today's tasks and getting things done; however, this means sometimes things have to be done the quick way, and that causes problems... you know, same old, same old. I'm bumping into code with several smells such as: big functions, pointless utility functions/methods (essentially just to save writing a word), overcomplicated algorithms, extremely big files that should be broken down into different files/classes (1,500+ lines), etc. What would be the best way of improving code without making other developers feel bad/wrong about any proposed improvements?
Review and Annotate the Code DO review , preferably using a code review tool. (Otherwise e-mails, but... meh). DO justify your remarks and comments. DO NOT bitch about it. Fix the Code DO fix it : if they don't have time to do it themselves, after having discussed it with the author, or formally told some members about your desire to go about fixing some things. DO test for things you fix (preferably with unit and integration tests) DO justify yourself. DO prioritize: At first Start with a few minor things to get into the habit and show that you are careful; But then DO absolutely fix the big things first rather than hundreds of small things. DO NOT fix unclear code paths that you do not fully comprehend. DO NOT fix untested code paths that you may break unknowingly. DO NOT fix code paths blindly. Simple fixes are simple, but mistyping a symbol or having a brainfart and inverting a boolean condition happens all the time. Test your stuff. DO NOT bitch about it, again. DO NOT annoy others and preach self-righteously . DO NOT fix irrelevant or inconsequential problems (at first) . If it doesn't matter too much, you're likely to have better things to do. DO NOT sacrifice consistency . In essence, DO NOT become this annoying guy we all know who just changes whitespaces or brace styles based on a personal preference but without regard for the consistency with the rest of the codebase, and who even goes so far as to break builds while fixing minor code violations by introducing simple but deeply hidden bugs. Communicate DO contact the authors . DO discuss your changes with others (not necessarily authors). DO request review for your own changes from others; this: shows humility, invites others to jump onto the bandwagon, shows that this is a fluid process. DO establish rules and conventions with your co-workers to avoid this issue expanding and crippling the codebase over time. DO tackle the matter quickly. Ask DO ask. Rather than saying "I'm fixing this because X", ask directly "why did you do it this way, and how do you think we can improve it?" If they come up with something better, then start with that and work together on something even better. If they don't, then make a suggestion. If they come up with something better, or don't see why it was wrong, then explain and ask the second question again. Make it a Team Effort Your team is more likely to jump onto the bandwagon if they actually take part to this. If this is a large project with a somewhat aging codebase, it's likely that they also dislike the state of rot the code is in, but maybe never bothered to do something about it. If you have the power (or support of the ones who do), it can be a good idea to organize small sprints dedicated to enhancing code quality. Setting up code analyzers to detect code smells and monitoring their steady decrease while you refactor is also fairly motivational for a team. Of course, that might be a bit harder for remote teams, as it's a bit more difficult to convey enthusiasm about such things without actually working together. It pretty much all comes down to that. Semantics Also, don't confuse "politically correct" with "diplomatic" : The former is about beating around the bush to not offend people while saying something most likely to be negative. The latter is about tackling issues while not offending people because there might be some emotional context. They're very different things. Related Questions How Can I Tactfully Suggest Improvements to Others Badly Designed Code?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/175586", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/10869/" ] }
175,589
My company switched from Subversion to Git about three months ago. We had weeks of advance notice prior to the switch. Since I'd never used Git before (or any other DVCS), I read Pro Git and spent a little time spinning up my own repositories and playing around, so that when we switched I'd be able to keep working with minimal pain. Now I'm the 'Git guy' by default. With a couple of exceptions, most of my team still has no idea how Git works. For example, they still think of branches as complete copies of the source code, and even go so far as to clone the repo into multiple folders (one per branch). They generally look at Git as a scary black box. Given the fundamental nature of source control in our daily work (not to mention the ridiculous amount of power Git affords us), I'm of the opinion that any dev who doesn't achieve a certain level of proficiency with it is a liability . Should I expect my team to have at least some understanding of how Git works internally, and how to use it beyond the most basic pull/merge/push operations? Or am I just making something out of nothing?
Professionalism would naturally dictate that a developer become familiar with their team's standard tools, even if they are new and unfamiliar (or even unwanted). However, a few things in your post give me pause. We had weeks of advance notice prior to the switch. Weeks? Swapping out source control is a big deal. There should have been months of notice leading up to a change like that. With a couple of exceptions, most of my team still has no idea how Git works. So, your company switched to a source control system that few, if anyone, understood at the time? Unless there is some other context, it seems like the whole move was ill thought out (the move, not the choice--I'm a huge git fan).
{ "source": [ "https://softwareengineering.stackexchange.com/questions/175589", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/11211/" ] }
175,594
Possible Duplicate: Avoid having an initialization method I want to determine when to do non-trivial initialization of a class. I see two places to do initialization: in the constructor or in another method. I want to figure out when to use each. Choice 1: Constructor does initialization MyClass::MyClass(Data const& data) : m_data() { // does non-trivial initialization here } MyClass::~MyClass() { // cleans up here } Choice 2: Defer initialization to an initialize method MyClass::MyClass() : m_data() {} void MyClass::Initialize(Data const& data) { // does non-trivial initialization here } MyClass::~MyClass() { // cleans up here } So to try and remove any subjectivity I want to figure out which is better in a couple of situations: Class that encapsulates a resource (window/font/some sort of handle) Class that composes resources to do something (a control/domain object) Data structure classes (tree/list/etc.) [Anything else you can think of] Things to analyze: Performance Ease of use by other developers How error-prone/opportunities for bugs [Anything else you can think of]
Always use the constructor unless there is a good reason not to. It's "The C++ Way" (tm). Regarding your points to consider:

Constructors are always at least as efficient as having code outside in separate init() functions.
Constructors tend to be easier to use for other developers. Without looking at your source or docs, I would expect new YourClass(stuff) to work. Having to call yourClass->init(stuff) afterwards is not enforced by the compiler, and it's an easy slip-up to make.
As per number 2 - a lot of caveats about constructors are fleshed out by compilers for you, in terms of order of initialization etc. When you move things out of constructors you face the danger of reinventing the wheel, sometimes as a square.
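Not part of the original answer: a minimal Java analogue of the same principle (class names invented for illustration) - a constructor-initialized object can never exist in a half-built state, while a separate init() method has to be policed by hand in every other method.

// Report.java - fully initialized by its constructor; no unusable window of time.
public final class Report {
    private final String originalPath;
    private final int originalBytes;

    public Report(String originalPath, int originalBytes) {
        if (originalPath == null) {
            throw new IllegalArgumentException("originalPath must not be null");
        }
        this.originalPath = originalPath;
        this.originalBytes = originalBytes;
    }

    public String originalPath() { return originalPath; }
    public int originalBytes()   { return originalBytes; }
}

// TwoPhaseReport.java - the two-phase alternative: the compiler cannot force
// callers to remember init(), so every accessor has to check a flag.
class TwoPhaseReport {
    private String originalPath;
    private boolean initialized;

    void init(String originalPath) {
        this.originalPath = originalPath;
        this.initialized = true;
    }

    String originalPath() {
        if (!initialized) {
            throw new IllegalStateException("init() was never called");
        }
        return originalPath;
    }
}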
{ "source": [ "https://softwareengineering.stackexchange.com/questions/175594", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/69257/" ] }
175,655
In Python, I often hear that it is better to "beg forgiveness" (exception catching) instead of "ask permission" (type/condition checking). In regards to enforcing duck typing in Python, is this

try:
    x = foo.bar
except AttributeError:
    pass
else:
    do(x)

better or worse than

if hasattr(foo, "bar"):
    do(foo.bar)
else:
    pass

in terms of performance, readability, "pythonic", or some other important factor?
It really depends on how often you think the exception is going to be thrown. Both approaches are, in my opinion, equally valid, at least in terms of readability and pythonic-ness. But if 90% of your objects do not have the attribute bar you'll notice a distinct performance difference between the two approaches:

>>> import timeit
>>> def askforgiveness(foo=object()):
...     try:
...         x = foo.bar
...     except AttributeError:
...         pass
...
>>> def askpermission(foo=object()):
...     if hasattr(foo, 'bar'):
...         x = foo.bar
...
>>> timeit.timeit('testfunc()', 'from __main__ import askforgiveness as testfunc')
2.9459929466247559
>>> timeit.timeit('testfunc()', 'from __main__ import askpermission as testfunc')
1.0396890640258789

But if 90% of your objects do have the attribute, the tables have been turned:

>>> class Foo(object):
...     bar = None
...
>>> foo = Foo()
>>> timeit.timeit('testfunc(foo)', 'from __main__ import askforgiveness as testfunc, foo')
0.31336188316345215
>>> timeit.timeit('testfunc(foo)', 'from __main__ import askpermission as testfunc, foo')
0.4864199161529541

So, from a performance point of view, you need to pick the approach that works best for your circumstances. In the end, some strategic use of the timeit module may be the most Pythonic thing you can do.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/175655", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/69944/" ] }
175,844
I have a codebase where the programmer tended to wrap things up in areas that don't make sense. For example, given an Error log we have you can log via ErrorLog.Log(ex, "friendly message"); He added various other means to accomplish the exact same task. E.G. SomeClass.Log(ex, "friendly message"); Which simply turns around and calls the first method. This adds levels of complexity with no added benefit. Is there an anti-pattern to describe this?
It is only worthwhile to call some bad coding habit an Antipattern if it is reasonably widespread. The rest we just call "rubbish code" ... If I was to suggest a name for this particular bad habit, it would be "Obsessive Abstraction Disorder" :-)
{ "source": [ "https://softwareengineering.stackexchange.com/questions/175844", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/11107/" ] }
175,891
On one hand there is a piece of advice that says "Build one to throw away". Only after finishing a software system and seeing the end product do we realize what went wrong in the design phase and understand how we should have really done it. On the other hand there is the "second-system effect", which says that the second system of the same kind that is designed is usually worse than the first one; there are many features that did not fit in the first project and were pushed into the second version, usually leading to an overly complex and over-engineered design. Isn't there some contradiction between these principles? What is the correct view of these problems, and where is the border between the two? I believe that these "good practices" were first promoted in the seminal book The Mythical Man-Month by Fred Brooks. I know that some of these issues are solved by Agile methodologies, but deep down the problem remains: the principles still stand. For example, we would not make important design changes 3 sprints before going live.
Build one to throw away comes from "not knowing what you don't know" at the start, so you learn as you go what you should have done at the start. Second System Effect comes from "now knowing what you did not know, however not knowing what you still don't know" i.e. Second system effect comes from trying to build a bigger, shinier, more complex system than the first one, without the knowledge needed at the start - sounds a lot like what happens with the first system. Therefore second system effect is not contradiction. Building a second system to the same functionality as the first is (to my knowledge) never done. The second system always has to be "better", therefore more complex, therefore substantially similar problems to the first system are expected - that should be thrown away. So build one to throw away, throw it way and build it again with no scope enlargement, and you won't have a second system problem. (This tends to be done more often on planets with purple skies, pink seas, and flying pigs.)
{ "source": [ "https://softwareengineering.stackexchange.com/questions/175891", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/57792/" ] }
175,950
I have seen various arguments against the DAO being called from the Controller class directly, and also against the DAO being called from the Model class. In fact, I personally feel that if we are following the MVC pattern, the controller should not be coupled with the DAO; rather, the Model class should invoke the DAO from within, and the controller should invoke the model class. That's because we can then decouple the model class from the web application and expose its functionality in various ways, for example to a REST service. If we write the DAO invocation in the controller, it would not be possible for a REST service to reuse that functionality, right? I have summarized both approaches below.

Approach #1

public class CustomerController extends HttpServlet {
    protected void doPost(....) {
        Customer customer = new Customer("xxxxx", "23", 1);
        new CustomerDAO().save(customer);
    }
}

Approach #2

public class CustomerController extends HttpServlet {
    protected void doPost(....) {
        Customer customer = new Customer("xxxxx", "23", 1);
        customer.save(customer);
    }
}

public class Customer {
    ...........
    public void save(Customer customer) {
        new CustomerDAO().save(customer);
    }
}

Note - here is a definition of Model:

Model: The model manages the behavior and data of the application domain, responds to requests for information about its state (usually from the view), and responds to instructions to change state (usually from the controller). In event-driven systems, the model notifies observers (usually views) when the information changes so that they can react.

I would need an expert opinion on this, because I find many using #1 or #2. So which one is it?
In my opinion, you have to distinguish between the MVC pattern and the 3-tier architecture. To sum up:

3-tier architecture:
- data: persisted data;
- service: logical part of the application;
- presentation: hmi, webservice...

The MVC pattern takes place in the presentation tier of the above architecture (for a webapp):
- data: ...;
- service: ...;
- presentation:
  - controller: intercepts the HTTP request and returns the HTTP response;
  - model: stores data to be displayed/treated;
  - view: organises output/display.

Life cycle of a typical HTTP request:
1. The user sends the HTTP request;
2. The controller intercepts it;
3. The controller calls the appropriate service;
4. The service calls the appropriate DAO, which returns some persisted data (for example);
5. The service treats the data, and returns data to the controller;
6. The controller stores the data in the appropriate model and calls the appropriate view;
7. The view gets instantiated with the model's data, and gets returned as the HTTP response.
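Not part of the original answer: a minimal Java sketch of that life cycle, with invented class and method names and no error handling. It shows the controller delegating to a service, which delegates to a DAO, so that neither the controller nor the model touches persistence directly.

// Customer.java - the model object handed to the view
public class Customer {
    private final long id;
    private final String name;

    public Customer(long id, String name) {
        this.id = id;
        this.name = name;
    }

    public long getId()     { return id; }
    public String getName() { return name; }
}

// CustomerDao.java - data tier: the only place that knows about persistence
public class CustomerDao {
    public Customer findById(long id) {
        // talk to the database here (JDBC, JPA, ...) and map the row to a Customer
        return new Customer(id, "xxxxx");
    }
}

// CustomerService.java - service tier: application/domain logic
public class CustomerService {
    private final CustomerDao dao = new CustomerDao();

    public Customer loadCustomer(long id) {
        Customer customer = dao.findById(id);
        // apply whatever service logic is needed before handing the result back
        return customer;
    }
}

// CustomerController.java - presentation tier: intercepts the request, fills the model, picks the view
public class CustomerController extends javax.servlet.http.HttpServlet {
    private final CustomerService service = new CustomerService();

    @Override
    protected void doGet(javax.servlet.http.HttpServletRequest req,
                         javax.servlet.http.HttpServletResponse resp) throws java.io.IOException {
        Customer customer = service.loadCustomer(Long.parseLong(req.getParameter("id")));
        req.setAttribute("customer", customer); // the "model" the view will render
        // forward to a view (a JSP or template) that displays the customer
    }
}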
{ "source": [ "https://softwareengineering.stackexchange.com/questions/175950", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/-1/" ] }
175,958
I'm currently working on a project that makes connections between different banks, which send us information to which the project replies. A part of that project configures the different protocols that are used (not every bank uses the same protocol); this runs on a separate server. These processes all have unique IDs which are stored in a database. But to save time and money on configurations and new processes, we want to make a generic protocol that banks can use. Because of PCI requirements we have to make a separate process for every bank we connect to. But the generic processes have only one unique identifier, and therefore we cannot keep them apart. Giving every copy of that process a different identifier is, as I see it, impossible because they run entirely separately. So how do I keep my generic process unique?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/175958", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/72731/" ] }
176,078
As my current Java projects grow bigger and bigger, I feel a likewise growing need to insert debug output at several points in my code. To enable or disable this feature appropriately, depending on the opening or closure of the test sessions, I usually put a private static final boolean DEBUG = false at the beginning of the classes my tests are inspecting, and trivially use it this way (for example):

public class MyClass {
    private static final boolean DEBUG = false;

    ... some code ...

    public void myMethod(String s) {
        if (DEBUG) {
            System.out.println(s);
        }
    }
}

and the like. But that doesn't bliss me out, because of course it works, but there could be too many classes in which to set DEBUG to true if you are not staring at just a couple of them. Conversely, I (like - I think - many others) wouldn't love to put the whole application in debug mode, as the amount of text being output could be overwhelming. So, is there a correct way to architecturally handle such a situation, or is the most correct way to use the DEBUG class member?
You want to look at a logging framework, and maybe at a logging facade framework. There are multiple logging frameworks out there, often with overlapping functionalities, so much so that over time many evolved to rely on a common API, or have come to be used through a facade framework to abstract their use and allow them to be swapped in place if needed.

Frameworks

Some Logging Frameworks:
Java Logging Framework (part of the JDK),
Apache Log4J (a bit old, but still going strong and actively maintained),
LogBack (created to provide a more modern approach than Log4J, by one of the creators of Log4J).

Some Logging Facades:
SLF4J (by the creator of LogBack, but has adapters for other frameworks),
Apache Commons Logging (but a bit dated now).

Usage

Basic Example

Most of these frameworks would allow you to write something of the form (here using slf4j-api and logback-core):

package chapters.introduction;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// copied from: http://www.slf4j.org/manual.html
public class HelloWorld {

    public static void main(String[] args) {
        final Logger logger = LoggerFactory.getLogger(HelloWorld.class);

        logger.debug("Hello world, I'm a DEBUG level message");
        logger.info("Hello world, I'm an INFO level message");
        logger.warn("Hello world, I'm a WARNING level message");
        logger.error("Hello world, I'm an ERROR level message");
    }
}

Note the use of the current class to create a dedicated logger, which would allow SLF4J/LogBack to format the output and indicate where the logging message came from.

As noted in the SLF4J manual, a typical usage pattern in a class is usually:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class MyClass {

    final Logger logger = LoggerFactory.getLogger(MyClass.class);

    public void doSomething() {
        // some code here
        logger.debug("this is useful");

        if (isSomeConditionTrue()) {
            logger.info("I entered by conditional block!");
        }
    }
}

But in fact, it's even more common to declare the logger with the form:

private static final Logger LOGGER = LoggerFactory.getLogger(MyClass.class);

This allows the logger to be used from within static methods as well, and it is shared between all instances of the class. This is quite likely to be your preferred form. However, as noted by Brendan Long in comments, you want to be sure to understand the implications and decide accordingly (this applies to all logging frameworks following these idioms).

There are other ways of instantiating loggers, for instance by using a string parameter to create a named logger:

Logger logger = LoggerFactory.getLogger("MyModuleName");

Debug Levels

Debug levels vary from one framework to another, but the common ones are (in order of criticality, from benign to bat-shit bad, and from probably very common to hopefully very rare):

TRACE: Very detailed information. Should be written to logs only. Used only to track the program's flow at checkpoints.
DEBUG: Detailed information. Should be written to logs only.
INFO: Notable runtime events. Should be immediately visible on a console, so use sparingly.
WARNING: Runtime oddities and recoverable errors.
ERROR: Other runtime errors or unexpected conditions.
FATAL: Severe errors causing premature termination.

Blocks and Guards

Now, say you have a code section where you are about to write a number of debug statements. This could quickly impact your performance, both because of the impact of the logging itself and of the generation of any parameters you might be passing to the logging method.
To avoid this sort of issue, you often want to write something of the form:

if (LOGGER.isDebugEnabled()) {
    // lots of debug logging here, or even code that
    // is only used in a debugging context.
    LOGGER.debug(" result: " + heavyComputation());
}

If you hadn't used this guard before your block of debug statements, even though the messages may not be output (if, for instance, your logger is currently configured to print only things above the INFO level), the heavyComputation() method would still have been executed.

Configuration

Configuration is quite dependent on your logging framework, but they offer mostly the same techniques for this:

programmatic configuration (at runtime, via an API - allows for runtime changes),
static declarative configuration (at start-time, usually via an XML or properties file - likely to be what you need at first).

They also offer mostly the same capabilities:

configuration of the output message's format (timestamps, markers, etc...),
configuration of the output levels,
configuration of fine-grained filters (for instance to include/exclude packages or classes),
configuration of appenders to determine where to log (to console, to file, to a web-service...) and possibly what to do with older logs (for instance, with auto-rolling files).

Here's a common example of a declarative configuration, using a logback.xml file:

<configuration>

  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <!-- encoders are assigned the type
         ch.qos.logback.classic.encoder.PatternLayoutEncoder by default -->
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>

  <root level="debug">
    <appender-ref ref="STDOUT" />
  </root>
</configuration>

As mentioned, this depends on your framework and there may be other alternatives (for instance, LogBack also allows for a Groovy script to be used). The XML configuration format may also vary from one implementation to another.

For more configuration examples, please refer (amongst others) to:

the LogBack manual on configuration,
the Log4J 2 manual on configuration.

Some Historical Fun

Please note that Log4J is seeing a major update at the moment, transitioning from version 1.x to 2.x. You may want to have a look at both for more historical fun or confusion, and if you pick Log4J you should probably prefer to go with the 2.x version.

It's worth noting, as Mike Partridge mentioned in comments, that LogBack was created by a former Log4J team member. Log4J, in turn, was created to address shortcomings of the Java Logging framework. And the upcoming major Log4J 2.x version is itself now integrating a few features taken from LogBack.

Recommendation

Bottom line, stay decoupled as much as you can, play around with a few, and see what works best for you. In the end it's just a logging framework. Except if you have a very specific reason, apart from ease of use and personal preference, any of these would do rather OK, so there's no point being too hung up over it. Most of them can also be extended to your needs. Still, if I had to pick a combination today, I'd go with LogBack + SLF4J. But if you had asked me a few years earlier I'd have recommended Log4J with Apache Commons Logging, so keep an eye on your dependencies and evolve with them.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/176078", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/4514/" ] }
176,113
I work at a large company where technical people fall roughly in one of these categories: A developer on a scrum team who develops for a single product and maybe works with other teams that are closely related to the product. An architect who is more of a consultant on multiple teams (5-6) and tries to recognize commonalities between team efforts that could be abstracted into libraries (architects do not write the library code, however). This architect also attends many meetings with management and attempts to set technical direction. In my company the architect role is where most technical people move into as the next step in their career. My questions are: Do most companies work such a way that their highest paid technical people are far removed from writing code? Is this a natural tendency for a developer's career? Can a developer have it all (code AND set direction?)
"Do most companies work such a way that their highest paid technical people are far removed from writing code?"

Most bad companies. There is a natural trend for more responsibility to involve less code writing and more focus on other aspects of software development. That said, it's very common for technical folks to lose touch with what is common/best/possible if they don't spend time actually coding. This has a disastrous effect on the company.

"Is this a natural tendency for a developer's career?"

Yes. In the end, a person can help the product a lot more by mentoring, coordinating, designing, knowing the problem domain and doing other software development tasks than they can by writing code. And in all honesty, having good leadership or design skills is far more rare (read: valuable) than code-writing skill.

"Can a developer have it all (code AND set direction)?"

Absolutely. Though you need to realise that the amount of coding will go down. You just can't do those other valuable things well if you spend 80% of the day heads down in an IDE.

The other option that happens is that of the 'principal engineer', for lack of a better term. Some developers are very specialized. I worked with someone, for example, who wrote gigabit ethernet drivers for Linux. We needed him to do that sort of work for us, and since only a handful of people could do that job well, he made piles of cash in addition to writing code as the majority of his day. Most companies don't need that sort of specialization though. They're just plumbing data together or making yet another website/mobile app.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/176113", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/16992/" ] }
176,153
I am a good programmer, or so I thought before. I have always loved to program, and I want to learn many things about programming to make me a better programmer. I studied programming for 1 year and now I have been working as a programmer for almost 2 years. So in short, I have almost 3 years of programming experience. Our team is composed of 5 programmers; 4 of us are new, and 1 has more than 3 years of experience. We've been working on a program for almost a year now, nobody has ever reviewed my code, and I was given a page to work with. We never had a code review, and we are all new, so we don't know what clean code looks like. I guess programmers learn by themselves? We deployed our program without thorough testing. Now things are tight, and we need approval and a code review before we make changes to the code. For the first time, someone reviewed my code, and he said it is a mess. I feel so sad and hurt. I really love programming, and hearing someone say something like that really hurts me. I really want to improve myself. But it seems like I'm not a genius programmer like in the movies. Can you give me advice on how to be better? Have you ever experienced someone criticizing your code and felt really hurt? What do you do in those situations?
The truth is that in 2 years, when you look at your current code, you will probably agree that it was a mess. Learning programming is a never-ending process, and there will always be someone who is better at it than you. So if the person who said that your code is a mess is not just being mean, and it is not another case of the "I would do it better" disease common among programmers, you should ask him/her what exactly is wrong with your code and how you can improve it.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/176153", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/17244/" ] }
176,435
I read that Facebook started out in PHP, and then to gain speed, they now compile PHP as C++ code. If that's the case why don't they: Just program in c++? Surely there must be SOME errors/bugs when hitting a magic compiler button that ports PHP to c++ code , right? If this impressive converter works so nicely, why stick to PHP at all? Why not use something like Ruby or Python? Note -- I picked these two at random, but mostly because nearly everyone says coding in those languages is a "joy". So why not develop in a super great language and then hit the magic c++ compile button?
They don't. Not anymore, at least. Turns out doing it that way causes too many problems, including deployment headaches and nullifying one of the prime advantages of using a scripting language in the first place--being able to change scripts without needing to recompile--so they revamped the HipHop system into a VM architecture with a transparent JIT phase, and deprecated the C++ compiler. Interestingly enough, apparently doing it this way is also about twice as fast (as in performant) as the original C++ trans-compilation approach.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/176435", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/72245/" ] }
176,509
I often hear the term that language A is written in language B. For example, PHP has been written in C, and C# is written in C++. Can someone please explain what that means and whether it is even correct? Does that have anything to do with the compiler or interpreter used by the language? In addition, what are the factors on which the choice of the implementing language is based?
Most programming languages fall in two categories: interpreted, and compiled languages.

A compiled language is translated by a compiler into machine code, the language the CPU directly executes step by step. An interpreted language, on the other hand, uses an intermediary, an interpreter, to run the language code. The interpreter is itself another program, usually itself compiled to machine code.

PHP is an interpreted language. You need a separate program to run PHP code; the computer does not run the program directly. That separate program, the PHP interpreter, is itself written in C.

C# is a compiled language, but it is not compiled to machine code. Instead, it is compiled to a specialist language, byte code, to be run on a virtual machine. Java is another example of such a setup. You could see it as a hybrid between compilation and interpretation, where the virtual machine is an interpreter. The virtual machine for C# (the CLI, or Common Language Infrastructure) is written in C++.

Other examples are:

Python: The Python interpreter compiles Python code to Python bytecode, then interprets the bytecode. The interpreter itself is written in C. New implementations have since been added, including one that compiles Python to run on the same CLI used for C#, called IronPython, and one that runs on the Java virtual machine, Jython. To complete the circle, there is a Python version written in (a subset of) Python, PyPy.

Ruby: Ruby started out as a pure interpreted language, but the most recent version switched to using bytecode. For Ruby, too, there is a project that compiles to the CLI, named IronRuby, and one for the Java VM, JRuby.
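Not part of the original answer: a toy Java sketch of what "compile to bytecode, then interpret" means in miniature. The opcodes and the little program are invented for illustration; real VMs like the JVM or the CLI are vastly more sophisticated.

import java.util.ArrayDeque;
import java.util.Deque;

// A miniature "virtual machine": the program is an array of bytecode-like
// instructions, and the interpreter is just a loop that executes them.
public class ToyVm {
    // Invented opcodes.
    static final int PUSH = 0, ADD = 1, MUL = 2, PRINT = 3;

    public static void main(String[] args) {
        // "Bytecode" for: print (2 + 3) * 4
        int[] program = { PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, PRINT };
        run(program); // prints 20
    }

    static void run(int[] code) {
        Deque<Integer> stack = new ArrayDeque<>();
        for (int pc = 0; pc < code.length; pc++) {
            switch (code[pc]) {
                case PUSH:  stack.push(code[++pc]); break;
                case ADD:   stack.push(stack.pop() + stack.pop()); break;
                case MUL:   stack.push(stack.pop() * stack.pop()); break;
                case PRINT: System.out.println(stack.pop()); break;
                default:    throw new IllegalStateException("unknown opcode");
            }
        }
    }
}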
{ "source": [ "https://softwareengineering.stackexchange.com/questions/176509", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/50440/" ] }
176,523
This question is inspired by the debate in the comments on this Stack Overflow question . The Google Closure Compiler documentation states the following (emphasis added): The Closure Compiler is a tool for making JavaScript download and run faster. It is a true compiler for JavaScript. Instead of compiling from a source language to machine code, it compiles from JavaScript to better JavaScript. However, Wikipedia gives the following definition of a "compiler": A compiler is a computer program (or set of programs) that transforms source code written in a programming language (the source language) into another computer language ... A language rewriter is usually a program that translates the form of expressions without a change of language. Based on that, I would say that Google Closure is not a compiler. But the fact that Google explicitly state that it is in fact a "true compiler" makes me wonder if there's more to it. Is Google Closure really a JavaScript compiler?
The Closure Compiler is a minifier , an optimiser and a validator all-in-one. That kind of puts it in its own category, because you're correct that a compiler should at least take something that won't run in its current form and turn it into something that will (take TypeScript for an ECMAScript-based example). But do you blame Google for stretching the terminology? What else were they going to call it? Google Minifier? No, it's more than that, and there are hundreds of those out there. Google Optimiser? It's way more than that. Google Validator? No, it's way more than that too. So the choice is Call it Google Closure Foogle and introduce a whole new otherwise-meaningless word into the lexicon. Call it Google Closure Minoptivalidator, which is clearer in intent but harder to remember. Call it Google Closure Compiler, which is pretty close to the truth. It does everything you would expect a compiler to do, with only a semantic difference. And, in the end, all words are defined by their usage, to some extent. So if Google can convince people to call this a compiler, the definition of compiler changes slightly. Certainly not in any way that will cause a problem. Or, to come back to the earlier example, can you find anything significant about TypeScript that allows it to be called a "true compiler", while Google Closure Compiler should be restricted to "almost a compiler"?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/176523", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/73131/" ] }
176,582
This has become a large frustration with the codebase I'm currently working in; many of our variable names are short and undescriptive. I'm the only developer left on the project, and there isn't documentation as to what most of them do, so I have to spend extra time tracking down what they represent. For example, I was reading over some code that updates the definition of an optical surface. The variables set at the start were as follows:

double dR, dCV, dK, dDin, dDout, dRin, dRout;

dR = Convert.ToDouble(_tblAsphere.Rows[0].ItemArray.GetValue(1));
dCV = Convert.ToDouble(_tblAsphere.Rows[1].ItemArray.GetValue(1));
... and so on

Maybe it's just me, but it told me essentially nothing about what they represented, which made understanding the code further down difficult. All I knew was that each was a variable parsed out of a specific row of a specific table, somewhere. After some searching, I found out what they meant:

dR = radius
dCV = curvature
dK = conic constant
dDin = inner aperture
dDout = outer aperture
dRin = inner radius
dRout = outer radius

I renamed them to essentially what I have up there. It lengthens some lines, but I feel like that's a fair trade-off. This kind of naming scheme is used throughout a lot of the code, however. I'm not sure if it's an artifact from developers who learned by working with older systems, or if there's a deeper reason behind it. Is there a good reason to name variables this way, or am I justified in updating them to more descriptive names as I come across them?
It appears that these variable names are based on the abbreviations you'd expect to find in a physics textbook working various optics problems. This is one of the situations where short variable names are often preferable to longer variable names. If you have physicists (or people that are accustomed to working the equations out by hand) that are accustomed to using common abbreviations like Rin, Rout, etc. the code will be much clearer with those abbreviations than it would be with longer variable names. It also makes it much easier to compare formulas from papers and textbooks with code to make sure that the code is actually doing the computations properly. Anyone that is familiar with optics will immediately recognize something like Rin as the inner radius (in a physics paper, the in would be rendered as a subscript), Rout as the outer radius, etc. Although they would almost certainly be able to mentally translate something like innerRadius to the more familiar nomenclature, doing so would make the code less clear to that person. It would make it more difficult to spot cases where a familiar formula had been coded incorrectly and it would make it more difficult to translate equations in code to and from the equations they would find in a paper or a textbook. If you are the only person that ever looks at this code, you never need to translate between the code and a standard optics equation, and it is unlikely that a physicist is ever going to need to look at the code in the future perhaps it does make sense to refactor because the benefit of the abbreviations no longer outweighs the cost. If this was new development, however, it would almost certainly make sense to use the same abbreviations in the code that you would find in the literature.
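Not part of the original answer: a tiny Java illustration of the point. The method is invented; the formula is just the area of an annular aperture, A = pi * (Rout^2 - Rin^2), written once with the textbook abbreviations and once with long names.

// Mirrors the formula as it would appear in an optics text: A = PI * (Rout^2 - Rin^2).
static double annularApertureArea(double rIn, double rOut) {
    return Math.PI * (rOut * rOut - rIn * rIn);
}

// The same computation with long names; arguably harder to check line-by-line
// against the formula in the paper or textbook.
static double annularApertureAreaVerbose(double innerRadius, double outerRadius) {
    return Math.PI * (outerRadius * outerRadius - innerRadius * innerRadius);
}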
{ "source": [ "https://softwareengineering.stackexchange.com/questions/176582", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/53697/" ] }
176,601
I've been building an android game in my spare time. It's using the libgdx library so quite a bit of the heavy lifting is done for me. While developing, I carelessly selected datatypes for some procedures. I used a hashtable because I wanted something close to an associative array. Human readable key values. In other places to achieve similar things, I use a vector. I know libgdx has vector2 and vector3 classes, but I've never used them. When I come across weird problems and search Stack Overflow for help, I see a lot of people just reaming the questions that use a certain datatype when another one is technically "proper." Like using an ArrayList because it does not require defined bounds versus re-defining an int[] with new known boundaries. Or even something trivial like this: for(int i = 0; i < items.length; i ++) { // do something } I know it evaluates item.length on every iteration. However, I also know items will never be more than 15 to 20 items. So should I care if I evaluate items.length on every iteration? I ran some tests to see how the app performs using the method I just described versus the proper, follow the tutorial and use the exact data types suggested by the community. The results: Same thing. Average 45 fps. I opened every app on the phone and galaxy tab. No difference. So I guess my question to you is this: Is there a threshold when it no longer matters to be proper? Is it ok to say - "so long as it gets the job done, I don't care?"
You write a program to solve a problem. That problem is accompanied by a specific set of requirements for solving it. If those requirements are met, the problem is solved and the objective is achieved. That's it. Now, the reason that best practices are observed is because some requirements have to do with maintainability, testability, performance guarantees and so forth. Consequently, you have those pesky folks like me who require things like proper coding style. It doesn't take that much more effort to cross your T's and dot your I's, and it is a gesture of respect to those who have to read your code later and figure out what it does. For large systems, this kind of restraint and discipline is essential, because you have to play nice with others to get it all to work, and you have to minimize technical debt so that the project doesn't collapse under its own weight. At the opposite end of the spectrum are those one-off utilities that you write to solve a specific problem right now, utilities that you'll never use again. In those cases, style and best practices are completely irrelevant; you hack the thing together, run it, and get on with the next thing. So, as with so many things in software development, it depends.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/176601", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/73188/" ] }
176,639
I always thought that the business logic has to be in the controller and that the controller, since it is the 'middle' part, stays static and that the model/view have to be capsuled via interfaces. That way you could change the business logic without affecting anything else, program multiple Models (one for each database/type of storage) and a dozens of views (for different platforms for example). Now I read in this question that you should always put the business logic into the model and that the controller is deeply connected with the view. To me, that doesn't really make sense and implies that each time I want to have the means of supporting another database/type of storage I've to rewrite my whole model including the business logic. And if I want another view, I've to rewrite both the view and the controller. May someone explain why that is or if I went wrong somewhere?
ElYusubov's answer mostly nails it: domain logic should go into the model and application logic into the controller. Two clarifications:

1. The term business logic is rather useless here, because it is ambiguous. Business logic is an umbrella term for all logic that business-people care about, separating it from mere technicalities like how to store stuff in a database or how to render it on a screen. Both domain logic ("a valid email address looks like...") and workflows/business processes ("when a user signs up, ask for his/her email address") are considered business logic, with the former clearly belonging in the model and the latter being application logic that goes in the controller.

2. MVC is a pattern for putting stuff on a screen and allowing the user to interact with it; it does not specify storage at all. Most MVC frameworks are full-stack frameworks that go beyond mere MVC and do help you with storing your data, and because the data that should be stored are usually to be found in the model, these frameworks give you convenient ways of storing your model data in a database, but that has nothing to do with MVC. Ideally, models should be persistence-agnostic, and switching to a different type of storage should not affect model code at all. Full-fledged architectures have a persistence layer to handle this.
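Not part of the original answer: a hypothetical Java sketch (invented names) of that split. The domain rule for a valid email lives in the model, the sign-up workflow lives in the controller, and persistence hides behind an interface so the model stays persistence-agnostic.

// EmailAddress.java - model: domain logic only, no knowledge of HTTP or storage.
public class EmailAddress {
    private final String value;

    public EmailAddress(String value) {
        // Domain rule: a deliberately simple validity check, just for illustration.
        if (value == null || !value.contains("@")) {
            throw new IllegalArgumentException("not a valid email address: " + value);
        }
        this.value = value;
    }

    public String value() { return value; }
}

// UserRepository.java - hypothetical persistence abstraction.
interface UserRepository {
    void save(EmailAddress email);
}

// SignUpController.java - controller: application logic / workflow,
// delegating the domain rule to the model and storage to the repository.
public class SignUpController {
    private final UserRepository users;

    public SignUpController(UserRepository users) {
        this.users = users;
    }

    public void signUp(String emailInput) {
        EmailAddress email = new EmailAddress(emailInput); // domain validation happens in the model
        users.save(email);                                 // persistence handled behind an interface
        // ...then pick a view, send a confirmation, etc.
    }
}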
{ "source": [ "https://softwareengineering.stackexchange.com/questions/176639", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/60669/" ] }
176,692
After 10+ years of java/c# programming, I find myself creating either: abstract classes : contract not meant to be instantiated as-is. final/sealed classes : implementation not meant to serve as base class to something else. I can't think of any situation where a simple "class" (i.e. neither abstract nor final/sealed) would be "wise programming". Why should a class be anything other than "abstract" or "final/sealed" ? EDIT This great article explains my concerns far better than I can.
Ironically, I find the opposite: the use of abstract classes is the exception rather than the rule and I tend to frown on final/sealed classes. Interfaces are a more typical design-by-contract mechanism because you do not specify any internals--you are not worried about them. It allows every implementation of that contract to be independent. This is key in many domains. For example, if you were building an ORM it would be very important that you can pass a query to the database in a uniform way, but the implementations can be quite different. If you use abstract classes for this purpose, you end up hard-wiring in components that may or may not apply to all implementations. For final/sealed classes, the only excuse I can ever see for using them is when it is actually dangerous to allow overriding--maybe an encryption algorithm or something. Other than that, you never know when you may wish to extend the functionality for local reasons. Sealing a class restricts your options for gains that are non-existent in most scenarios. It is far more flexible to write your classes in a way that they can be extended later down the line. This latter view has been cemented for me by working with 3rd party components that sealed classes thereby preventing some integration that would have made life a lot easier.
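Not part of the original answer: a tiny hypothetical Java sketch (invented class names) of the style described above - contracts as interfaces, implementations left open for extension, and final reserved for the rare case where overriding is genuinely dangerous.

// QueryExecutor.java - the contract callers depend on.
public interface QueryExecutor {
    String execute(String query);
}

// LoggingQueryExecutor.java - a plain class, neither abstract nor final:
// usable as-is, and still extensible if some local integration needs to tweak it.
public class LoggingQueryExecutor implements QueryExecutor {
    @Override
    public String execute(String query) {
        System.out.println("executing: " + query);
        return "result of: " + query;
    }
}

// SigningQueryExecutor.java - sealed, because overriding the signing step
// could silently weaken security (the one case the answer concedes).
public final class SigningQueryExecutor implements QueryExecutor {
    @Override
    public String execute(String query) {
        String signed = "signed(" + query + ")"; // placeholder for real crypto
        return "result of: " + signed;
    }
}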
{ "source": [ "https://softwareengineering.stackexchange.com/questions/176692", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/66892/" ] }
176,835
I'm never sure when a project is far enough along to first commit to source control. I tend to put off committing until the project is 'framework-complete,' and I primarily commit features from then on. (I haven't done any personal projects large enough to have a core framework too big for this.) I have a feeling this isn't best practice, though I'm not sure what could go wrong. Let's say, for example, I have a project which consists of a single code file. It will take about 10 lines of boilerplate code, and 100 lines to get the project working with extremely basic functionality (1 or 2 features). Should I first check in: The empty file? The boilerplate code? The first features? At some other point? Also, what are the reasons to check in at a specific point?
You should commit as soon as you have a sensible "unit" complete. What is a unit? It depends on what you're doing; if you're creating a Visual Studio project, for example, commit the solution right after its creation, even if it doesn't have anything in it. From there on, keep committing as often as possible, but still commit only completed "units" (e.g. classes, configurations, etc); doing so will make your life easier if something goes wrong (you can revert a small set of changes) and will reduce the likelihood of conflicts.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/176835", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/56016/" ] }
176,858
2¹⁶-1 & 2⁵ = 2⁵ (or? obviously ?) A developer asked me today what is bitwise 65535 & 32 i.e. 2¹⁶-1 & 2⁵ = ? I thought at first spontaneously 32 but it seemed to easy whereupon I thought for several minutes and then answered 32. 32 seems to have been the correct answer but how? 65535=2¹⁶-1=1111111111111111 (but it doesn't seem right since this binary number all ones should be -1(?)), 32 = 100000 but I could not convert that in my head whereupon I anyway answered 32 since I had to answer something. Is the answer 32 in fact trivial? Is in the same way 2¹⁶-1 & 2⁵-1 =31? Why did the developer ask me about exactly 65535? Binary what I was asked to evaluate was 1111111111111111 & 100000 but I don't understand why 1111111111111111 is not -1. Shouldn't it be -1? Is 65535 a number that gives overflow and how do I know that?
The number is treated as an unsigned integer in this case, which means that having all bits set does not produce -1 (if it were signed then yes, you would be correct). So all 16 bits set will give you 65535.

Interestingly enough, signedness isn't a factor when doing logical bit operations. Bits are themselves not signed, as they are the lowest-level component in a computer. It is the CPU operation that specifies whether the bits in, for example, a register are treated as signed or unsigned. Negative numbers are produced by setting the most significant bit (MSB) to 1 IF the number is treated as signed. (The byte order in which the value is stored in memory depends on the CPU architecture, i.e. big-endian / little-endian, but that does not change which bit acts as the sign bit.)
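Not part of the original answer: a quick Java check of the arithmetic, using Integer.toBinaryString to make the bit patterns visible.

public class BitwiseCheck {
    public static void main(String[] args) {
        int allOnes16 = 65535; // 2^16 - 1 -> 1111111111111111
        int bit5      = 32;    // 2^5      -> 100000

        System.out.println(Integer.toBinaryString(allOnes16)); // 1111111111111111
        System.out.println(Integer.toBinaryString(bit5));      // 100000

        System.out.println(allOnes16 & bit5);        // 32: the single set bit of 32 survives the AND
        System.out.println(allOnes16 & (bit5 - 1));  // 31: 2^16-1 & 2^5-1
        System.out.println((short) 0xFFFF);          // -1: the same 16 bits, interpreted as a signed short
    }
}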
{ "source": [ "https://softwareengineering.stackexchange.com/questions/176858", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/12893/" ] }
176,876
In school, I've been told many times to stop using public for my variables. I haven't asked why yet. This question: Are Java's public fields just a tragic historical design flaw at this point? seems kinda related to this. However, they don't seem to discuss why it is "wrong", but instead focus on what they can use instead. Look at this (unfinished) class:

public class Reporte {
    public String rutaOriginal;
    public String rutaNueva;
    public int bytesOriginales;
    public int bytesFinales;
    public float ganancia;

    /**
     * Constructor for objects of class Reporte
     */
    public Reporte() {
    }
}

No need to understand Spanish. All this class does is hold some statistics (those public fields) and then do some operations with them (later). I will also need to be modifying those variables often. But well, since I've been told not to use public, this is what I ended up doing:

public class Reporte {
    private String rutaOriginal;
    private String rutaNueva;
    private int bytesOriginales;
    private int bytesFinales;
    private float ganancia;

    /**
     * Constructor for objects of class Reporte
     */
    public Reporte() {
    }

    public String getRutaOriginal() { return rutaOriginal; }
    public String getRutaNueva() { return rutaNueva; }
    public int getBytesOriginales() { return bytesOriginales; }
    public int getBytesFinales() { return bytesFinales; }
    public float getGanancia() { return ganancia; }

    public void setRutaOriginal(String rutaOriginal) { this.rutaOriginal = rutaOriginal; }
    public void setRutaNueva(String rutaNueva) { this.rutaNueva = rutaNueva; }
    public void setBytesOriginales(int bytesOriginales) { this.bytesOriginales = bytesOriginales; }
    public void setBytesFinales(int bytesFinales) { this.bytesFinales = bytesFinales; }
    public void setGanancia(float ganancia) { this.ganancia = ganancia; }
}

Looks kinda pretty. But seems like a waste of time. Google searches about "When to use public in Java" and "Why shouldn't I use public in Java" seem to discuss a concept of mutability, although I'm not really sure how to interpret such discussions. I do want my class to be mutable - all the time.
It breaks encapsulation - the core principle here is that the fields of your class are implementation details, by exposing it as a public field, you are telling everything outside that class that it is an actual piece of data that is stored by the class - something external classes don't need to know, they just need to be able to get or set that piece of data. As soon as you make something public, external classes should be able to depend on it, and it should not change often, by implementing as a method, you are retaining the flexibility to change the implementation at a later point without affecting users of the class. If you wanted to change one of the fields so it's calculated, or retrieved from a service when its called, you wouldn't be able to without breaking other parts of your application. Other reasons are that it allows you to control access to the variable (i.e. make it immutable as you already highlighted). In setters you can also add checking code, and you can add behaviors to getters and setters that do other things when a variable is get or set (though this may not always be a good thing). You can also override a method in a derived class, you can't override a field. There are some very rare situations where it is better to have public field instead of a method - but this really only applies in very high performance applications (3D games & financial trading applications), unless you are writing one of these, avoid public fields - and if you do care about performance to that level, Java is probably not the best choice anyway. Just a note on mutability - in general you should try to make things immutable until you have a genuine reason to make them mutable - the less your code exposes in its public API, the easier it is for long term maintenance, and in a multi-threaded situation, immutable objects are threadsafe, mutable objects need to implement locking.
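Not part of the original answer: a small hypothetical example of the flexibility argument above, loosely echoing the question's Reporte class (the "gain" formula is invented for illustration). Because callers go through getGain(), the class can later switch from a stored field to a computed value without breaking them; with a public field, every caller would have had to change.

public class Report {
    private int originalBytes;
    private int finalBytes;

    // Earlier version: 'gain' stored as a field and exposed through a getter.
    //   private float gain;
    //   public float getGain() { return gain; }

    // Later version: 'gain' is computed on the fly. Callers of getGain() are unaffected.
    public float getGain() {
        return originalBytes == 0 ? 0f : (float) (originalBytes - finalBytes) / originalBytes;
    }

    public void setOriginalBytes(int originalBytes) {
        if (originalBytes < 0) { // setters can also validate
            throw new IllegalArgumentException("originalBytes must be >= 0");
        }
        this.originalBytes = originalBytes;
    }

    public void setFinalBytes(int finalBytes) {
        this.finalBytes = finalBytes;
    }
}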
{ "source": [ "https://softwareengineering.stackexchange.com/questions/176876", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/13833/" ] }
176,938
Assume I have 4 points (they are 2-dimension), which are different from each other, and I want to know whether they form a square. How to do it? (let the process be as simple as possible.)
Assuming that your square might be rotated against whatever coordinate system you have in place, you can't rely on there being any repetition of X and Y values in your four points. What you can do is calculate the distances between each of the four points. If you find the following to be true, you have a square:

1. There are two points, say A and C, which are distance x from each other, and two other points, say B and D, which are also distance x from each other.
2. Each point {A, B, C, D} is an equal distance from the two points which aren't x away. i.e.: If A is x away from C, then it will be z away from both B and D.

Incidentally, the distance z will have to be SQRT((x^2)/2), but you don't need to confirm this. If conditions 1 and 2 are true then you have a square.

NOTE: Some people are concerned about the inefficiency of square root. I didn't say that you should do this calculation, I just said that if you did you would get a predictable result!

The bare minimum of work that you would need to do would be to pick a point, say A, and calculate the distance to each of the other three points. If you can find that A is x from one point and z from two other points, then you just need to check those two other points against each other. If they are also x from each other then you have a square. i.e.:

AB = z
AC = x
AD = z

Since AB = AD, check BD:

BD = x

Just to be sure, you need to check the other sides: BC and CD.

BC = z
CD = z

Since AC = BD and since AB = AD = BC = CD, this is therefore a square. Along the way, if you find more than two distinct edge distances then the figure cannot be a square, so you can stop looking.

Working Example Implementation

I have created a working example on jsfiddle (see here). In my explanation of the algorithm, I use arbitrary points A, B, C, and D. Those arbitrary points happen to be in a certain order for the sake of walking through the example. The algorithm works even if the points are in a different order; however, the example doesn't necessarily work if those points are in a different order.

Thanks to: meshuai, Blrfl, MSalters and Bart van Ingen Schenau for useful comments to improve this answer.
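Not part of the original answer (whose working example is the linked jsfiddle): a compact Java sketch of a distance-based check in the same spirit. It compares squared pairwise distances to avoid square roots, and assumes four distinct points with integer coordinates.

public class SquareCheck {
    // Returns true if the four points form a square, in any order.
    static boolean isSquare(int[][] p) {
        // All six pairwise squared distances.
        long[] d = new long[6];
        int k = 0;
        for (int i = 0; i < 4; i++) {
            for (int j = i + 1; j < 4; j++) {
                d[k++] = dist2(p[i], p[j]);
            }
        }
        java.util.Arrays.sort(d);
        // A square has four equal sides and two equal diagonals, with diagonal^2 = 2 * side^2.
        return d[0] > 0
                && d[0] == d[1] && d[1] == d[2] && d[2] == d[3]   // four equal sides
                && d[4] == d[5]                                   // two equal diagonals
                && d[4] == 2 * d[0];
    }

    static long dist2(int[] a, int[] b) {
        long dx = a[0] - b[0], dy = a[1] - b[1];
        return dx * dx + dy * dy;
    }

    public static void main(String[] args) {
        System.out.println(isSquare(new int[][]{{0, 0}, {1, 1}, {0, 1}, {1, 0}})); // true
        System.out.println(isSquare(new int[][]{{0, 0}, {2, 1}, {0, 1}, {2, 0}})); // false (rectangle)
    }
}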
{ "source": [ "https://softwareengineering.stackexchange.com/questions/176938", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/71750/" ] }
176,992
So, my father is currently in the process of "hacking" together a database using FileMaker Pro, a GUI-based databasing tool, for his small (4 doctor) practice. The database will be used to help ease the burden of reporting from medical machines, streamlining quite a clumsy process. He's got no programming background, and seems to be doing everything in his power to not learn things correctly. He's got duplicate data types, no database-enforced relationships (foreign/primary key constraints) and a dozen other issues. He's doing it all by hand via a GUI tool, using YouTube videos. My issue is that whilst I want him to succeed 100%, I don't think it's appropriate for him to be handling these types of decisions. How do I convince him that without some sort of education in these topics, a hacked-together solution is a bad idea? He can be quite stubborn, and I think he sees these types of jobs as "child's play". How should I approach this? Is it even that bad an idea - or am I correct in thinking he should hire a proper DBA/developer to handle this so that it doesn't become a maintenance nightmare? NB: I am a developer consultant of 4 years and I've seen my share of painful customer implementations. Update: So it's a few years later now, and I've had time to reflect on this question. My dad ended up implementing a solution using Google Docs, FileMaker Pro and some email hooks. He set the whole thing up himself, and he says he is getting immense value from it. If you are an experienced developer, you are perhaps reading that description and cringing. But I learnt a pretty good lesson from the whole thing actually - that people only care about the results, and not the implementation. All my dad cares about is the fact that he doesn't need to enter patient information down on paper manually, and can instead quickly fill out a Google Docs form. What's great is he's looking to hire a junior dev/ops person to focus solely on automation within his practice.
I've been engineering Healthcare solutions for many years. I won't go into all the different reasons that your father shouldn't be doing this; most of the reasons being academic: meaning, if you've been in the industry long enough you know how these things snowball and develop a life of their own. Instead your father, as a physician, needs to understand the professional reasons and real-life, non-academic, reasons why what he is doing is dangerous and possibly life-threatening; dangerous to his colleagues, dangerous to his patients privacy and identity, and dangerous to his practice from a legal standpoint. The danger is multi-faceted: patient privacy (HIPAA, ARRA, Meaningful Use, HITECH Compliance) what are the fields that are considered patient identifying fields (many professionals in the industry don't understand this, and just because you eliminate some of the obvious fields like last name, address, zip code there are still many other fields that would make it easy to associate clinical data to a specific patient; this, in itself, is difficult; there are companies out there making lots of money de-identifying clinical data - it's a whole domain in itself). HIPAA, HITECH and newer legislation spells out clearly how auditing should be done security should be done password requirements should the data at rest be encrypted should the data transmitted be encrypted, and how you must consider the controls if you are using any kind of hosted service (IaaS, PaaS) do you have proper BAA and DSA in place how do those hosting your servers control access how do they handle multi-tenancy (you'd be amazed at how some of these large entities do NOT handle this appropriately) if you terminate the contract with those hosting your infrastructure, how will they ensure permanent deletion of your data (NIST regulations) what are the governing controls in place for your development do you have an sdlc in place do you have traceability from requirements to code to QA do you validate 'intended' use of your medical application/device is your software being QA'd, and do you have a User Acceptance Test (UAT) environment how do you secure this environment, because you'll be using real patient data is he going to handle medicare patients, if so is he planning to use his database to report out? the government has strict controls in place for the exchange of this data to their Health Information Exchange (HIE) which leads to how will he implement his own exchange if he wants to take advantage of his clinical data repository (CDR) does he understand the particular NIST regulations he needs to abide by for data security such as permanent deletion of data (if using a hosted infrastructure) you mentioned he will be taking data from medical machines does he understand the new FDA medical device standards? starting in 2013, any digital system that displays data from medical devices can be categorized as a medical device ... this means he must meet the FDA regulatory requirements for medical devices will his team and staff be making medical decisions based on the data in his database? has he developed a solid clinical data model, flexible enough to handle the ever changing requirements (i.e., ICD-9 to ICD-10 to ICD-11 coding standards)? how will he version the data model and keep it in sync with the data (i.e., if he changes the clinical data model how will older data be represented?) will his system be able to produce an exact snapshot of the clinical data as it was seen on the day that a clinical decision was made? 
there are legal repercussions if he can't does he know the difference between a real delete and a logical delete, and the implications to his data model; to his storage requirements; to his practice's policies? does he have a vocabulary solution in place to handle all the different services he will need to use; much of the data needs to be coded (as opposed to free text), because he will want to take advantage of his CDR to produce ICD-9 compliant reports. And then he needs to take into account the changing of these standards; e.g., ICD-9 to ICD-10. for vocabulary, terminology or Health Data Dictionary (all basically synonyms) how will he implement and ensure that old terminology can still be rendered for old clinical decisions? will he be storing allergy data? how will his 'medical terminology' or 'vocabulary' definitions be stored? will he integrate with other terminology systems like LOINC and First Data Bank? does he have an understanding of terminology services (i.e., Health Data Dictionary) will he want to have data interfaced into his system, and maybe out to a health information exchange (HIE)? if so, does he understand HL7 and its impact on his database? does he understand interface engines and all that goes along with that? does he understand how to de-identify information? this is important in the development phase and the bug fixing phase These are just a few questions, and by no means should it be considered a comprehensive list. And for each answer there will be countless more questions. In a Healthcare database there should not be any deletion or over-writing of previous data. This means there are never going to be 'delete from where...' or 'update set ...'. Instead you will only have inserts. You can imagine how this changes your data model and your queries. Now you can be creative and come up with different solutions to attain this goal, but the fact remains that this is a requirement that is unique to the Healthcare Clinical Data repository. Just one more thought regarding the life-threatening side of this issue: Let's take, for example, allergy information; I raise this one up because institutions who have been doing this digitally for years have learned that their processes need to ensure that allergy data is captured and that we can't assume that because technology captured the data in a database it is somehow inherently correct forever. This is why patients are asked for their allergies every single time as they move from one department to another, even within the same hospital. A patient's allergies can't be deleted (updates to a row delete the old information). A clinical decision based on digital data needs to capture what was 'presented' to the clinician at the time of the decision. I know much of this may seem to be geared to a large institution. However, the regulatory parts aren't. And in any case, Healthcare Information Systems are inherently complex. Healthcare system engineering depends and recognizes the expertise and experience of good clinicians. However, there is a larger than average impedance mismatch (to borrow terminology from the ORM technology) in the Healthcare IT domain ... I venture to say larger because every domain has its mismatches. Good luck!
{ "source": [ "https://softwareengineering.stackexchange.com/questions/176992", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/31075/" ] }
176,999
I'm finding lots of 2-3k line files, and it doesn't really feel like they should be that big. What is a good criterion for objectively calling a source code file "too big"? Is there such a thing as a maximum number of lines a source code file should have?
As an ideal model I use the following criteria (with a similar rationale to what Martin Beckett suggested, i.e. to think in terms of logical structure and not in terms of lines of code):

Rule 1: One class per file (in C++: one class -> one header and one implementation file).

Rule 2: Seven is considered the number of items that our brain can observe at the same time without getting confused. Above 7 we find it difficult to keep an overview of what we see. Therefore: each class should not have more than 7-10 methods. A class that has more than 10 methods is probably too complex and you should try to split it. Splitting is a very effective method because every time you split a class you reduce the complexity of each individual class at least by a factor of 2.

Rule 3: A method body that does not fit in one or two screens is too big (I assume that a screen / editor window is about 50 lines). Ideally, you can see the whole method in one window. If this is not the case, you only need to scroll up and down a bit, without forgetting the part of the method that gets hidden. So, if you have to scroll more than one screen up or down to read the whole method body, your method is probably too big and you can easily lose the overview. Again, splitting methods using private helper methods can reduce method complexity very quickly (at every split the complexity is at least halved). If you introduce too many private helper methods you can consider creating a separate class to collect them (if you have more private methods than public ones, maybe a second class is hiding inside your main class).

Putting together these very rough estimates:

- At most one class per source file.
- At most 10 public methods per class.
- At most 10 private methods per class.
- At most 100 lines per method.

So a source file that is more than 2000 lines long is probably too large and starting to get messy. This is really a very rough estimate and I do not follow these criteria systematically (especially because there is not always enough time to do proper refactoring). Also, as Martin Beckett suggested, there are situations in which a class is a large collection of methods and it does not make sense to split them in some artificial way just to make the class smaller. Anyway, in my experience a file starts to get unreadable when one of the above parameters is not respected (e.g. a 300-line method body that spans six screens, or a source file with 5000 lines of code).
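As a small illustration of the "second class hiding inside your main class" point, here is a compressed Java sketch (all names are invented for the example):

// Before: a service that keeps growing because all its private formatting helpers live inside it.
class ReportService {
    public String buildReport() { return header() + body(); }
    private String header() { return "== Report ==\n"; }
    private String body()   { return "...report body...\n"; }
    // ...imagine ten more private formatting helpers here...
}

// After: the helpers turn out to be a second class, and the service shrinks back under the limits above.
class ReportFormatter {
    String header() { return "== Report ==\n"; }
    String body()   { return "...report body...\n"; }
}

class LeanReportService {
    private final ReportFormatter formatter = new ReportFormatter();
    public String buildReport() { return formatter.header() + formatter.body(); }
}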
{ "source": [ "https://softwareengineering.stackexchange.com/questions/176999", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/10869/" ] }
177,030
I'm developing a big community/forum website and I'd like to upload my code to GitHub to have at least some sort of version control over it (because I have nothing other than a .rar file as a backup, not even SVN), to let others contribute to the project, and also perhaps to use it to let my potential future employers see some of my code as a sort of curriculum. But what I'm wondering now, and I'm surprised I haven't seen anyone mention it before, is the security aspect of it. Isn't publishing the code of a website a HUGE security hole? It's like handing a potential hacker, or anyone who would like to find an exploit, everything they need, even considering that the critical files aren't uploaded (database passwords, authentication scripts, etc.). Of course, there are millions of projects uploaded to GitHub and no one will find mine just 'by chance'. But if they look for it, it would indeed be there. Bottom line: my problem is not about copyright or licenses, but about others finding exploits in my website. Am I missing something here?
Am I missing something here?

Yes. Relying on people not knowing your source code to prevent them from finding security exploits in it is known as security through obscurity. The problem: it doesn't work. Skilled hackers don't need the source code to find and exploit vulnerabilities. They'll do some fuzzing to find input that causes problems and then use their knowledge of how the underlying OS/language/framework works to identify a vulnerability. It is widely agreed that having the source code public increases security by enabling well-meaning people to find vulnerabilities and fix them, or at least tell the developer about them. There are two important reasons why this works:

- There are generally more well-meaning than malicious people.
- Any vulnerability found by a well-meaning person will be fixed for everyone; hackers are far less likely to collaborate.

Of course it doesn't work with pet projects that have few active users, but those are also exceedingly unlikely to be targeted by a hacker.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/177030", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/73532/" ] }
177,031
I'm just starting out on the implementation of a large enterprise-wide system, which has complex requirements and many stakeholders. The company has been through a high-level evaluation and tender process and has determined to purchase a highly configurable "off-the-shelf" product rather than building an entirely bespoke system. The system will replace several existing systems and will require a significant amount of data migration. I'm thinking that the implementation of this system (which is expected to take over 2 years) could be run in a similar way to a Scrum software development project, with the first sprints targeted at building the minimal possible functionality needed (across all functional areas), and then iteratively deepening the level of functionality according to stakeholder feedback. I think this will de-risk the project and help ensure a balance of stakeholder needs within the available time. The user stories are still the same; it's just that to implement them we have to work within the constraints of the pre-purchased system. When it comes to 'building stuff', instead of writing custom code the team will be configuring the off-the-shelf package, writing data conversion scripts and the like (and it should be a lot quicker!). Does this sound like a sensible approach? Does the Agile approach make sense here?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/177031", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/73534/" ] }
177,133
I got into an interesting internet argument about getter and setter methods and encapsulation. Someone said that all they should do is an assignment (setters) or a variable access (getters) to keep them "pure" and ensure encapsulation. Am I right that this would completely defeat the purpose of having getters and setters in the first place and validation and other logic (without strange side-effects of course) should be allowed? When should validation happen? When setting the value, inside the setter (to protect the object from ever entering an invalid state - my opinion) Before setting the value, outside the setter Inside the object, before each time the value is used Is a setter allowed to change the value (maybe convert a valid value to some canonical internal representation)?
I remember having a similar argument with my lecturer when learning C++ at university. I just couldn't see the point of using getters and setters when I could make a variable public. I understand better now, with years of experience, and I've learned a better reason than simply saying "to maintain encapsulation".

By defining getters and setters, you provide a consistent interface, so that if you should wish to change your implementation, you're less likely to break dependent code. This is especially important when your classes are exposed via an API and used in other apps or by third parties.

So what about the stuff that goes into the getter or setter?

Getters are generally better off implemented as a simple dumbed-down pass-through for access to a value, because this makes their behaviour predictable. I say generally, because I've seen cases where getters have been used to access values manipulated by calculation or even by conditional code. Generally not so good if you are creating visual components for use at design time, but seemingly handy at run time. There's no real difference between this and using a simple method, except that when you use a method, you are more likely to name it appropriately, so that the functionality of the "getter" is more apparent when reading the code. Compare the following:

int aValue = MyClass.Value;

and:

int aValue = MyClass.CalculateValue();

The second option makes it clear that the value is being calculated, whereas the first example tells you that you are simply returning a value without knowing anything about the value itself. You could perhaps argue that the following would be clearer:

int aValue = MyClass.CalculatedValue;

The problem however is that you are assuming that the value has already been manipulated elsewhere. So in the case of a getter, while you may wish to assume that something else might be going on when you are returning a value, it is difficult to make such things clear in the context of a property, and property names should never contain verbs, otherwise it makes it difficult to understand at a glance whether the name used should be decorated with parentheses when accessed.

Setters are a slightly different case, however. It is entirely appropriate that a setter provide some additional processing in order to validate the data being submitted to a property, throwing an exception if setting a value would violate the defined boundaries of the property. The problem some developers have with adding processing to setters is that there is always a temptation to have the setter do a little more, such as performing a calculation or a manipulation of the data in some manner. This is where you can get side effects that can in some cases be unpredictable or undesirable. In the case of setters I always apply a simple rule of thumb, which is to do as little as possible to the data. For example, I will usually allow boundary testing and rounding, so that I can raise exceptions where appropriate and avoid unnecessary exceptions where they can be sensibly avoided. Floating point properties are a good example, where you might wish to round away excessive decimal places to avoid raising an exception, while still allowing in-range values to be entered with a few additional decimal places. If you apply some sort of manipulation to the setter input, you have the same problem as with the getter: it is difficult to let others know what the setter is doing simply by naming it. For example:

MyClass.Value = 12345;

Does this tell you anything about what is going to happen to the value when it is given to the setter? How about:

MyClass.RoundValueToNearestThousand(12345);

The second example tells you exactly what is going to happen to your data, while the first won't let you know whether your value is going to be arbitrarily modified. When reading code, the second example is much clearer in its purpose and function.

Am I right that this would completely defeat the purpose of having getters and setters in the first place and validation and other logic (without strange side-effects of course) should be allowed?

Having getters and setters isn't about encapsulation for the sake of "purity", but about encapsulating in order to allow code to be easily refactored without risking a change to the class's interface, which would otherwise break the class's compatibility with calling code. Validation is entirely appropriate in a setter, however there is a small risk that a change to the validation could break compatibility with calling code if the calling code relies on the validation occurring in a particular way. This is a generally rare and relatively low-risk situation, but it should be noted for the sake of completeness.

When should validation happen?

Validation should happen within the context of the setter, prior to actually setting the value. This ensures that if an exception is thrown, the state of your object won't change and potentially invalidate its data. I generally find it better to delegate validation to a separate method which is the first thing called within the setter, in order to keep the setter code relatively uncluttered.

Is a setter allowed to change the value (maybe convert a valid value to some canonical internal representation)?

In very rare cases, maybe. In general, it is probably better not to. This is the sort of thing best left to another method.
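For illustration, here is a minimal Java sketch of these rules of thumb; the class, its bounds and the rounding are invented for the example and are not taken from the question:

public class Temperature {
    private double celsius;

    // Plain pass-through getter: predictable, no hidden work.
    public double getCelsius() {
        return celsius;
    }

    // The setter validates first, then does as little as possible to the data.
    public void setCelsius(double value) {
        validate(value);                // throws before any state changes
        this.celsius = round(value, 2); // gentle rounding avoids needless exceptions
    }

    private void validate(double value) {
        if (value < -273.15) {
            throw new IllegalArgumentException("Below absolute zero: " + value);
        }
    }

    private static double round(double value, int places) {
        double factor = Math.pow(10, places);
        return Math.round(value * factor) / factor;
    }

    // Anything that actually transforms the data gets its own, honestly named method.
    public double calculateFahrenheit() {
        return celsius * 9.0 / 5.0 + 32.0;
    }
}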
{ "source": [ "https://softwareengineering.stackexchange.com/questions/177133", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/36622/" ] }
177,167
I am a team lead with 5+ developers. I have a developer (let's call him A ) who is a good programmer, who writes good clean, easy to understand code. However he is somewhat difficult to manage, and sometimes I wonder whether he is really under-performing or not. Our company requires the developers to indicate the work progress in the bug tracker we use, not so much as to monitor the programmers but to keep the stakeholders apprised of the progress. The thing is, A only updates a task progress when it is done ( maybe 3 weeks after it is first worked on) and this leaves everyone wondering what is going on in the middle of the development week. He wouldn't change his habit despite repeated probing. ( It's OK, developers hate paperwork, I do, too) Recent 2-3 months he on leave quite often due to various events-- either he is sick, or have to attend a lot of personal events etc. ( It's OK, bad things happen in a string. It's just a coincidence) We define sprints, or roadmaps for each month. And in the beginning of the sprint, we will discuss the amount of work each of the developers have to do in a sprint and the developers get to set the amount of time they need for each task . He usually won't be able to complete all of them. (It's OK, the developers are regularly missing deadlines not due to their fault). I am based in Singapore. Not sure if that matters. Yeah, Asians are known to be reticent, but does that matter? If only one or two of the above events happen, I won't feel that A is under-performing, but they all happen together. So I have the feeling that A is under-performing and maybe-- God forbid--- slacking off. This is just a feeling based on my years of experience as programmer. But I could be wrong. It is notoriously hard to measure the work of a programmer, given that not all two tasks are alike, and there lacks a standard objective to measure the commitment of a programmer to your company. It is downright impossible to tell whether the programmer is doing his job or slacking off. All you can do, is to trust them-- yeah, trusting and giving them autonomy is the best way for programmers to work, I know that, so don't start a lecture on why you need to trust your programmers, thank you every much -- but if they abuse your trust, can you know? Outcome: I've a straight talk with him regarding my perception on his performance. He was indignant when I suggested that I had the feeling that he wasn't performing at his best level. He felt that this was a completely unfair feeling. I then replied that this was my feeling and I didn't know whether my feeling was right or not. He would have none of this and ended the discussion immediately. Before he left he said that he "would try to give more to the company" in a very cold tone. I was taken aback by his reaction. I am sure that I offended him in some ways. Not too sure whether that was the right thing to do for me to be so frank with him, though. My question is: How can you tell whether your programmers are under-performing? Surely there are experience team leads who know better than me on this? Extra notes: I hate micromanaging. So all that we have for our software process is Sprint ( where tasks get prioritized and assigned, and at the end of the month, a review of the amount of work done). Developers would require to update the tasks as they go along everyday. There is no standup meeting, or anything of the sort. Mainly because we have the freedom to work from home and everyone cherishes this freedom. 
Although I am the one who sets the deadline, the developers provide the estimate for each task, and I decide, based on the estimates, which tasks go into a particular sprint. If they can't finish the tasks by the end of the sprint, I push them to the next one. So theoretically one could do only 1 or 2 tasks during the whole sprint and push the remaining 99 tasks to the next sprint, and still be fine as long as he justifies this in the form of daily work progress updates.
This should be a surprisingly easy problem to solve. Have a second meeting with him. Tell him that you accept that it's probably your perception of reality that is at fault. Then qualify that with "however, if that is the case then we need to work together to improve my perception." Finally challenge him to solve that problem, so he doesn't feel micro-managed. This exact thing happened to me a long time ago. For me, the issue was that I dislike the possibility that anyone might think I'm seeking extra credit for simply doing my job. And that was fair enough, but there has to be a regular feedback loop between any member of staff and their line-manager. If there isn't then you get these problems. Regular, planned, 1:1s are a great idea. And, as people have pointed out, standups do not need to be orthogonal to working from home. But they must involve the three questions: What did you do yesterday? What are you planning to do today? And the one most people forget ... What (if anything) is holding you up? That said, you should try to discourage situations where team members never work together. I've worked in that situation before and it seeded distrust within the team and outside it. Have a regular day that you all come into the office. Have a regular meeting where people can voice some ideas on improving processes or whatever. Don't make it a line-reporting event. Make it an opportunity to just talk. You'll be surprised what you learn. If possible, turn that into a social event, go for a couple of drinks on work time as a bonding exercise.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/177167", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/73634/" ] }
177,184
I saw a lot of code (for example, some Android source code) where field names start with an m while static field names start with an s. Example (taken from the Android View class source):

private SparseArray<Object> mKeyedTags;
private static int sNextAccessibilityViewId;

I was wondering what m and s stand for... maybe m for mutable and s for static?
m is typically for a member variable (see this answer for common C++ code conventions: Why use prefixes on member variables in C++ classes). I've never seen s before, but based on that answer:

- m for members
- c for constants/readonlys
- p for pointers (and pp for pointer to pointer)
- v for volatile
- s for static
- i for indexes and iterators
- e for events

Have you read any published standards for the project you've seen that code in? One of the most famous prefix notation systems is Hungarian Notation. There is an excellent blog post by Joel Spolsky on prefixes: Making Wrong Code Look Wrong
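Applied to the Android-style convention in the question, the prefixes would look like this in your own code (the class and fields here are invented for illustration):

public class MyView {
    private static int sNextId;  // s prefix: static field
    private int mKeyedTagCount;  // m prefix: non-static member field

    public int getKeyedTagCount() {
        return mKeyedTagCount;
    }

    public static int nextId() {
        return ++sNextId;
    }
}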
{ "source": [ "https://softwareengineering.stackexchange.com/questions/177184", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/73648/" ] }
177,342
Passwords shouldn't be stored in plain text for obvious security reasons: you have to store hashes, and you should also generate the hash carefully to avoid rainbow table attacks. However, usually you have the requirement to store the last n passwords and to enforce minimal complexity and minimal change between the different passwords (to prevent the user from using a sequence like Password_1, Password_2, ..., Password_ n ). This would be trivial with plain text passwords, but how can you do that by storing only hashes? In other words: how it is possible to implement a safe password history mechanism?
Store the hashes and verify an entered password against those stored hashes, the same way you verify a password when logging in. You would have to generate 'alternative' passwords from the one given based on numerical patterns to detect your 'minimal' changes. On login, you verify the entered password against a hash already, there is no need to store the password in plaintext. The same trick works when it comes to changing a password, simply check the entered and 'minimal change' generated passwords against the historical hashes. If the new password is satisfactory, move the current password hash over to the historical set, and replace it with a new hash for the new password.
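A minimal Java sketch of the idea, assuming each historical entry stores the salt and hash that were produced when that old password was set (the iteration count, key length and class names are illustrative, and a real system would reuse whatever hashing scheme the login path already uses):

import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import java.security.spec.KeySpec;
import java.util.Arrays;
import java.util.List;

public class PasswordHistoryCheck {

    // A stored entry: the salt and hash recorded when the old password was set.
    public static final class HistoryEntry {
        final byte[] salt;
        final byte[] hash;
        public HistoryEntry(byte[] salt, byte[] hash) { this.salt = salt; this.hash = hash; }
    }

    // Hash the candidate password with the salt that belongs to each historical hash.
    static byte[] hash(char[] password, byte[] salt) throws Exception {
        KeySpec spec = new PBEKeySpec(password, salt, 65_536, 256);
        return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                               .generateSecret(spec)
                               .getEncoded();
    }

    // True if the candidate matches any of the last n stored hashes.
    public static boolean isInHistory(char[] candidate, List<HistoryEntry> history) throws Exception {
        for (HistoryEntry entry : history) {
            if (Arrays.equals(hash(candidate, entry.salt), entry.hash)) {
                return true;
            }
        }
        return false;
    }
}

The 'minimal change' rules from the question are handled the same way: generate the near-miss variants of the candidate (for example with a trailing number incremented or decremented) and run each of them through the same check.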
{ "source": [ "https://softwareengineering.stackexchange.com/questions/177342", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/11/" ] }
177,428
I really like Google's Go (golang), but could someone explain what the rationale is for the implementers having left out a basic data structure such as sets from the standard library?
One potential reason for this omission is that it's really easy to model sets with a map. To be honest I think it's a bit of an oversight too; however, looking at Perl, the story's exactly the same. In Perl you get lists and hashtables, in Go you get arrays, slices, and maps. In Perl you'd generally use a hashtable for any and all problems relating to a set, and the same is applicable to Go.

Example: to imitate a set of ints in Go, we define a map:

set := make(map[int]bool)

Adding something is as easy as:

i := valueToAdd()
set[i] = true

Deleting something is just:

delete(set, i)

And the potential awkwardness of this construct is easily abstracted away:

type IntSet struct {
    set map[int]bool
}

func (set *IntSet) Add(i int) bool {
    _, found := set.set[i]
    set.set[i] = true
    return !found // False if it existed already
}

Delete and Get can be defined similarly; I have the complete implementation here. The major disadvantage is the fact that Go doesn't have generics. However, it is possible to do this with interface{}, in which case you'd have to cast the results of Get.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/177428", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/55045/" ] }
177,501
I recently found out that I can have two interfaces, one containing a method with the same signature as a method in the other interface, and I can have an interface or class that implements both of the aforementioned interfaces. So the descendant class/interface implicitly implements two different methods as one method. Why is this allowed in Java? I can see numerous problems arising from this. Eclipse, for example, can only find implementations for one of the interface methods, and for the second one it doesn't show any implementations at all. Also, I believe there would be problems with automatic refactoring, for example when you want to change the signature of the method in one of the interfaces: the IDE will be unable to correctly change that signature in all implementations (since they implement two different interfaces, and the IDE cannot tell which interface method the implementation is referring to). Why not just raise a compiler error like 'interface method names clash' or something like that?
There is no reason why this should be forbidden. The only point of an interface is to ensure that a method with particular signature exists in each implementing class. This is satisfied by any implementing class even if the condition is posed twice. Granted, when you write an interface you presumably expect a certain meaning for the action of invoking the method, and presumably you document it above the declaration in the interface, but that is not the concern of the compiler. It cannot check whether the implementing class does the right thing, only whether it copies the signature exactly. Asking "Why doesn't the compiler forbid one method to satisfy two interface declarations?" boils down to "why doesn't the compiler prevent me from implementing the wrong semantics when I implement an interface?", and the answer to that question is much easier to see: because it can't! (If the compiler were able to judge your method implementation and forbid it if it contained a bug, then we wouldn't need programmers in the first place, we'd need only the specifications and the compiler.) Obviously we would like it if implementing an interface guaranteed that the implementing class does the right thing, but that's not something that interfaces can do for you. In fact, I'd argue that it would be a bad thing to add a feature to the compiler that might give the impression that it is!
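A short Java example of what the answer describes; the interface and class names are made up, but the mechanics are exactly what the compiler checks:

interface Startable {
    void start();
}

interface Machine {
    void start();
}

// One method body satisfies both declarations, because only the signature is checked.
class Engine implements Startable, Machine {
    @Override
    public void start() {
        System.out.println("engine running");
    }
}

Note that if the two interfaces disagreed on the return type (say void start() versus int start()), no single method could satisfy both declarations, and the compiler would reject the class.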
{ "source": [ "https://softwareengineering.stackexchange.com/questions/177501", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/66605/" ] }
177,539
Say I have a method like this: public void OrderNewWidget(Widget widget) { if ((widget.PartNumber > 0) && (widget.PartAvailable)) { WigdetOrderingService.OrderNewWidgetAsync(widget.PartNumber); } } I have several such methods in my code (the front half to an async Web Service call). I am debating if it is useful to get them covered with unit tests. Yes there is logic here, but it is only guard logic. (Meaning I make sure I have the stuff I need before I allow the web service call to happen.) Part of me says "sure you can unit test them, but it is not worth the time" (I am on a project that is already behind schedule). But the other side of me says, if you don't unit test them, and someone changes the Guards, then there could be problems. But the first part of me says back, if someone changes the guards, then you are just making more work for them (because now they have to change the guards and the unit tests for the guards). For example, if my service assumes responsibility to check for Widget availability then I may not want that guard any more. If it is under unit test, I have to change two places now. I see pros and cons in both ways. So I thought I would ask what others have done.
Part of me says "sure you can unit test them, but it is not worth the time" (I am on a project that is already behind schedule). It's three very short tests. You spent as much time asking yourself the question. But the other side of me says, if you don't unit test them, and someone changes the Guards, then there could be problems. Listen to this side. But the first part of me says back, if someone changes the guards, then you are just making more work for them (because now they have to change the guards and the unit tests for the guards). If your maintainer is a TDD nut, you're making it more difficult for them. Any change I make without there being a related change or addition of tests leads to my having to think hard. In fact, I would probably add the tests before I go ahead and make the change. The first part of you is just plain wrong. Give the second part a pat on the back and stop thinking about it.
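For what it's worth, here is a sketch of what those three tests might look like. The original method is C#, so this is a Java/Mockito transliteration with invented type names, purely to show how small the tests are; adapt the idea to whatever mocking tool your code base already uses:

import static org.mockito.Mockito.*;
import org.junit.jupiter.api.Test;

// Minimal stand-ins so the sketch is self-contained; in a real code base these already exist.
interface WidgetOrderingService { void orderNewWidgetAsync(int partNumber); }

class Widget {
    final int partNumber;
    final boolean partAvailable;
    Widget(int partNumber, boolean partAvailable) { this.partNumber = partNumber; this.partAvailable = partAvailable; }
}

class WidgetOrderer {
    private final WidgetOrderingService service;
    WidgetOrderer(WidgetOrderingService service) { this.service = service; }
    void orderNewWidget(Widget widget) {
        if (widget.partNumber > 0 && widget.partAvailable) {
            service.orderNewWidgetAsync(widget.partNumber);
        }
    }
}

class WidgetOrdererTest {
    private final WidgetOrderingService service = mock(WidgetOrderingService.class);
    private final WidgetOrderer orderer = new WidgetOrderer(service);

    @Test void ordersWhenPartNumberIsValidAndPartIsAvailable() {
        orderer.orderNewWidget(new Widget(42, true));
        verify(service).orderNewWidgetAsync(42);
    }

    @Test void doesNotOrderWhenPartNumberIsInvalid() {
        orderer.orderNewWidget(new Widget(0, true));
        verifyNoInteractions(service);
    }

    @Test void doesNotOrderWhenPartIsNotAvailable() {
        orderer.orderNewWidget(new Widget(42, false));
        verifyNoInteractions(service);
    }
}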
{ "source": [ "https://softwareengineering.stackexchange.com/questions/177539", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/71/" ] }
177,649
I have been looking at the terms constructor injection and dependency injection while going through articles on (Service locator) design patterns. When I googled about constructor injection, I got unclear results, which prompted me to check in here. What is constructor injection? Is this a specific type of dependency injection? A canonical example would be a great help. Edit Revisiting this questions after a gap of a week, I can see how lost I was... Just in case anyone else pops in here, I will update the question body with a little learning of mine. Please do feel free to comment/correct. Constructor injection and property injection are two types of Dependency Injection.
I'm no expert, but I think I can help. And yes, it's a specific type of Dependency Injection. Disclaimer: Almost all of this was "stolen" from the Ninject Wiki Let’s examine the idea of dependency injection by walking through a simple example. Let’s say you’re writing the next blockbuster game, where noble warriors do battle for great glory. First, we’ll need a weapon suitable for arming our warriors. class Sword { public void Hit(string target) { Console.WriteLine("Chopped {0} clean in half", target); } } Then, let’s create a class to represent our warriors themselves. In order to attack its foes, the warrior will need an Attack() method. When this method is called, it should use its Sword to strike its opponent. class Samurai { readonly Sword sword; public Samurai() { this.sword = new Sword(); } public void Attack(string target) { this.sword.Hit(target); } } Now, we can create our Samurai and do battle! class Program { public static void Main() { var warrior = new Samurai(); warrior.Attack("the evildoers"); } } As you might imagine, this will print Chopped the evildoers clean in half to the console. This works just fine, but what if we wanted to arm our Samurai with another weapon? Since the Sword is created inside the Samurai class’s constructor, we have to modify the implementation of the class in order to make this change. When a class is dependent on a concrete dependency, it is said to be tightly coupled to that class . In this example, the Samurai class is tightly coupled to the Sword class. When classes are tightly coupled, they cannot be interchanged without altering their implementation. In order to avoid tightly coupling classes, we can use interfaces to provide a level of indirection. Let’s create an interface to represent a weapon in our game. interface IWeapon { void Hit(string target); } Then, our Sword class can implement this interface: class Sword : IWeapon { public void Hit(string target) { Console.WriteLine("Chopped {0} clean in half", target); } } And we can alter our Samurai class: class Samurai { readonly IWeapon weapon; public Samurai() { this.weapon = new Sword(); } public void Attack(string target) { this.weapon.Hit(target); } } Now our Samurai can be armed with different weapons. But wait! The Sword is still created inside the constructor of Samurai . Since we still need to alter the implementation of Samurai in order to give our warrior another weapon, Samurai is still tightly coupled to Sword . Fortunately, there is an easy solution. Rather than creating the Sword from within the constructor of Samurai , we can expose it as a parameter of the constructor instead. Also known as Constructor Injection. class Samurai { readonly IWeapon weapon; public Samurai(IWeapon weapon) { this.weapon = weapon; } public void Attack(string target) { this.weapon.Hit(target); } } As Giorgio pointed out, there's also property injection. That would be something like: class Samurai { IWeapon weapon; public Samurai() { } public void SetWeapon(IWeapon weapon) { this.weapon = weapon; } public void Attack(string target) { this.weapon.Hit(target); } } Hope this helps.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/177649", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/62775/" ] }
177,651
I was a freelance web developer until circa 2004 when I started going down the management route but have decided to try to get back into development again (specifically JavaScript and HTML5 web/mobile web apps) and I really get the impression to be truly good at these and similar fast moving technologies a constant amount of time is required to be set aside to invest in getting better at existing skills in addition to learning new skills. I understand right now since I am getting back into things there is a pretty steep learning curve, but seeing how good many guys are out there - the only way I see of getting up there is putting in a serious amount of time. For those working as fulltime developers, what I am trying to understand is this - on most days, how much time in the office is spent actually grinding out code compared to learning/research. I could easily spend 2-4 hours daily getting on top of the best ways to go about doing things. Do most good developers who are employed full time invest significant hours outside of work sharpening their skills? Or maybe I'm looking at all of this completely wrong?
To be honest, I use a newsfeed reader. I subscribe to a number of blogs and technology-related sites. I'll read my feed during lunch, before work, and sometimes after work. I use my tablet for that, and I constantly review news sources to check whether they provide a good time-to-value ratio. I probably get 1-2 hours a day of reading about new things. Generally I will not waste time on reading comments or commenting unless it's a real knowledge transfer.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/177651", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/72019/" ] }
177,666
I have a few (view) classes: Table, Tree, PagingColumn, SelectionColumn, SparkLineColumn, TimeColumn. Currently they're flat under app/view, like this:

app/view/Table
app/view/Tree
app/view/PagingColumn
...

I thought about restructuring it, because the Trees and Tables use the columns, but there are some columns which only work in a tree, some which work in trees and tables, and in the future there will probably be some which only work in tables, I don't know. My first idea was like this:

app/view/Table
app/view/Tree
app/view/column/PagingColumn
app/view/column/SelectionColumn
app/view/column/SparkLineColumn
app/view/column/TimeColumn

But since the SelectionColumn is explicitly for trees, I fear that future developers could get the idea of misusing it. So how should I restructure it properly? Like this:

app/view/table/panel/Table
app/view/tree/panel/Tree
app/view/tree/column/PagingColumn
app/view/tree/column/SelectionColumn
app/view/column/SparkLineColumn
app/view/column/TimeColumn

Or like this:

app/view/Table
app/view/Tree
app/view/column/SparkLineColumn
app/view/column/TimeColumn
app/view/column/tree/PagingColumn
app/view/column/tree/SelectionColumn
{ "source": [ "https://softwareengineering.stackexchange.com/questions/177666", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/55399/" ] }
177,668
I have a problem when implementing the MVC-pattern on iOS. I have searched the Internet but seems not to find any nice solution to this problem. Many UITableViewController implementations seems to be rather big. Most examples I have seen lets the UITableViewController implement <UITableViewDelegate> and <UITableViewDataSource> . These implementations are a big reason why UITableViewController is getting big. One solution would be to create separate classes that implements <UITableViewDelegate> and <UITableViewDataSource> . Of course these classes would have to have a reference to the UITableViewController . Are there any drawbacks using this solution? In general I think you should delegate the functionality to other "Helper" classes or similar, using the delegate pattern. Are there any well established ways of solving this problem? I do not want the model to contain too much functionality, nor the view. I believe that the logic should really be in the controller class, since this is one of the cornerstones of the MVC-pattern. But the big question is: How should you divide the controller of a MVC-implementation into smaller manageable pieces? (Applies to MVC in iOS in this case) There might be a general pattern for solving this, although I am specifically looking for a solution for iOS. Please give an example of a good pattern for solving this issue. Please provide an argument why your solution is awesome.
I avoid using UITableViewController , as it puts lots of responsibilities into a single object. Therefore I separate the UIViewController subclass from the data source and delegate. The view controller's responsibility is to prepare the table view, create a data source with data, and hook those things together. Changing the way the tableview is represented can be done without changing the view controller, and indeed the same view controller can be used for multiple data sources that all follow this pattern. Similarly, changing the app workflow means changes to the view controller without worrying about what happens to the table. I've tried separating the UITableViewDataSource and UITableViewDelegate protocols into different objects, but that usually ends up being a false split as almost every method on the delegate needs to dig into the datasource (e.g. on selection, the delegate needs to know what object is represented by the selected row). So I end up with a single object that's both the datasource and delegate. This object always provides a method -(id)tableView: (UITableView *)tableView representedObjectAtIndexPath: (NSIndexPath *)indexPath which both the data source and delegate aspects need to know what they're working on. That's my "level 0" separation of concerns. Level 1 gets engaged if I have to represent objects of different kinds in the same table view. As an example, imagine that you had to write the Contacts app—for a single contact, you might have rows representing phone numbers, other rows representing addresses, others representing email addresses, and so on. I want to avoid this approach: - (UITableViewCell *)tableView: (UITableView *)tableView cellForRowAtIndexPath: (NSIndexPath *)indexPath { id object = [self tableView: tableView representedObjectAtIndexPath: indexPath]; if ([object isKindOfClass: [PhoneNumber class]]) { //configure phone number cell } else if … } Two solutions have presented themselves so far. One is to dynamically construct a selector: - (UITableViewCell *)tableView: (UITableView *)tableView cellForRowAtIndexPath: (NSIndexPath *)indexPath { id object = [self tableView: tableView representedObjectAtIndexPath: indexPath]; NSString *cellSelectorName = [NSString stringWithFormat: @"tableView:cellFor%@AtIndexPath:", [object class]]; SEL cellSelector = NSSelectorFromString(cellSelectorName); return [self performSelector: cellSelector withObject: tableView withObject: object]; } - (UITableViewCell *)tableView: (UITableView *)tableView cellForPhoneNumberAtIndexPath: (NSIndexPath *)indexPath { // configure phone number cell } In this approach, you don't need to edit the epic if() tree to support a new type - just add the method that supports the new class. This is a great approach if this table view is the only one that needs to represent these objects, or needs to present them in a special way. 
If the same objects will be represented in different tables with different data sources, this approach breaks down as the cell creation methods need sharing across the data sources—you could define a common superclass that provides these methods, or you could do this: @interface PhoneNumber (TableViewRepresentation) - (UITableViewCell *)tableView: (UITableView *)tableView representationAsCellForRowAtIndexPath: (NSIndexPath *)indexPath; @end @interface Address (TableViewRepresentation) //more of the same… @end Then in your data source class: - (UITableViewCell *)tableView: (UITableView *)tableView cellForRowAtIndexPath: (NSIndexPath *)indexPath { id object = [self tableView: tableView representedObjectAtIndexPath: indexPath]; return [object tableView: tableView representationAsCellForRowAtIndexPath: indexPath]; } This means that any data source that needs to display phone numbers, addresses etc. can just ask whatever object is represented for a table view cell. The data source itself no longer needs to know anything about the object being displayed. "But wait," I hear a hypothetical interlocutor interject, "doesn't that break MVC? Aren't you putting view details into a model class?" No, it doesn't break MVC. You can think of the categories in this case as being an implementation of Decorator ; so PhoneNumber is a model class but PhoneNumber(TableViewRepresentation) is a view category. The data source (a controller object) mediates between the model and the view, so the MVC architecture still holds. You can see this use of categories as decoration in Apple's frameworks, too. NSAttributedString is a model class, holding some text and attributes. AppKit provides NSAttributedString(AppKitAdditions) and UIKit provides NSAttributedString(NSStringDrawing) , decorator categories that add drawing behaviour to these model classes.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/177668", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/45007/" ] }
177,731
If a programmer contacts you and asks to contribute to your project, how do you handle it? You don't know if this guy is any good. Perhaps he'll be more trouble than he's worth. He might be trying to attach his name to a successful project just for the kudos. He might be trying to take the project in a direction you don't really want, adding features you think aren't worth the extra complexity. Or, he might be a very useful contributor. You just don't know. How do you handle such requests from people you don't know (On GitHub, specifically, if that makes any difference)? What's the etiquette here?
Why not let this eager person send you a pull request? You'll have the opportunity to review and critique that person's code. This seems like the simplest solution.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/177731", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/59600/" ] }
177,806
As a Java programmer, I have always been critical of unchecked exceptions. Programmers mostly use them as a shortcut to easier coding, only to create trouble later. Also, programs with checked exceptions (though untidy) are much more robust than their unchecked counterparts. Surprisingly, in Scala there is no such thing as checked exceptions: everything that is checked or unchecked in Java is unchecked in Scala. What is the motivation behind this decision? For me it opens up a wide range of problems when using any external code, and if by chance the documentation is poor, it results in KILL.
Checked exceptions are mostly considered a failure. Note that no language created after Java has adopted them. See http://www.artima.com/intv/handcuffs2.html , http://googletesting.blogspot.ru/2009/09/checked-exceptions-i-love-you-but-you.html , http://www.mindview.net/Etc/Discussions/CheckedExceptions , etc. In particular, they are uncomposable (except by reverting to throws Exception).

In Scala you have a better option: using algebraic types for return values, such as Option[T], Either[Exception, T], or your own type when you want the user to handle specific cases. For example, instead of

def foo: Int // throws FileNotFoundException, IllegalStateException

you have

sealed trait FooResult
case class Success(value: Int) extends FooResult
case class FileNotFound(file: File) extends FooResult
case object IllegalState extends FooResult

def foo: FooResult

and the consumer is now required to handle all results.

For dealing with external code which does throw exceptions, you have scala.util.control.Exception or scala.util.Try (starting with Scala 2.10).
{ "source": [ "https://softwareengineering.stackexchange.com/questions/177806", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/36605/" ] }
177,853
For representing geographical locations within an application, the design of the underlying data model suggests two clear options (or maybe more?): one table with a self-referencing parent_id column, e.g. UK - London (London's parent_id = UK's id), or two tables with a one-to-many relationship using a foreign key. My preference is for one self-referencing table, as it easily allows extending into as many sub-regions as required. In general, do people veer away from self-referencing tables, or are they A-OK?
Nothing wrong with self-referencing tables. It is the common database design pattern for deeply (infinity?) nested hierarchies.
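If it helps to see it concretely, here is a minimal sketch of the single self-referencing table as a JPA entity in Java; the entity and column names mirror the question's UK/London example and are otherwise invented:

import jakarta.persistence.*;
import java.util.ArrayList;
import java.util.List;

@Entity
public class Region {

    @Id
    @GeneratedValue
    private Long id;

    private String name;

    // Self-reference: "London" points at "UK"; top-level regions simply have a null parent.
    @ManyToOne
    @JoinColumn(name = "parent_id")
    private Region parent;

    @OneToMany(mappedBy = "parent")
    private List<Region> children = new ArrayList<>();

    protected Region() {} // required by JPA

    public Region(String name, Region parent) {
        this.name = name;
        this.parent = parent;
    }
}

Nesting to any depth then costs nothing extra: a country, a city and a district can all live in the same table, each pointing at its parent.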
{ "source": [ "https://softwareengineering.stackexchange.com/questions/177853", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2297/" ] }
177,875
In almost every article I've read comparing Git and Mercurial, it seems like Mercurial has a better command-line UX, with each command being limited to one idea only (unlike, say, git checkout). But at some point Git suddenly seemed to become super popular, and the number of Git submitters on the Debian popcon graph (see graph image below) literally exploded. Source: Debian. What happened in 2010-01 that suddenly changed things? It looks like GitHub was founded earlier than that - in 2008.
The package "gnuit" (GNU Interactive Tools, a file browser/viewer and process viewer) was called "git" in Debian up until 2009-09-09, while git was called "git-core". Therefore, a better graph to look at is: Which shows that the popularity did not rise dramatically (take the green line for the left part until they cross, then take the red line).
{ "source": [ "https://softwareengineering.stackexchange.com/questions/177875", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/5940/" ] }
178,038
I don't think Objective C was in use from the beginning of Apple hardware development. What languages did app developers use for the earlier Apple computers, such as Apple II or Mac Classic?
In 1985 Larry Tesler developed a Pascal flavour for Apple, Object Pascal , that became the standard language for System 6 . It was based on Clascal , a 1983 Pascal variant for the Lisa , also developed at Apple. Object Pascal was used in MacApp , Apple's primary application framework at the time. MacApp 3.0, released in 1991, was re-written in C++ and Apple subsequently dropped support for Object Pascal in favour of C++ when they moved from Motorola's 68K chips to PowerPC. Borland's Object Pascal, that today lives on as Embarcadero Delphi , started out in 1986 as a set of extensions to Turbo Pascal , that were intended to be similar to Apple's Object Pascal. Niklaus Wirth, Pascal's originator, was consulted by both Apple and Borland for their respective variants. Conversely, Objective C was NeXTSTEP's main language and was introduced at Apple only after they purchased NeXT in 1996.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/178038", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/4526/" ] }
178,149
I'm working on a software problem at work that is fairly generic, but I can't find a library I like to solve it, so I'm considering writing one myself (at least a bare-bones version). I'll be writing some if not all of the 1.0 version at work, since I need it for the project. If turns out well I might want to bring the work home and polish it up just for fun, and maybe release it as an open-source project. However, I'm concerned that if I wrote the 1.0 version at work I may not be allowed to do this from a legal sense. Obviously I could ask my boss (who probably won't care), but I'm curious how other programmers have dealt with this issue and where the law stands here. My one sentence question is, When is it okay (legally/ethically) to open-source a software tool originally written by you for work at work? What if you have expanded the original source significantly during off-hours? Follow-up: Suppose I write the whole thing at home on my time then simply use it at work, does that change things drastically? Follow-up 2 : Note that I'm not trying to rip off my employer (I understand that they're paying me to build products that they own)--I'm just wondering if there's a fair way of doing this for all involved... It would be nice if some nonprofit down the road could use my code and save them some time. Also, there's another issue at stake. If I write the library for a very simple, generic thing (like HTML tables in Javascript), does that mean I can never again do so on my own time without putting myself at legal risk (even if it was a whole new fresh rewrite or a segment of a larger project). Am I surrendering my right to write code for this sort of project for the rest of my life (without this company's permission), since the code at work might still be somewhere in my brain influencing me? This seems related to software patents, as a side-note.
It is almost never OK, legally or ethically, to release products that you have created using your employer's resources, or while being paid by the employer for your time, without permission. However, it depends on your employment contract. If you were paid by the company and/or used company resources to produce the product, chances are that the work belongs to your company. You need to go through your supervisor and your legal department. Depending on your employment contract, there might also be restrictions on working on related technologies or using knowledge gained at your employer in projects, even if you work on them using personal resources on your own time. If you are using paid time or company resources, or are developing something that might be considered related to the business of your company, always seek guidance from your manager and/or legal department to ensure that you aren't violating any agreements and to get the appropriate permission to work on projects. Typically, it's easier to do this before you begin work, as it might change the approaches that you take on the project. Writing products for use at work on your own time is questionable and depends on the regulations that your employer must adhere to. At the very least, you could be interfering with your employer's schedule, budget, and estimates by taking work offline. In some cases, you could be violating contractual regulations by creating products outside of time that is tracked and billed appropriately.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/178149", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/30483/" ] }
178,177
There is an option in C# to execute code unchecked. It's generally not advised to do so, as managed code is much safer and it overcomes a lot of problems. However I am wondering, if you're sure your code won't cause errors, and you know how to handle memory then why (if you like fast code) follow the general advice? I am wondering this since I wrote a program for a video camera, which required some extremely fast bitmap manipulation. I made some fast graphical algorithms myself, and they work excellent on the bitmaps using unmanaged code. Now I wonder in general, if you're sure you don't have memory leaks, or risks of crashes, why not use unmanaged code more often? PS my background: I kinda rolled into this programming world and I work alone (I do so for a few years) and so I hope this software design question isn't that strange. I don't really have other people out there like a teacher to ask such things.
Well, it's mostly a case of the age-old adage:

Don't optimize (for experts only).
Don't optimize yet.

But actually I can think of three main reasons to avoid unsafe code.

Bugs: The critical part of your question is "if you're sure your code won't cause errors". Well, how can you be absolutely, totally sure? Did you use a prover with formal methods that guaranteed your code correct? One thing is certain in programming, and that is that you will have bugs. When you take off a safety, you allow new sorts of bugs to creep through. When you let the garbage collector take care of the memory for you, a lot of problems go away.

Not always as fast as you think: The other point is that, depending on the problem, the gain may not be that great. Although I can't seem to find it right now, I remember a study by Google comparing the speed of Java, Scala, Go and C++. Once optimized to the ground, C++ was of course much faster. But the algorithms programmed in the "idiomatic" way were not really that much faster. Idiomatic in the sense that they were using standard structures and idioms (STL containers, no unrolled loops, etc.). Microsoft did a similar experiment with C# and C++. Raymond Chen, one of the top Microsoft engineers, had to write his own implementation of std::string to beat C# (see: http://www.codinghorror.com/blog/2005/05/on-managed-code-performance-again.html ). For much less effort, you get pretty decent performance in managed code, so it's often not worth the trouble.

Reusability: Unsafe code can only be used in a full-trust environment. For example, in an ASP.NET server you usually can't use unsafe code, since it would be pretty easy to introduce a vulnerability via a buffer overflow. Another example would be ClickOnce, or an application accessed from a network share. So if you plan to use your code in a variety of deployment scenarios, unsafe code is out of the game.

So basically: it's frowned upon because it may introduce unnecessary bugs, it may well be for no gain at all, and it reduces the reusability of your code. But if your scenario really requires it for performance (and you have data to prove it), you are an experienced programmer who knows how to handle memory, and your code will be used in a controlled environment, then sure, go for it.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/178177", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/72591/" ] }
178,218
When I write code in Visual Studio, ReSharper (God bless it!) often suggests me to change my old-school for loop in the more compact foreach form. And often, when I accept this change, ReSharper goes a step forward, and suggests me to change it again, in a shiny LINQ form. So, I wonder: are there some real advantages, in these improvements? In pretty simple code execution, I cannot see any speed boost (obviously), but I can see the code becoming less and less readable... So I wonder: is it worth it?
for vs. foreach There is a common confusion that those two constructs are very similar and that both are interchangeable like this: foreach (var c in collection) { DoSomething(c); } and: for (var i = 0; i < collection.Count; i++) { DoSomething(collection[i]); } The fact that both keywords start by the same three letters doesn't mean that semantically, they are similar. This confusion is extremely error-prone, especially for beginners. Iterating through a collection and doing something with the elements is done with foreach ; for doesn't have to and shouldn't be used for this purpose , unless you really know what you're doing. Let's see what's wrong with it with an example. At the end, you'll find the full code of a demo application used to gather the results. In the example, we are loading some data from the database, more precisely the cities from Adventure Works, ordered by name, before encountering "Boston". The following SQL query is used: select distinct [City] from [Person].[Address] order by [City] The data is loaded by ListCities() method which returns an IEnumerable<string> . Here is what foreach looks like: foreach (var city in Program.ListCities()) { Console.Write(city + " "); if (city == "Boston") { break; } } Let's rewrite it with a for , assuming that both are interchangeable: var cities = Program.ListCities(); for (var i = 0; i < cities.Count(); i++) { var city = cities.ElementAt(i); Console.Write(city + " "); if (city == "Boston") { break; } } Both return the same cities, but there is a huge difference. When using foreach , ListCities() is called one time and yields 47 items. When using for , ListCities() is called 94 times and yields 28153 items overall. What happened? IEnumerable is lazy . It means that it will do the work only at the moment when the result is needed. Lazy evaluation is a very useful concept, but has some caveats, including the fact that it's easy to miss the moment(s) where the result will be needed, especially in the cases where the result is used multiple times. In a case of a foreach , the result is requested only once. In a case of a for as implemented in the incorrectly written code above , the result is requested 94 times , i.e. 47 × 2: Every time cities.Count() is called (47 times), Every time cities.ElementAt(i) is called (47 times). Querying a database 94 times instead of one is terrible, but not the worse thing which may happen. Imagine, for example, what would happen if the select query would be preceded by a query which also inserts a row in the table. Right, we would have for which will call the database 2,147,483,647 times, unless it hopefully crashes before. Of course, my code is biased. I deliberately used the laziness of IEnumerable and wrote it in a way to repeatedly call ListCities() . One can note that a beginner will never do that, because: The IEnumerable<T> doesn't have the property Count , but only the method Count() . Calling a method is scary, and one can expect its result to not be cached, and not suitable in a for (; ...; ) block. The indexing is unavailable for IEnumerable<T> and it's not obvious to find the ElementAt LINQ extension method. Probably most beginners would just convert the result of ListCities() to something they are familiar with, like a List<T> . 
var cities = Program.ListCities(); var flushedCities = cities.ToList(); for (var i = 0; i < flushedCities.Count; i++) { var city = flushedCities[i]; Console.Write(city + " "); if (city == "Boston") { break; } } Still, this code is very different from the foreach alternative. Again, it gives the same results, and this time the ListCities() method is called only once, but yields 575 items, while with foreach , it yielded only 47 items. The difference comes from the fact that ToList() causes all data to be loaded from the database. While foreach requested only the cities before "Boston", the new for requires all cities to be retrieved and stored in memory. With 575 short strings, it probably doesn't make much difference, but what if we were retrieving only few rows from a table containing billions of records? So what is foreach , really? foreach is closer to a while loop. The code I previously used: foreach (var city in Program.ListCities()) { Console.Write(city + " "); if (city == "Boston") { break; } } can be simply replaced by: using (var enumerator = Program.ListCities().GetEnumerator()) { while (enumerator.MoveNext()) { var city = enumerator.Current; Console.Write(city + " "); if (city == "Boston") { break; } } } Both produce the same IL. Both have the same result. Both have the same side effects. Of course, this while can be rewritten in a similar infinite for , but it would be even longer and error-prone. You're free to choose the one you find more readable. Want to test it yourself? Here's the full code: using System; using System.Collections.Generic; using System.Data; using System.Data.SqlClient; using System.Diagnostics; using System.Linq; public class Program { private static int countCalls; private static int countYieldReturns; public static void Main() { Program.DisplayStatistics("for", Program.UseFor); Program.DisplayStatistics("for with list", Program.UseForWithList); Program.DisplayStatistics("while", Program.UseWhile); Program.DisplayStatistics("foreach", Program.UseForEach); Console.WriteLine("Press any key to continue..."); Console.ReadKey(true); } private static void DisplayStatistics(string name, Action action) { Console.WriteLine("--- " + name + " ---"); Program.countCalls = 0; Program.countYieldReturns = 0; var measureTime = Stopwatch.StartNew(); action(); measureTime.Stop(); Console.WriteLine(); Console.WriteLine(); Console.WriteLine("The data was called {0} time(s) and yielded {1} item(s) in {2} ms.", Program.countCalls, Program.countYieldReturns, measureTime.ElapsedMilliseconds); Console.WriteLine(); } private static void UseFor() { var cities = Program.ListCities(); for (var i = 0; i < cities.Count(); i++) { var city = cities.ElementAt(i); Console.Write(city + " "); if (city == "Boston") { break; } } } private static void UseForWithList() { var cities = Program.ListCities(); var flushedCities = cities.ToList(); for (var i = 0; i < flushedCities.Count; i++) { var city = flushedCities[i]; Console.Write(city + " "); if (city == "Boston") { break; } } } private static void UseForEach() { foreach (var city in Program.ListCities()) { Console.Write(city + " "); if (city == "Boston") { break; } } } private static void UseWhile() { using (var enumerator = Program.ListCities().GetEnumerator()) { while (enumerator.MoveNext()) { var city = enumerator.Current; Console.Write(city + " "); if (city == "Boston") { break; } } } } private static IEnumerable<string> ListCities() { Program.countCalls++; using (var connection = new SqlConnection("Data Source=mframe;Initial 
Catalog=AdventureWorks;Integrated Security=True")) { connection.Open(); using (var command = new SqlCommand("select distinct [City] from [Person].[Address] order by [City]", connection)) { using (var reader = command.ExecuteReader(CommandBehavior.SingleResult)) { while (reader.Read()) { Program.countYieldReturns++; yield return reader["City"].ToString(); } } } } } } And the results: --- for --- Abingdon Albany Alexandria Alhambra [...] Bonn Bordeaux Boston The data was called 94 time(s) and yielded 28153 item(s). --- for with list --- Abingdon Albany Alexandria Alhambra [...] Bonn Bordeaux Boston The data was called 1 time(s) and yielded 575 item(s). --- while --- Abingdon Albany Alexandria Alhambra [...] Bonn Bordeaux Boston The data was called 1 time(s) and yielded 47 item(s). --- foreach --- Abingdon Albany Alexandria Alhambra [...] Bonn Bordeaux Boston The data was called 1 time(s) and yielded 47 item(s). LINQ vs. traditional way As for LINQ, you may want to learn functional programming (FP) - not C# FP stuff, but real FP language like Haskell. Functional languages have a specific way to express and present the code. In some situations, it is superior to non-functional paradigms. FP is known being much superior when it comes to manipulating lists ( list as a generic term, unrelated to List<T> ). Given this fact, the ability to express C# code in a more functional way when it comes to lists is rather a good thing. If you're not convinced, compare the readability of code written in both functional and non-functional ways in my previous answer on the subject.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/178218", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/36291/" ] }
178,317
Problem Summary: Long story short, I inherited a code base and a development team I am not allowed to replace and the use of God Objects is a big issue. Going forward, I want to have us re-factor things but I am getting push-back from the teams who want to do everything with God Objects "because its easier" and this means I would not be allowed to re-factor. I pushed back citing my years of development experience, that I'm the new boss who was hired to know these things, etc, and so did the third party offshore companies account sales rep, and this is now at the executive level and my meeting is tomorrow and I want to go in with a lot of technical ammunition to advocate best practices because I feel it will be cheaper in the long run (And I personally feel that is what the third party is worried about) for the company. My issue is from a technical level, I know its good long term but I'm having trouble with the ultra short term and 6 months term, and while its something I "know" I can't prove it with references and cited resources outside of one person (Robert C. Martin, aka Uncle Bob), as that is what I am being asked to do as I have been told having data from one person and only one person (Robert C Martin) is not good enough of an argument. Question: What are some resources I can cite directly (Title, year published, page number, quote) by well known experts in the field that explicitly say this use of "God" Objects/Classes/Systems is bad (or good, since we are looking for the most technically valid solution)? Research I have already done: I have a number of books here and I have searched their indexes for the use of the words "god object" and "god class". I found that oddly its almost never used and the copy of the GoF book I have for example, never uses it (At least according to the index in front of me) but I have found it in the two books below, but I want more I can use. I checked the Wikipedia page for "God Object" and its currently a stub with little reference links so although I personally agree with that it says, it doesn't have much I can use in an environment where personal experience is not considered valid. The book cited is also considered too old to be valid by the people I am debating these technical points with as the argument they are making is that "it was once thought to be bad but nobody could prove it, and now modern software says "god" objects are good to use". I personally believe that this statement is incorrect, but I want to prove the truth, whatever it is. In Robert C Martin's "Agile Principles, Patterns, and Practices in C#" (ISBN: 0-13-185725-8, hardcover) where on page 266 it states "Everybody knows that god classes are a bad idea. We don't want to concentrate all the intelligence of a system into a single object or a single function. One of the goals of OOD is the partitioning and distribution of behavior into many classes and many function." -- And then goes on to say sometimes its better to use God Classes anyway sometimes (Citing micro-controllers as an example). In Robert C Martin's "Clean Code: A Handbook of Agile Software Craftsmanship" page 136 (And only this page) talks about the "God class" and calls it out as a prime example of a violation of the "classes should be small" rule he uses to promote the Single Responsibility Principle" starting on on page 138. The problem I have is all my references and citations come from the same person (Robert C. Martin), and am from the same single person/source. 
I am being told that because he is just one view, my desire to not use "God Classes" is invalid and not accepted as a standard best practice in the software industry. Is this true? Am I doing things wrong from a technical perspective by trying to keep to the teaching of Uncle Bob? God Objects and Object Oriented Programming and Design: The more I think of this the more I think this is more something you learn when you study OOP and it's never explicitly called out; Its implicit to good design is my thinking (Feel free to correct me, please, as I want to learn), the problem is I "know" this, but but not everybody does, so in this case its not considered a valid argument because I am effectively calling it out as universal truth when in fact most people are statistically ignorant of it since statistically most people are not programmers. Conclusion: I am at a loss on what to search for to get the best additional results to cite, since they are making a technical claim and I want to know the truth and be able to prove it with citations like a real engineer/scientist, even if I am biased against god objects due to my personal experience with code that used them. Any assistance or citations would be deeply appreciated.
The case for any change of practice is made by identifying the pain points created by the existing design. Specifically, you need to identify what is harder than it should be because of the existing design, what is fragile, what is breaking now, what behaviors can't be implemented in a simple manner as a direct (or even somewhat indirect) result of the current implementation, or, in some cases, how performance suffers, how much time it takes to bring a new team member up to speed, etc. Second, working code trumps any arguments about theory or good design. This is true even for bad code, unfortunately. So you're going to have to provide a better alternative, which means you, as the advocate for better patterns and practices, will need to refactor to tease out a better design. Find a narrow, tracer-bullet style plane through the existing design, and implement a solution that, perhaps, for iteration one, keeps the god object implementation working, but defers actual implementation to the new design. Then write some code that takes advantage of this new design, and show off what you win because of this change, whether it's performance, maintainability, features, correction of bugs or race conditions, or reduction of cognitive load for the developer. It's often a challenge to find a small enough surface area to attack in poorly architected systems, it may take longer than you'd like to deliver some initial value, and the initial payoff may not be that impressive to everybody, but you can also work on finding some advocates of your new approach if you pair on it with team members that are at least slightly sympathetic. Lamenting the God Object only works when you're preaching to the choir. It's a tool for naming a problem, and only works for solving it when you've got a receptive audience that's senior and motivated enough to do something about it. Fixing the God object wins the argument. Since your immediate concern appears to be executive buy-in, I think you're best off making a case that replacing this code needs to be a strategic goal and tie those to the business objectives that you're responsible for. I think you can make a case that you can provide some technical direction by first working on a technical spike on what you think should be done to replace it, preferably involving resources from one or two technical people that have reservations about the current design. I think you've found enough resources to justify your argument; people in such meetings will only pay attention to summary of your research, and they'll stop listening after you mention two or three corroborating sources. Your focus initially should be in getting buy-off to work the problem you see, not necessarily proving someone else wrong or yourself right. This is a social problem, not a logical one. In a technology leadership role, you need to tie any of your initiatives to business goals, so the most important thing for making your case to executives is what the work will do for those objectives. Because you're also considered the "new guy," you can't just expect people to throw away their work or expect to rapidly fall in line; you need to build some trust by proving that you can deliver. As a long term concern, in a leadership role, you also need to learn to become focused on results, but not necessarily be attached to the specifics of the outcome. 
You're now there to provide strategic direction, remove tactical obstacles from progress by your team, and offer your team mentorship, not win battles of credibility with your own team members. Making a top-down decision will rarely be credible unless you have some skin in the game; if you are in a similar situation all over again, you should focus more on consensus building within your organization rather than escalating once you feel the situation is out of control. But considering where you are now, I'd say your best bet is to argue that your approach will bring measurable long-term benefits based on your experience and that it's in line with the work by well-known practitioners like uncle Bob and co., and that you'd like to spend a few days/weeks leading by example on a narrow refactoring of the highest bang-for-buck aspect, to demonstrate what your view of good design should look like. You'll need to align whatever your case is to specific business goals beyond your personal preferences, however.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/178317", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/74212/" ] }
178,352
I find myself pondering this question from time to time, again and again. I want to do things the right way: to write clean, understandable and correct code that is easy to maintain. However, what I end up doing is writing patch upon patch, just because there is no time, clients are waiting, a bug should be fixed overnight, the company is losing money on this problem, a manager is pressing hard, etc., etc. I know perfectly well that in the long term I am wasting more time on these patches, but as this time spans months of work, nobody cares. Also, as one of my managers used to say: "we don't know if there will be a long term if we don't fix it now." I am sure I am not the only one trapped in these endless real/ideal choice cycles. So how do you, my fellow programmers, cope with this? UPDATE: Thank you all for this interesting discussion. It is sad that so many people have to choose daily between the quantity and the quality of their code. Still, surprisingly many people think it is possible to win this battle, so thank you all for the encouragement.
Actually, this is a very difficult question because there is no absolutely right answer. In our organization we have been putting better processes in place to produce better code. We updated our coding standards to reflect how we, as a group, write code, and we have instituted a very strong test/refactor/design/code loop. We deliver continually or at least try to. At the very least, we have something to show the stakeholders every two weeks. We feel that we are software craftsmen and morale is high. But, despite all these checks and balances, we suffer from the same problem you do. At the end of the day, we are delivering a product to a paying customer. This customer has needs and expectations, realistic or not. Often the sales team gets us into trouble just to get a commission. Sometimes the customer has go-live expectations that are unrealistic or demand change even though we have a contract in place. Timelines happen. PTO and lost days during a sprint can happen. All sorts of little things can culminate in a situation where we are forced into the conundrum of "do it right" or "do it ASAP." Almost always, we are forced to "do it ASAP." As software craftsmen, developers, programmers, people who code for a job -- it is our natural inclination to "do it right." "Do it ASAP" is what happens when we work to survive, as most of us do. The balance is hard. I always start by approaching executive management (I am Director of Software Development and an active developer in that group) to defend the schedule, team and work being done. Usually at that point I'm told the customer has to have it now and it has to work. When I know there is no room for negotiation or give, I go back and work with the team to see what corners can be cut. I won't sacrifice quality in the feature that is driving the customer's need to get it ASAP, but something will go and it will get pushed into another sprint. This is almost always OK. When you are unable to deliver because there are so many bugs, code quality is bad and getting worse, and timelines are getting shorter, then you are in a different situation than what I describe. In that case, current or past mismanagement, bad development practices that led to poor code quality, or other factors may be taking you on a death march. My opinion here is to do your best to defend good code and best practices to start pulling your company out of the trenches. If there isn't a single colleague willing to listen or go to bat for the group against management, then it might be time to start looking for a new job. In the end, real life trumps all. If you are working for a company that needs to sell what you are developing, then you will encounter this trade-off daily. Only by striving to achieve good development principles early on have I been successful at staying ahead of the code quality curve. The push and pull between developers and salesmen reminds me of a joke. "What's the difference between a used car salesman and a software salesman? At least the used car salesman knows he is lying." Keep your chin up and try to "do the right thing" as you go.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/178352", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/51345/" ] }
178,402
So I got started on a real project of mine on GitHub; things are going pretty well and ideas are flowing a lot faster than I initially thought. In order to keep things organized, I set up some branches so I can develop different features separately. Now when I push a branch to GitHub, I see that section with two buttons, Pull Request and Compare, next to the name of the branch I recently pushed. I understand the purpose of the Compare button, but I don't get why I would want to create a pull request on my own repo. Can someone explain why I would do that? Is it useful to make a pull request on my own repo if I am the only developer?
For many (perhaps most) individual developers working on their own, creating pull requests is probably not worthwhile. However, I can think of at least one potential reason to do it: Pull requests can be used to keep track of your project history more easily. A pull request has an issue ID which can be referred to from commit messages and in a change-log, which allows you to easily go back and find the merge point and set of merged commits for a particular change, without having to retain your feature branches indefinitely. For example, in Pioneer (shameless plug), when we merge a pull request, we add an item to the changelog , with a one-line description of the change and a reference to the pull request ID. Of course, Pioneer has several developers, but the same mechanism could be useful for a developer working on his or her own. This may be less useful if you decide to stick to a linear commit history (by rebasing your feature branches before merge, so that the merge can always be performed as a fast-forward), and if you are very disciplined about editing and squashing your commits before merging to master, because in that case the individual commit messages can be used as a changelog in themselves.
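As a rough illustration of the bookkeeping described above (the branch name and pull request number are invented for the example), the command-line side might look something like this:

```
# create a feature branch and do some work
git checkout -b feature/ship-log
git commit -m "Add ship log panel"

# open a pull request for the branch on GitHub (say it gets the ID #42),
# then merge it; --no-ff keeps an explicit merge commit as the marker
git checkout master
git merge --no-ff feature/ship-log -m "Merge pull request #42: ship log panel"

# later, find the merge points again without keeping the branches around
git log --merges --oneline
```

A changelog entry can then simply reference #42, and GitHub will link that ID back to the merged pull request and its commits.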
{ "source": [ "https://softwareengineering.stackexchange.com/questions/178402", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/38148/" ] }
178,488
I am trying to understand the SOLID principles of OOP and I've come to the conclusion that LSP and OCP have some similarities (if not more). The open/closed principle states that "software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification". LSP, in simple words, states that any instance of Foo can be replaced with any instance of Bar which is derived from Foo, and the program will work the very same way. I'm not a pro OOP programmer, but it seems to me that LSP is only possible if Bar, derived from Foo, does not change anything in it but only extends it. That means that in a particular program, LSP holds only when OCP holds, and OCP holds only if LSP holds. That means they are equivalent. Correct me if I'm wrong; I really want to understand these ideas. Many thanks for an answer.
Gosh, there are some weird misconceptions on what OCP and LSP and some are due to mismatch of some terminologies and confusing examples. Both principles are only the "same thing" if you implement them the same way. Patterns usually follow the principles in one way or another with few exceptions. The differences will be explained further down but first let us take a dive into the principles themselves: Open-Closed Principle (OCP) According to Uncle Bob : You should be able to extend a classes behavior, without modifying it. Note that the word extend in this case doesn't necessarily mean that you should subclass the actual class that needs the new behavior. See how I mentioned at first mismatch of terminology? The keyword extend only means subclassing in Java, but the principles are older than Java. The original came from Bertrand Meyer in 1988: Software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification. Here it is much clearer that the principle is applied to software entities . A bad example would be override the software entity as you're modifying the code completely instead of providing some point of extension. The behavior of the software entity itself should be extensible and a good example of this is implementation of the Strategy-pattern (because it is the easiest to show of the GoF-patterns bunch IMHO): // Context is closed for modifications. Meaning you are // not supposed to change the code here. public class Context { // Context is however open for extension through // this private field private IBehavior behavior; // The context calls the behavior in this public // method. If you want to change this you need // to implement it in the IBehavior object public void doStuff() { if (this.behavior != null) this.behavior.doStuff(); } // You can dynamically set a new behavior at will public void setBehavior(IBehavior behavior) { this.behavior = behavior; } } // The extension point looks like this and can be // subclassed/implemented public interface IBehavior { public void doStuff(); } In the example above the Context is locked for further modifications. Most programmers would probably want to subclass the class in order to extend it but here we don't because it assumes it's behavior can be changed through anything that implements the IBehavior interface. I.e. the context class is closed for modification but open for extension . It actually follows another basic principle because we're putting the behavior with object composition instead of inheritance: "Favor ' object composition ' over ' class inheritance '." (Gang of Four 1995:20) I'll let the reader read up on that principle as it is outside the scope of this question. To continue with the example, say we have the following implementations of the IBehavior interface: public class HelloWorldBehavior implements IBehavior { public void doStuff() { System.println("Hello world!"); } } public class GoodByeBehavior implements IBehavior { public void doStuff() { System.out.println("Good bye cruel world!"); } } Using this pattern we can modify the behavior of the context at runtime, through the setBehavior method as extension point. // in your main method Context c = new Context(); c.setBehavior(new HelloWorldBehavior()); c.doStuff(); // prints out "Hello world!" c.setBehavior(new GoodByeBehavior()); c.doStuff(); // prints out "Good bye cruel world!" So whenever you want to extend the "closed" context class, do it by subclassing it's "open" collaborating dependency. 
This is clearly not the same thing as subclassing the context itself yet it is OCP. LSP makes no mention about this either. Extending with Mixins Instead of Inheritance There are other ways to do OCP other than subclassing. One way is to keep your classes open for extension through the use of mixins . This is useful e.g. in languages that are prototype-based rather than class-based. The idea is to amend a dynamic object with more methods or attributes as needed, in other words objects that blends or "mixes in" with other objects. Here is a javascript example of a mixin that renders a simple HTML template for anchors: // The mixin, provides a template for anchor HTML elements, i.e. <a> var LinkMixin = { render: function() { return '<a href="' + this.link +'">' + this.content + '</a>; } } // Constructor for a youtube link var YoutubeLink = function(content, youtubeId) { this.content = content; this.setLink(this.youtubeId); }; // Methods are added to the prototype YoutubeLink.prototype = { setLink: function(youtubeid) { this.link = 'http://www.youtube.com/watch?v=' + youtubeid; } }; // Extend YoutubeLink prototype with the LinkMixin using // underscore/lodash extend _.extend(YoutubeLink.protoype, LinkMixin); // When used: var ytLink = new YoutubeLink("Cool Movie!", "idOaZpX8lnA"); console.log(ytLink.render()); // will output: // <a href="http://www.youtube.com/watch?=vidOaZpX8lnA">Cool Movie!</a> The idea is to extend the objects dynamically and the advantage of this is that objects may share methods even if they are in completely different domains. In the above case you can easily create other kinds of html anchors by extending your specific implementation with the LinkMixin . In terms of OCP, the "mixins" are extensions. In the example above the YoutubeLink is our software entity that is closed for modification, but open for extensions through the use of mixins. The object hierarchy is flattened out which makes it impossible to check for types. However this is not really a bad thing, and I'll explain in further down that checking for types is generally a bad idea and breaks the idea with polymorphism. Note that it is possible to do multiple inheritance with this method as most extend implementations can mix-in multiple objects: _.extend(MyClass, Mixin1, Mixin2 /* [, ...] */); The only thing you need to keep in mind is to not collide the names, i.e. mixins happen to define the same name of some attributes or methods as they will be overridden. In my humble experience this is a non-issue and if it does happen it is an indication of flawed design. Liskov's Substitution Principle (LSP) Uncle Bob defines it simply by: Derived classes must be substitutable for their base classes. This principle is old, in fact Uncle Bob's definition doesn't differentiate the principles as that makes LSP still closely related to OCP by the fact that, in the above Strategy example, the same supertype is used ( IBehavior ). So lets look at it's original definition by Barbara Liskov and see if we can find out something else about this principle that looks like a mathematical theorem: What is wanted here is something like the following substitution property: If for each object o1 of type S there is an object o2 of type T such that for all programs P defined in terms of T , the behavior of P is unchanged when o1 is substituted for o2 then S is a subtype of T . Lets shrug on this for a while, notice as it doesn't mention classes at all. 
In JavaScript you can actually follow LSP even though it is not explicitly class-based. If your program has a list of at least a couple of JavaScript objects that: needs to be computed the same way, have the same behavior, and are otherwise in some way completely different ...then the objects are regarded as having the same "type" and it doesn't really matter for the program. This is essentially polymorphism . In generic sense; you shouldn't need to know the actual subtype if you're using it's interface. OCP does not say anything explicit about this. It also actually pinpoints a design mistake most novice programmers do: Whenever you're feeling the urge to check the subtype of an object, you're most likely doing it WRONG. Okay, so it might not be wrong all the time but if you have the urge to do some type checking with instanceof or enums, you might be doing the program a bit more convoluted for yourself than it needs to be. But this is not always the case; quick and dirty hacks to get things working is an okay concession to make in my mind if the solution is small enough, and if you practice merciless refactoring , it may get improved once changes demand it. There are ways around this "design mistake", depending on the actual problem: The super class is not calling the prerequisites, forcing the caller to do so instead. The super class is missing a generic method that the caller needs. Both of these are common code design "mistakes". There are a couple of different refactorings you can do, such as pull-up method , or refactor to a pattern such the Visitor pattern . I actually like the Visitor pattern a lot as it can take care of large if-statement spaghetti and it is simpler to implement than what you'd think on existing code. Say we have the following context: public class Context { public void doStuff(string query) { // outcome no. 1 if (query.Equals("Hello")) { System.out.println("Hello world!"); } // outcome no. 2 else if (query.Equals("Bye")) { System.out.println("Good bye cruel world!"); } // a change request may require another outcome... } } // usage: Context c = new Context(); c.doStuff("Hello"); // prints "Hello world" c.doStuff("Bye"); // prints "Bye" The outcomes of the if-statement can be translated into their own visitors as each is depending on some decision and some code to run. We can extract these like this: public interface IVisitor { public bool canDo(string query); public void doStuff(); } // outcome 1 public class HelloVisitor implements IVisitor { public bool canDo(string query) { return query.Equals("Hello"); } public void doStuff() { System.out.println("Hello World"); } } // outcome 2 public class ByeVisitor implements IVisitor { public bool canDo(string query) { return query.Equals("Bye"); } public void doStuff() { System.out.println("Good bye cruel world"); } } At this point, if the programmer did not know about the Visitor pattern, he'd instead implement the Context class to check if it is of some certain type. Because the Visitor classes have a boolean canDo method, the implementor can use that method call to determine if it is the right object to do the job. 
The context class can use all visitors (and add new ones) like this: public class Context { private ArrayList<IVisitor> visitors = new ArrayList<IVisitor>(); public Context() { visitors.add(new HelloVisitor()); visitors.add(new ByeVisitor()); } // instead of if-statements, go through all visitors // and use the canDo method to determine if the // visitor object is the right one to "visit" public void doStuff(string query) { for(IVisitor visitor : visitors) { if (visitor.canDo(query)) { visitor.doStuff(); break; // or return... it depends if you have logic // after this foreach loop } } } // dynamically adds new visitors public void addVisitor(IVisitor visitor) { if (visitor != null) visitors.add(visitor); } } Both patterns follow OCP and LSP, however they are both pinpointing different things about them. So how does code look like if it violates one of the principles? Violating one principle but following the other There are ways to break one of the principles but still have the other be followed. The examples below seem contrived, for good reason, but I've actually seen these popping up in production code (and even worser): Follows OCP but not LSP Lets say we have the given code: public interface IPerson {} public class Boss implements IPerson { public void doBossStuff() { ... } } public class Peon implements IPerson { public void doPeonStuff() { ... } } public class Context { public Collection<IPerson> getPersons() { ... } } This piece of code follows the open-closed principle. If we're calling the context's GetPersons method, we'll get a bunch of persons all with their own implementations. That means that IPerson is closed for modification, but open for extension. However things take a dark turn when we have to use it: // in some routine that needs to do stuff with // a collection of IPerson: Collection<IPerson> persons = context.getPersons(); for (IPerson person : persons) { // now we have to check the type... :-P if (person instanceof Boss) { ((Boss) person).doBossStuff(); } else if (person instanceof Peon) { ((Peon) person).doPeonStuff(); } } You have to do type checking and type conversion! Remember how I mentioned above how type checking is a bad thing ? Oh no! But fear not, as also mentioned above either do some pull-up refactoring or implement a Visitor pattern. In this case we can simply do a pull up refactoring after adding a general method: public class Boss implements IPerson { // we're adding this general method public void doStuff() { // that does the call instead this.doBossStuff(); } public void doBossStuff() { ... } } public interface IPerson { // pulled up method from Boss public void doStuff(); } // do the same for Peon The benefit now is that you don't need to know the exact type anymore, following LSP: // in some routine that needs to do stuff with // a collection of IPerson: Collection<IPerson> persons = context.getPersons(); for (IPerson person : persons) { // yay, no type checking! 
person.doStuff(); } Follows LSP but not OCP Lets look at some code that follows LSP but not OCP, it is kind of contrived but bear with me on this one it's very subtle mistake: public class LiskovBase { public void doStuff() { System.out.println("My name is Liskov"); } } public class LiskovSub extends LiskovBase { public void doStuff() { System.out.println("I'm a sub Liskov!"); } } public class Context { private LiskovBase base; // the good stuff public void doLiskovyStuff() { base.doStuff(); } public void setBase(LiskovBase base) { this.base = base } } The code does LSP because the context can use LiskovBase without knowing the actual type. You'd think this code follows OCP as well but look closely, is the class really closed ? What if the doStuff method did more than just print out a line? The answer if it follows OCP is simply: NO , it isn't because in this object design we're required to override the code completely with something else. This opens up the cut-and-paste can of worms as you have to copy code over from the base class to get things working. The doStuff method sure is open for extension, but it wasn't completely closed for modification. We can apply the Template method pattern on this. The template method pattern is so common in frameworks that you might have been using it without knowing it (e.g. java swing components, c# forms and components, etc.). Here is that one way to close the doStuff method for modification and making sure it stays closed by marking it with java's final keyword. That keyword prevents anyone from subclassing the class further (in C# you can use sealed to do the same thing). public class LiskovBase { // this is now a template method // the code that was duplicated public final void doStuff() { System.out.println(getStuffString()); } // extension point, the code that "varies" // in LiskovBase and it's subclasses // called by the template method above // we expect it to be virtual and overridden public string getStuffString() { return "My name is Liskov"; } } public class LiskovSub extends LiskovBase { // the extension overridden // the actual code that varied public string getStuffString() { return "I'm sub Liskov!"; } } This example follows OCP and seems silly, which it is, but imagine this scaled up with more code to handle. I keep seeing code deployed in production where subclasses completely override everything and the overridden code is mostly cut-n-pasted between implementations. It works, but as with all code duplication is also a set-up for maintenance nightmares. Conclusion I hope this all clears out some questions regarding OCP and LSP and the differences/similarities between them. It is easy to dismiss them as the same but the examples above should show that they aren't. Do note that, gathering from above sample code: OCP is about locking the working code down but still keep it open somehow with some kind of extension points. This is to avoid code duplication by encapsulating the code that changes as with the example of Template Method pattern. It also allows for failing fast as breaking changes are painful (i.e. change one place, break it everywhere else). For the sake of maintenance the concept of encapsulating change is a good thing, because changes always happen. LSP is about letting the user handle different objects that implement a supertype without checking what the actual type they are. This is inherently what polymorphism is about. 
This principle provides an alternative to do type-checking and type-conversion, that can get out of hand as the number of types grow, and can be achieved through pull-up refactoring or applying patterns such as Visitor.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/178488", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/61969/" ] }
178,733
I'm an aspiring game developer; I work on occasional indie games, and for a while I've been doing something which seemed like a bad practice at first, but I really want to get an answer from some experienced programmers here. Let's say I have a file called enumList.h where I declare all the enums I want to use in my game: // enumList.h enum materials_t { WOOD, STONE, ETC }; enum entity_t { PLAYER, MONSTER }; enum map_t { MAP_2D, MAP_3D }; // and so on. // Tile.h #include "enumList.h" #include <vector> class tile { // stuff }; The main idea is that I declare all the enums in the game in one file, and then include that file whenever I need a certain enum from it, rather than declaring it in the file where I need to use it. I do this because it keeps things clean: I can access every enum in one place rather than having pages open solely for accessing one enum. Is this a bad practice, and can it affect performance in any way?
I really think it is a bad practice. When we measure code quality, there's something we call "granularity". Your granularity suffers severely by putting all those enums in one single file and thus maintainability suffers, too. I'd have a single file per enum to find it quickly and group it with the behavioural code of the specific functionality (e.g. materials enum in folder where material behaviour is etc.); The main idea is that I declare all enums in the game in 1 file, and then import that file when I need to use a certain enum from it, rather than declaring it in the file where I need to use it. I do this because it makes things clean, I can access every enum in 1 place rather than having pages openned solely for accessing one enum. You might think it is clean, but in fact, it is not. It's coupling things that do not belong together functionality- and module-wise and decreases the modularity of your application. Depending on the size of your code base and how modular you want your code to be structured, this might evolve into a bigger problem and unclean code/dependencies in other parts of your system. However, if you just write a small, monolithic system, this does not necessarily apply. Yet, I would not do it like this even for a small monolithic system.
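To make the granularity point concrete, here is a minimal sketch of the one-file-per-enum layout suggested above (the file names and the helper function are invented for illustration, not taken from the original code):

```cpp
// materials/Material.h -- lives next to the material behaviour code
#ifndef MATERIALS_MATERIAL_H
#define MATERIALS_MATERIAL_H

enum materials_t { WOOD, STONE, ETC };

// behaviour that belongs to this enum stays in the same module
int materialHardness(materials_t m);

#endif

// entities/Entity.h -- a separate header, owned by the entity module
#ifndef ENTITIES_ENTITY_H
#define ENTITIES_ENTITY_H

enum entity_t { PLAYER, MONSTER };

#endif

// Tile.h now includes only what it actually uses
#include "materials/Material.h"

class tile {
    materials_t material;
    // stuff
};
```

With this layout, a change to entity_t no longer touches the header that tile depends on, so only the code that actually cares about entities is affected.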
{ "source": [ "https://softwareengineering.stackexchange.com/questions/178733", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/47546/" ] }
178,856
I'm in transition from a "writing unit tests" state to TDD. I watched Johannes Brodwall create a quite acceptable design while skipping any upfront architecture phase. I'll ask him soon whether it was real improvisation or whether he had some thoughts upfront. I also clearly understand that everyone has experience that keeps them from writing explicitly bad design patterns. But after participating in a code retreat, I find it hard to believe that writing tests first could save us from mistakes. At the same time, I also believe that writing tests after the code will lead to mistakes much faster. So this late-night question asks people who have been using TDD for a long time to share their experience with the designs that result from not thinking upfront: do they really practice it this way and mostly end up with a suitable design? Or is this just my limited understanding of TDD, and probably of agile?
From experience: TDD does not necessarily lead to good design. It's possible and really easy to get poorly designed program using TDD. TDD is just a tool to help us design faster using refactoring , it will never make the design of the program appear magically. TDD is a design help tool. The quality of the design you will get out of TDD depend largely on the capacity of the developer to use refactoring to Design Patterns, or refactoring to SOLID principles. The developer will make the design emerge using continuous refactoring. It's the most important aspect of TDD: Refactoring . Applying TDD without doing constant refactoring will often lead to really poorly designed systems which is worst than applying BDUF. TDD is often associated with the notion of " emergent design ". In agile, you often build your software incrementaly, feature by feature. So you can't know right from the start what architecture you will need, it will evolves/emerge with time. So any time you add a new piece of functionality you do some refactoring to improve the design of your application. It's continuous/incremental design. That's why TDD is key in a agile processes. BDUF is not incompatible with TDD. There is nothing wrong with starting a piece of sofware while having the design already in mind. TDD will then enable you to put that design in place quickly. And in the case the design you thought about was wrong, TDD will allow you to refactor it nicely and safely. Again, it's just a tool, it's there to help us develop our ideas faster and design stuff safely and faster. So you can either do BDUF+TDD or Emergent Design+TDD, the later is the more common in the agile community because of the iterative way of working. In all cases you should never try to do emergent design without being willing to do some constant refactoring, they both go together and It does really requires a lot of discipline. Things can quickly spin out of control if you keep adding new features without applying Refactoring. Refactoring apply to both production code and test code. An interesting article to read to get more insight on the question can be found here: Learning From Sudoku Solvers
{ "source": [ "https://softwareengineering.stackexchange.com/questions/178856", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/45935/" ] }
178,927
I have a little problem with the terms module and component. In my mind, a module is a bundle of classes which are only accessible via a well-defined interface. They hide all implementation details and are reusable. Modules declare the modules on which they depend. What is the difference from components? I looked it up in some books, but the description of components is very similar.
The terms are similar. I generally think of a "module" as being larger than a "component". A component is a single part, usually relatively small in scope, possibly general-purpose. Examples include UI controls and "background components" such as timers, threading assistants etc. A "module" is a larger piece of the whole, usually something that performs a complex primary function without outside interference. It could be the class library of an application that provides integration with e-mail or the database. It may be as large as a single application of a suite, such as the "Accounts Receivable module" of an ERP/accounting platform. I also think of "modules" as being more interchangeable. Components can be replicated, with new ones looking like old ones but being "better" in some way, but typically the design of the system is more strictly dependent upon a component (or a replacement designed to conform to that component's very specific behavior). In non-computer terms, a "component" may be the engine block of a car; you can tinker within the engine, even replace it entirely, but the car must have an engine, and it must conform to very rigid specifications such as dimensions, weight, mounting points, etc in order to replace the "stock" engine which the car was originally designed to have. A "module", on the other hand, implies "plug-in"-type functionality; whatever that module is, it can be communicated with in such a lightweight way that the module can be removed and/or replaced with minimal effect on other parts of the system. The electrical system of a house is highly modular; you can plug anything with a 120V15A plug into any 120V15A receptacle and expect the thing you're plugging in to work. The house wiring couldn't care less what's plugged in where, provided the power demands in any single branch of the system don't exceed safe limits.
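In code terms, the distinction might look something like the following C++ sketch (the names are invented for illustration and are not from the original answer):

```cpp
#include <functional>
#include <string>

// A component: one small, general-purpose part with a rigid contract,
// comparable to the engine block -- callers depend on its exact interface.
class Timer {
public:
    void start(int milliseconds, std::function<void()> onElapsed);
    void stop();
};

// A module: a larger, swappable slice of the system behind a narrow,
// plug-in style boundary -- comparable to something plugged into a socket.
namespace accounts_receivable {
    struct Invoice { std::string customer; double amount; };

    // the rest of the system only talks to the module through this facade
    void postInvoice(const Invoice& invoice);
    double outstandingBalance(const std::string& customer);
}
```

Replacing the Timer means matching its exact interface, while the accounts_receivable module can be swapped for a different implementation as long as the narrow facade keeps working.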
{ "source": [ "https://softwareengineering.stackexchange.com/questions/178927", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/73457/" ] }
178,941
On the Wikipedia page for Windows, it states that Windows is written in assembly for the bootloader and task switcher, and in C and C++ for kernel routines. IIRC, you can call C++ functions from an extern "C" block. I can understand using C for the kernel functions so that pure C apps can use them (like printf and such), but if they can just be wrapped in an extern "C" block, then why code in C at all?
It's mostly for historical reasons. Some parts of the Windows kernel were originally written in C because, back in 1983, over three decades ago when Windows 1.0 was being built, C++ had barely been released. Now these C libraries will stay there "forever", because Microsoft made backward compatibility a selling point, and rewriting a bug-compatible version of the C parts in C++ would require an awful lot of effort for no effective benefit.
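For reference, the mechanism the question alludes to looks roughly like this generic sketch (not actual Windows code): a C++ implementation can be hidden behind a C-callable boundary with extern "C", so C callers never see the C++ parts.

```cpp
// widget.h -- usable from both C and C++ translation units
#ifdef __cplusplus
extern "C" {
#endif

typedef struct widget widget;          // opaque handle for C callers

widget* widget_create(void);
int     widget_frobnicate(widget* w);  // plain C linkage, no name mangling
void    widget_destroy(widget* w);

#ifdef __cplusplus
}
#endif

// widget.cpp -- the implementation is free to use C++ internally
#include "widget.h"
#include <vector>

struct widget {
    std::vector<int> state;            // C++ details hidden from C callers
};

widget* widget_create(void)      { return new widget(); }
int widget_frobnicate(widget* w) { w->state.push_back(1); return static_cast<int>(w->state.size()); }
void widget_destroy(widget* w)   { delete w; }
```

The point of the answer is that the existing, battle-tested C code already provides exactly such a boundary, so rewriting it in C++ buys little.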
{ "source": [ "https://softwareengineering.stackexchange.com/questions/178941", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/54480/" ] }
178,949
We are on the verge of a conversion. For years, our company supported only IE for its internal (intranet) home-built tools. A few of our users are still on XP, which means IE only goes up to 8... so a heavily JS/jQuery site won't even load! We have been in the process of converting to Chrome instead, to make use of its JavaScript performance. But it has now been suggested that we support all common browsers internally for these tools, which means more development time to scale back some of these new applications, more time to test in all browsers, and we are already understaffed. Are there any good informational sites/posts out there that already make this argument?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/178949", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/74768/" ] }
179,096
I've been taking a look at Clojure lately and I stumbled upon this post on Stack Overflow that points to some projects following best practices and, overall, good Clojure code. I wanted to get my head around the language after reading some basic tutorials, so I took a look at some "real-world" projects. After looking at ClojureScript and Compojure (two of the aforementioned "good" projects), I just feel like Clojure is a joke. I don't understand why someone would pick Clojure over, say, Ruby or Python, two languages that I love, that have such a clean syntax, and that are very easy to pick up, whereas Clojure uses so many parentheses and symbols everywhere that it ruins the readability for me. I think that Ruby and Python are beautiful, readable and elegant. They are easy to read even for someone who does not know the language inside out. However, Clojure is opaque to me and I feel like I must know every tiny detail about the language implementation in order to be able to understand any code. So please, enlighten me! What is so good about Clojure? What is the absolute minimum that I should know about the language in order to appreciate it?
For the background you gave, if I may paraphrase: You're familiar with Ruby/Python. You don't see the advantages of Clojure yet. You don't find either Lisp or Clojure syntax clear. ...I think the best answer is to read the book Clojure Programming by Emerick, Carper and Grand. The book has numerous explicit code comparisons with Python, Ruby, and Java, and has text explanations addressing coders from those languages. Personally, I came to Clojure after building good-sized projects w/ Python, and having some Lisp experience; reading that book helped convince me to start using Clojure not just in side-projects but for professional ends. To address your two questions directly: What's so good about Clojure? Plenty of answers on this site and elsewhere, e.g. see https://www.quora.com/Why-would-someone-learn-Clojure What is the absolute minimum that I should know about the language in order to appreciate it? I'd suggest knowing the big ideas behind Clojure's design, as articulated in both Clojure Programming and The Joy of Clojure books-wise, and in Rich Hickey's talks, esp. the talk Simple Made Easy . Once you know the what/why you can then start understanding the how when reading Clojure code, esp. how to change your thinking from classes, objects, state/mutation to "just functions and data" (higher-order functions, maps/sets/sequences, types). Additional suggestions: Lisp's elegance and power is partly from its minimalist and utterly consistent syntax. It's much easier to appreciate that with a good editor, e.g. Emacs with clojure-mode and ParEdit. As you get more familiar with it, the syntax fades away and you'll "see" semantics, intentions, and concise abstractions. Secondly, don't start by reading the source for ClojureScript or Compojure, those are too-much-at-once; try some 4clojure.org problems, and compare solutions with the top coders there. If you see 4-6 other solutions, invariably someone will have written a truly idiomatic, succinct FP-style solution which you can compare with a clumsy, verbose, and needlessly complicated imperative-style solution.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/179096", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/38148/" ] }
179,253
I wrote an open source library that parses structured data but intentionally left out carriage-return detection because I don't see the point; it adds complexity and overhead for little or no benefit. To my surprise, a user submitted a bug where the parser wasn't working, and I discovered that the cause of the issue was that the data used CR line endings as opposed to LF or CRLF. Hasn't OS X been using LF-style line endings since switching over to a Unix-based platform? I know there are applications like Notepad++ where line endings can be changed to use CR explicitly, but I don't see why anybody would want to. Is it safe to exclude support for the statistically insignificant percentage of users who decide (for whatever reason) to use the old Mac OS-style line endings? Update: To clarify, supporting Windows line endings (i.e. CRLF) doesn't require CR token recognition. For efficiency purposes the lexer matches on a per-character basis. By silently ignoring CR characters, the CRLF token simplifies to LF. As such, the CRLF token itself could be considered an anachronism all its own, but that's not what this question is about. The last OS that provided system-wide support for CR-style line endings was Mac OS 9. Ironically, the only application that still uses it as the default on OS X is Microsoft Excel.
There is a good practice where you are "liberal in what you accept, and conservative in what you send". In other words, if there is a chance (however small it may be) that someone will give you a CR line ending (and expect it to work correctly), you'll need to support it. TBH, I can't see how adding CR support would take all that long. When you see a CR in the lexer, peek at the next character: if it is an LF, swallow it and emit a newline token; if it isn't an LF, just emit a newline token and continue.
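Sketched in code (a hypothetical lexer routine, not the asker's actual library), the suggestion amounts to only a few extra lines:

```cpp
#include <istream>

// Returns true if a line ending (LF, CRLF, or bare CR) was consumed.
// A minimal sketch of the "peek after CR" rule described above.
bool consumeLineEnding(std::istream& in) {
    int c = in.peek();
    if (c == '\n') {            // plain LF
        in.get();
        return true;
    }
    if (c == '\r') {            // CR: maybe CRLF, maybe old Mac-style CR
        in.get();
        if (in.peek() == '\n')  // swallow the LF of a CRLF pair
            in.get();
        return true;            // either way, emit exactly one newline token
    }
    return false;
}
```

The same rule also covers CRLF for free, since the LF following a CR is swallowed as part of the same line ending.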
{ "source": [ "https://softwareengineering.stackexchange.com/questions/179253", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/1256/" ] }
179,261
I work on a few high traffic websites that all share the same database and that are all heavily database driven. Our SQL server is max-ed out and, although we have already implemented many changes that have helped but the server is still working too hard. We employ some caching in our website but the type of queries we use negate using SQL dependency caching. We tried SQL replication to try and kind of load balance but that didn't prove very successful because the replication process is quite demanding on the servers too and it needed to be done frequently as it is important that data is up to date. We do use a Varnish web caching server (Linux based) to take a bit of the load off both the web and database server but as a lot of the sites are customised based on the user we can only do so much. Anyway, the reason for this question... Varnish gave me an idea for a possible application that might help in this situation. Just like Varnish sits between a web browser and the web server and caches response from the web server, I was wondering about the possibility of creating something that sits between the web server and the database server. Imagine that all SQL queries go through this SQL caching server. If it's a first time query then it will get recorded, and the result requested from the SQL server and stored locally on the cache server. If it's a repeat request within a set time then the result gets retrieved from the local copy without the query being sent to the SQL server. The caching server could also take advantage of SQL dependency caching notifications. This seems like a good idea in theory. There's still the same amount of data moving back and forward from the web server, but the SQL server is relieved of the work of processing the repeat queries. I wonder about how difficult it would be to build a service that sort of emulates requests and responses from SQL server, whether SQL server's own caching is doing enough of this already that this wouldn't be a benefit, or even if someone has done this before and I haven't found it? I would welcome any feedback or any references to any relevant projects.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/179261", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/65565/" ] }
179,269
Consider the following enum and switch statement:

typedef enum { MaskValueUno, MaskValueDos } testingMask;

void myFunction(testingMask theMask) {
    switch (theMask) {
        case MaskValueUno: {} // deal with it
        case MaskValueDos: {} // deal with it
        default: {}           // deal with an unexpected or uninitialized value
    }
}

I'm an Objective-C programmer, but I've written this in pure C for a wider audience. Clang/LLVM 4.1 with -Weverything warns me at the default line:

Default label in switch which covers all enumeration values

Now, I can sort of see why this is there: in a perfect world, the only values entering in the argument theMask would be in the enum, so no default is necessary. But what if some hack comes along and throws an uninitialized int into my beautiful function? My function will be provided as a drop-in library, and I have no control over what could go in there. Using default is a very neat way of handling this. Why do the LLVM gods deem this behaviour unworthy of their infernal device? Should I be preceding this with an if statement to check the argument?
Here's a version that suffers from neither the problem clang is reporting nor the one you're guarding against:

#include <assert.h>

void myFunction(testingMask theMask) {
    assert(theMask == MaskValueUno || theMask == MaskValueDos);
    switch (theMask) {
        case MaskValueUno: {} // deal with it
        case MaskValueDos: {} // deal with it
    }
}

Killian has already explained why clang emits the warning: if you extended the enum, you'd fall into the default case, which probably isn't what you want. The correct thing to do is to remove the default case and get warnings for unhandled conditions. Now you're concerned that someone could call your function with a value that's outside the enumeration. That sounds like failing to meet the function's precondition: it's documented to expect a value from the testingMask enumeration, but the programmer has passed something else. So make that a programmer error using assert() (or NSCAssert(), as you said you're using Objective-C). Make your program crash with a message explaining that the programmer is doing it wrong, if the programmer does it wrong.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/179269", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/56221/" ] }
179,275
Since my graduation (late 2005) I have been working for the same company as a C++ software engineer. A year ago I was promoted to software architect, but I have found myself involved more and more in qualification, bug fixing, and level 2 support. 50% of my time is spent in Notepad++ analysing the software logs and trying to figure out what went wrong, 30% fixing other people's bugs, and the remainder (if any) reviewing developers' spaghetti code. I have started hating this product and thinking about an exit strategy out of this company. What do you think I can do in this situation? Do you other software architects still fix bugs in the code?
Most people generally agree that a Software Architect should mostly be involved in high-level design, setting standards, choosing tools or frameworks, evaluating products, implementing prototypes and proofs of concept, and training and mentoring developers. The reality, however, is that the title can often be a political appointment for a developer, a special title given to the lead developers of projects, or even something as simple as a management-HR workaround to hire a badly needed developer at a salary or rate that HR or upper management would find unacceptable for the Software Developer or Software Engineer title. In other words, titles are mostly meaningless.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/179275", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/40718/" ] }
179,285
I've noticed that in JavaScript, when creating a Date, months are zero-based, and days aren't. For example, var foo = new Date(2012, 1, 1) produces February 1st 2012. Why is this?
Most likely the idea is that the months are thought of as an index into an array of month names, while the days are simply "counted".
{ "source": [ "https://softwareengineering.stackexchange.com/questions/179285", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/62300/" ] }
179,390
Scala has no static keyword, but instead provides similar functionality through companion objects. Behind the scenes, companion objects are compiled to classes that have static methods, so all this is syntactic sugar. What are the advantages of this design choice? Disadvantages? Do other languages have similar constructs?
Here are a few reasons, which might be more or less compelling for you, depending on your own preferences:

- Do not simply discount it for being "syntactic sugar". While you may say that something is just syntactic sugar, it is after all the sugar that sweetens your life - as a programmer just as much as a coffee or tea drinker.
- Singletons: every Scala object is inherently a singleton. Considering that in the Java world people implement singletons in all sorts of different ways and more often than not end up making some mistake in their implementation, you cannot make a mistake that simple in Scala. Writing object instead of class makes it a singleton and you're done.
- Access to static methods: static methods in Java can be accessed from objects. For example, suppose you have a class C with a static method f and an object c of type C. Then you should call C.f, but Java allows you (albeit with a warning) to use c.f, which, coming from a Scala background, doesn't really make any sense, because objects do not really have a method f.
- Clear separation: in Java you can mix static and non-static attributes and methods in a class. If you work in a disciplined way, this doesn't become a problem; however, if you (or someone else for that matter) do not, then you end up with static and non-static parts interleaved, and it is hard to tell at a quick glance what's static and what's not. In Scala, everything located inside the companion object is clearly not part of the corresponding class's runtime objects, but is available from a static context. Vice versa, if it is written inside the class, it is available to instances of that class, but not from a static context. This becomes especially burdensome in Java once you start adding static and non-static initializer blocks to your class; it can end up being very hard to comprehend in terms of dynamic execution order. It's a lot clearer in Scala, where you initialize the companion object from top to bottom and then do the same for the class when a runtime object is created.
- Less code: you don't need to add the word static to each and every attribute or method in an object, thus keeping the code more concise (indeed, not a prominent advantage really).

Disadvantages are much harder to find. One might argue that the static and non-static parts belong together but are separated by the Scala concept of companion objects. For example, it may appear strange to have one class in a diagram, but then end up having to create two things in the code and work out which attribute goes where.
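As a small illustration of the points above, here is a minimal, made-up sketch of a class and its companion object in Scala (the Temperature example and all of its members are invented for illustration, not taken from any real codebase):

class Temperature private (val celsius: Double) {   // instance (non-static) side
  def toFahrenheit: Double = celsius * 9.0 / 5.0 + 32.0
}

object Temperature {                                 // companion: the "static" side, and itself a singleton
  val AbsoluteZero: Double = -273.15                 // what would be a static constant in Java

  // What would be a static factory method in Java; the companion may call the private constructor.
  def fromCelsius(value: Double): Option[Temperature] =
    if (value >= AbsoluteZero) Some(new Temperature(value)) else None
}

Here Temperature.fromCelsius(20.0) goes through the companion (the "static" side), while toFahrenheit needs an instance; trying to call fromCelsius on an instance simply does not compile, which is exactly the "access to static methods" distinction described above.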
{ "source": [ "https://softwareengineering.stackexchange.com/questions/179390", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/36383/" ] }
179,468
I have previously forked other people's repos on GitHub, and I have noticed that issues stay with the original repo, and that I can't file issues on the forked repo. I now have the following task. I am working for a small business where development was being done by one of the principals on his personal account. He has amicably left the project, and we would like to migrate that project away from his personal account to a new "role" account on GitHub. I would naturally fork the repo, in order to preserve the code history, but then I'll end up with a repo where we can't file new issues, which is quite undesirable. How can I make a copy of this original repo into our new account, ideally still preserving code history, but be able to file new issues within this new account?
After a quick test, it is possible to attach an issue to your own fork of a repo. Here is what I did:

1. Fork a repo.
2. Go to the Settings page of your fork.
3. Check the box next to Issues.

You can now file issues on your own fork and they will not be placed in the main repo.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/179468", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/38100/" ] }