source_id | question | response | metadata
---|---|---|---|
211,962 | I'm just wondering if anyone knows of some well-designed .NET open-source applications using WPF? I have already searched the usual sites like GitHub and SourceForge, but I wasn't very satisfied with the results. I'm mainly interested in the correct use of MVVM, data binding, etc. in the context of relatively large software (at least more than the few lines of sample code you will find in most tutorials and books). ORM with NHibernate is also a main concern of mine. I also have the overall impression that WPF is not used very extensively at all; am I right about that?
Which closed-source (commercial) software products out there are using it? | In the context of the testing tools you mentioned, such as PHPUnit and Fitnesse, this term definitely refers to the notion of a test fixture: something used to consistently test some item, device, or piece of software... In software, a test fixture refers to the fixed state used as a baseline for running tests in software testing. The purpose of a test fixture is to ensure that there is a well-known and fixed environment in which tests are run so that results are repeatable. Some people call this the test context. Examples of fixtures: loading a database with a specific, known set of data; erasing a hard disk and installing a known clean operating system installation; copying a specific known set of files; preparation of input data and set-up/creation of fake or mock objects... Use of fixtures: some advantages of fixtures include separation of the test initialization (and destruction) from the testing, reusing a known state for more than one test, and the special assumption by the testing framework that the fixture set-up works... | {
"source": [
"https://softwareengineering.stackexchange.com/questions/211962",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/102647/"
]
} |
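A minimal JUnit 4 sketch of the fixture idea described in the answer above: the @Before method establishes the same known state before every test so results are repeatable. The class and data names are hypothetical, not taken from the answer.

```java
import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.Before;
import org.junit.Test;

public class ShoppingCartTest {

    // The fixture: a cart pre-loaded with a known set of items before every test.
    private List<String> cart;

    @Before
    public void setUp() {
        // Runs before each @Test method, so every test starts from the same known state.
        cart = new ArrayList<>();
        cart.add("apple");
        cart.add("banana");
    }

    @Test
    public void removingAnItemShrinksTheCart() {
        cart.remove("apple");
        assertEquals(1, cart.size());
    }

    @Test
    public void eachTestSeesAFreshFixture() {
        // The previous test's removal does not leak into this one.
        assertEquals(2, cart.size());
    }
}
```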
212,128 | This question on SO talks about correcting what the OP thought was feature-envy code. Another example where I saw this nifty phrase being quoted is in a recently given answer here on Programmers.SE. Although I did drop in a comment on that answer asking for the information, I thought it would be of general help to programmers following Q&A to understand what is meant by the term feature envy. Please feel free to edit in additional tags if you think it appropriate. | Feature envy is a term used to describe a situation in which one object gets at the fields of another object in order to perform some sort of computation or make a decision, rather than asking the object to do the computation itself. As a trivial example, consider a class representing a rectangle. The user of the rectangle may need to know its area. The programmer could expose width and height fields and then do the computation outside of the Rectangle class. Alternatively, Rectangle could keep the width and height fields private and provide a getArea method. This is arguably a better approach. The problem with the first situation, and the reason it is considered a code smell, is that it breaks encapsulation. As a rule of thumb, whenever you find yourself making extensive use of fields from another class to perform any sort of logic or computation, consider moving that logic to a method on the class itself. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/212128",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/60189/"
]
} |
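A short Java sketch of the Rectangle example described in prose in the answer above, contrasting the feature-envy version with the encapsulated one. The class names are illustrative.

```java
// Feature envy: the caller reaches into the rectangle's data to do the rectangle's job.
class ExposedRectangle {
    public double width;
    public double height;
}

class AreaReport {
    double area(ExposedRectangle r) {
        return r.width * r.height; // logic that belongs to the rectangle lives elsewhere
    }
}

// Encapsulated alternative: the computation moves onto the class that owns the data.
class Rectangle {
    private final double width;
    private final double height;

    Rectangle(double width, double height) {
        this.width = width;
        this.height = height;
    }

    double getArea() {
        return width * height;
    }
}
```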
212,300 | I saw this nice image here. I learned that all the compilers that support .NET languages convert the source code to the CIL format. Yet Microsoft has never brought .NET to every operating system by writing a CLR for all of them. Then why keep such an intermediate code format and a CLR to execute that CIL? Isn't that a headache to deal with? Why did Microsoft choose to be like this? EDIT: This kind of architecture has its price. It will reduce performance, won't it? Java does this to maintain platform independence; for what reason does .NET do it? Why not keep a simple, plain C-like compiler? In any case, adding a new language will still require a compiler to convert the code to CIL; the only difference it would make is the target language. That's all. | Because they only need to write one compiler from C# to CIL - which is the hard part. Making an interpreter (or, more often, a just-in-time compiler) for the CIL per platform is relatively easy compared to writing a compiler from C# to per-platform executable code. Beyond that, the runtime can handle anything that compiles to CIL. If you want a new language (like F#), you only have to write one compiler for it and you auto-magically get all the platform support for the things .NET supports. Oh, and I can take a .NET DLL and run it on Windows or on Linux via Mono without recompilation (assuming all of my dependencies are satisfied). As for performance, it's debatable. There are, essentially, "pre-compilers" that take the CIL and make native binaries. Others argue that just-in-time compilers can make optimizations that static compilers simply cannot. In my experience, it depends a lot on what your application is doing and what platform you're running it on (mostly how good the JITer is on that platform). It's extremely rare for me to run into a scenario where .NET wasn't good enough. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/212300",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/94067/"
]
} |
212,515 | I found this quote in " The Joy of Clojure " on p. 32, but someone said the same thing to me over dinner last week and I've heard it other places as well: [A] downside to object-oriented programming is the tight
coupling between function and data. I understand why unnecessary coupling is bad in an application. Also I'm comfortable saying that mutable state and inheritance should be avoided, even in Object-Oriented Programming. But I fail to see why sticking functions on classes is inherently bad. I mean, adding a function to a class seems like tagging a mail in Gmail, or sticking a file in a folder. It's an organizational technique that helps you find it again. You pick some criteria, then put like things together. Before OOP, our programs were pretty much big bags of methods in files. I mean, you have to put functions somewhere. Why not organize them? If this is a veiled attack on types, why don't they just say that restricting the type of input and output to a function is wrong? I'm not sure whether I could agree with that, but at least I'm familiar with arguments pro and con type safety. This sounds to me like a mostly separate concern. Sure, sometimes people get it wrong and put functionality on the wrong class. But compared to other mistakes, this seems like a very minor inconvenience. So, Clojure has namespaces. How is sticking a function on a class in OOP different from sticking a function in a namespace in Clojure and why is it so bad? Remember, functions in a class don't necessarily operate just on members of that class. Look at java.lang.StringBuilder - it operates on any reference type, or through auto-boxing, on any type at all. P.S. This quote references a book which I have not read: Multiparadigm Programming in Leda: Timothy Budd, 1995 . | In theory, loose function-data coupling makes it easier to add more functions to work on the same data. The down side is it makes it more difficult to change the data structure itself, which is why in practice, well-designed functional code and well-designed OOP code have very similar levels of coupling. Take a directed acyclic graph (DAG) as an example data structure. In functional programming, you still need some abstraction to avoid repeating yourself, so you're going to make a module with functions to add and delete nodes and edges, find nodes reachable from a given node, create a topological sorting, etc. Those functions are effectively tightly coupled to the data, even though the compiler doesn't enforce it. You can add a node the hard way, but why would you want to? Cohesiveness within one module prevents tight coupling throughout the system. Conversely on the OOP side, any functions other than the basic DAG operations are going to be done in separate "view" classes, with the DAG object passed in as a parameter. It's just as easy to add as many views as you want that operate on the DAG data, creating the same level of function-data decoupling as you would find in the functional program. The compiler won't keep you from cramming everything into one class, but your colleagues will. Changing programming paradigms doesn't change best practices of abstraction, cohesion, and coupling, it just changes which practices the compiler helps you enforce. In functional programming, when you want function-data coupling it's enforced by gentlemen's agreement rather than the compiler. In OOP, the model-view separation is enforced by gentlemen's agreement rather than the compiler. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/212515",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/62323/"
]
} |
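A small Java sketch of the "view class" arrangement described in the answer above: the graph type owns only its basic operations, and any further logic (here, reachability) lives in a separate class that takes the graph as a parameter. The Dag and Reachability classes are deliberately minimal stand-ins, not code from the answer.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Owns the data and the basic operations only.
class Dag {
    private final Map<String, Set<String>> edges = new HashMap<>();

    void addNode(String node) {
        edges.putIfAbsent(node, new HashSet<>());
    }

    void addEdge(String from, String to) {
        addNode(from);
        addNode(to);
        edges.get(from).add(to);
    }

    Set<String> successors(String node) {
        return edges.getOrDefault(node, Set.of());
    }
}

// A "view": extra logic kept outside the data structure, with the DAG passed in.
class Reachability {
    Set<String> reachableFrom(Dag dag, String start) {
        Set<String> seen = new HashSet<>();
        Deque<String> todo = new ArrayDeque<>();
        todo.push(start);
        while (!todo.isEmpty()) {
            String node = todo.pop();
            if (seen.add(node)) {
                dag.successors(node).forEach(todo::push);
            }
        }
        return seen;
    }
}
```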
212,638 | For instance, a class like: class Dog { } //never mind that there's nothing in it... and then a property like: Dog Dog { get; set; } I've been told that if I can't come up with a more imaginative name for it, then I must use: Dog DogObject { get; set; } Any thoughts on how to name these better? | It could be a bad practice, were it not for the fact that it's already pretty obvious what you're talking about in your code, based on context. Dog = new Dog(); Which is the type constructor? Which is the object? Not confused? OK, how about Dog = Dog.Create(); Which is the object? Which is the static factory method on the type? Still not confused? I didn't think so. The only time I've seen this be a potential problem is when the namespace tree gets fairly elaborate, and the compiler can't figure out the ambiguity, in which case you wind up with something like Dog = new Some.Namespace.Dog(); In any case, this should only happen with Automatic Properties (and perhaps enums), since local variable names are always camelCased, avoiding the ambiguity entirely. dog = new Dog(); | {
"source": [
"https://softwareengineering.stackexchange.com/questions/212638",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/50368/"
]
} |
212,678 | Most unit testing tutorials/examples out there usually involve defining the data to be tested for each individual test. I guess this is part of the "everything should be tested in isolation" theory. However I've found that when dealing with multitier applications with a lot of DI , the code required for setting up each test gets very long winded. Instead I've built a number of testbase classes which I can now inherit which has a lot of test scaffolding pre-built. As part of this, I'm also building fake datasets which represent the database of a running application, albeit with usually only one or two rows in each "table". Is it an accepted practice to predefine, if not all, then the majority of the test data across all the unit tests? Update From the comments below it does feel like I'm doing more integration than unit testing. My current project is ASP.NET MVC, using Unit of Work over Entity Framework Code First, and Moq for testing. I've mocked the UoW, and the repositories, but I'm using the real business logic classes, and testing the controller actions. The tests will often check that the UoW has been committed, e.g: [TestClass]
public class SetupControllerTests : SetupControllerTestBase {
[TestMethod]
public void UserInvite_ExistingUser_DoesntInsertNewUser() {
// Arrange
var model = new Mandy.App.Models.Setup.UserInvite() {
Email = userData.First().Email
};
// Act
setupController.UserInvite(model);
// Assert
mockUserSet.Verify(m => m.Add(It.IsAny<UserProfile>()), Times.Never);
mockUnitOfWork.Verify(m => m.Commit(), Times.Once);
}
} SetupControllerTestBase is building the mock UoW, and instantiating the userLogic . A lot of the tests require having an existing user or product in the database, so I've pre-populated what the mock UoW returns, in this example userData , which is just an IList<User> with a single user record. | Ultimately, you want to write as little code as possible to get as much outcome as possible. Having a lot of the same code in multiple tests a) tends to result in copy-paste coding and b) means that if a method signature changes you can end up having to fix a lot of broken tests. I use the approach of having standard TestHelper classes that provide me with a lot of the data types that I routinely use, so I can create sets of standard entity or DTO classes for my tests to query and know exactly what I will get each time. So I can call TestHelper.GetFooRange( 0, 100 ) to get a range of 100 Foo objects with all their dependent classes/fields set. Particularly where there are complex relationships configured in an ORM type system which need to be present for things to run correctly, but aren't necessarily significant for this test that can save a lot of time. In situations where I'm testing close to the data level, I sometimes create a test version of my repository class that can be queried in a similar way (again this is in an ORM type environment, and it wouldn't be relevant against a real database), because mocking out the exact responses to queries is a lot of work and often only provides minor benefits. There are some things to be careful of, though in unit tests: Make sure your mocks are mocks . The classes that perform operations around the class being tested must be mock objects if you are doing unit testing. Your DTO/entity type classes can be the real thing, but if classes are performing operations you need to be mocking them- otherwise when the supporting code changes and your tests start failing, you have to search for a lot longer to figure out which change actually caused the problem. Make sure you are testing your classes . Sometimes if one looks through a suite of unit tests it becomes apparent that half of the tests are actually testing the mocking framework more than the actual code that they are supposed to be testing. Don't reuse mock/supporting objects This is a biggie- when one starts trying to get clever with code supporting unit tests it is really easy to inadvertently create objects that persist between tests, which can have unpredictable effects. For example, yesterday I had a test that passed when run on its own, passed when all tests in the class were run, but failed when the entire test suite was run. It turned out there was a sneaky static object way off in a test helper that, when I created it, would definitely never have caused a problem. Just remember: At the start of the test, everything is created, at the end of the test everything is destroyed. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/212678",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/15172/"
]
} |
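A minimal Java sketch of the shared test-helper idea from the answer above: a factory that hands each test a fresh, predictable range of objects rather than sharing instances between tests. Foo and the factory method are hypothetical names (the answer's own example is C#-flavoured, TestHelper.GetFooRange).

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical entity used only to illustrate the helper.
class Foo {
    final int id;
    final String name;

    Foo(int id, String name) {
        this.id = id;
        this.name = name;
    }
}

final class TestHelper {
    private TestHelper() {
    }

    // Builds a brand-new list on every call, so no state leaks between tests.
    static List<Foo> getFooRange(int from, int count) {
        List<Foo> foos = new ArrayList<>();
        for (int i = 0; i < count; i++) {
            foos.add(new Foo(from + i, "foo-" + (from + i)));
        }
        return foos;
    }
}
```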
212,720 | From my experience in web development, I know that languages like PHP, Java, Python, etc. are used for back-end development (software running on the server), while JS/HTML/CSS are used on the front end. But I see many companies say that they use, for example, PHP for front-end development and Python for the back-end. Does that mean PHP is the front end for calling other services written in other languages via REST, RPC, etc.? | You have confused the terms "front-end" and "back-end" with "server-side" and "client-side". "Back-end" usually refers to systems which are not directly exposed to the user (database servers, middleware and so on), while "front-end" usually refers to the application (in the case of the Web, this normally means static and dynamic web pages) directly accessed by the client. In a web application, the client (the user's browser) accesses web pages which are stored or dynamically generated "server-side" by "front-end" technologies. Those front-end components may, in turn, pull data or other information from "back-end" components. So a web application written in PHP would be "front-end" but "server-side". However, if the web pages contained any JavaScript to be executed by the user's browser, that JavaScript code would be executed "client-side". Hopefully I have removed some confusion, but now I risk creating some more. First, we have AJAX, which is code (usually JavaScript) executed in the client (so client-side) to create the web pages you see by pulling information from Internet-facing services which do not themselves generate web pages. The services are generating their information server-side on the front-end (since they are public and you can point your browser straight at them if you know the URL). Secondly, JavaScript is not limited to client-side use, of course. It has become increasingly popular as a "server-side" language (see node.js for one example). As such, its most common use is for just the kind of Internet-facing services I described in the preceding paragraph. Things used to be much simpler before Web 2.0. Back then, in the context of web applications, the front end was where web pages were generated, while JavaScript only ran client-side and made minor cosmetic changes to web pages, like highlighting images when you moved the mouse over them. However, that simplicity made people lazy about their definitions. Now the situation is more complex, so it is important to be precise about these terms. (Oh, and if you have to use PHP, please keep it on the front end. It is emphatically not a good back-end technology. And if you ever find anybody creating a browser that executes PHP client-side, shoot them.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/212720",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/21461/"
]
} |
212,822 | I got to see many designs that normalization wasn't the first consideration in decision making phase. In many cases those designs included more than 30 columns, and the main approach was "to put everything in the same place" According to what I remember normalization is one of the first, most important things, so why is it dropped so easily sometimes? Edit: Is it true that good architects and experts choose a denormalized design while non-experienced developers choose the opposite? What are the arguments against starting your design with normalization in mind? | What's interesting about this Q&A thread is that there are actually 3 questions. Everybody has answered a different one, and almost nobody has answered the first one: Why aren't some databases in the wild normalized? Why/when should a normalized database be denormalized ? In what situations is it harmful or unnecessary to normalize in the first place? Alert readers will note that these are very different questions, and I'll try to answer each of them separately while avoiding too much detail. By "too much", I mean that I don't think this is the appropriate context in which to be carrying out an extended debate over the merits of various arguments in favour of or against normalization; I'm simply going to explain what those arguments are, maybe list a few caveats, and save the philosophy for more specific questions, if they ever come up. Also, in this answer I am assuming that "normalization" implies "BCNF, 3NF, or at least 2NF" , since that's the level of normalization that designers generally aim to achieve. It's rarer to see 4NF or 5NF designs; although they certainly aren't impossible goals, they concern themselves with the semantics of relationships rather than just their representation , which requires considerably more knowledge about the domain. So, onward and upward: 1. Why aren't some databases in the wild normalized? The answer to this could be "because they shouldn't be", but making that assumption right off the bat is pretty piss-poor detective work. We wouldn't make very much progress as a society if we always operated on the assumption that whatever is, ought to be. The real reasons that databases don't get normalized in the first place are more complicated. Here are the top 5 that I've come across: The developers who designed it didn't know or didn't understand how to normalize. Strong evidence of this comes in the form of many other accompanying bad design choices, like using varchar columns for everything or having a spaghetti mess of meaningless table and column names . And I assure you, I've seen "real" databases that are every bit as bad as those in the TDWTF articles. The developers who designed it didn't care or were actively against normalization on principle . Note, here I am not talking about instances where a deliberate decision was made not to normalize based on contextual analysis, but rather teams or companies where normalization is more-or-less understood but simply ignored or shunned out of habit. Again, surprisingly common. The software is/was done as a Brownfield project . Many purists ignore this perfectly legitimate business rather than technical reason for not normalizing. Sometimes you don't actually get to design a new database from scratch, you have to bolt on to an existing legacy schema, and attempting to normalize at that point would involve far too much pain. 
3NF wasn't invented until 1971, and some systems - especially financial/accounting systems - have their roots even farther back than that! The database was originally normalized , but an accumulation of small changes over a long period of time and/or a widely distributed team introduced subtle forms of duplication and other violations of whatever normal form was originally in place. In other words, the loss of normalization was accidental , and too little time was spent on refactoring. A deliberate business decision was made not to spend any time on business analysis or database design and just "get it done". This is often a false economy and ultimately becomes a mounting form of technical debt , but is sometimes a rational decision, at least based on information that was known at the time - for example, the database may have been intended as a prototype but ended up being promoted to production use due to time constraints or changes in the business environment. 2. Why/when should a normalized database be denormalized? This discussion often comes up when a database is normalized to start with. Either the performance is poor or there is a lot of duplication in queries (joins), and the team feels, rightly or wrongly, that they've gone as far as they can with the current design. It is important to note that normalization improves performance most of the time, and there are several options to eliminate excess joins when normalization appears to be working against you, many of which are less invasive and risky than simply changing to a denormalized model: Create indexed views that encapsulate the most common problem areas. Modern DBMSes are capable of making them insertable or updatable (e.g. SQL Server INSTEAD OF triggers). This comes at a slight cost to DML statements on the underlying tables/indexes but is generally the first option you should try because it is nearly impossible to screw up and costs almost nothing to maintain. Of course, not every query can be turned into an indexed view - aggregate queries are the most troublesome. Which leads us to the next item... Create denormalized aggregate tables that are automatically updated by triggers. These tables exist in addition to the normalized tables and form a kind of CQRS model. Another CQRS model, more popular these days, is to use pub/sub to update the query models, which gives the benefit of asynchrony, although that may not be suitable in very rare instances where the data cannot be stale. Sometimes, indexed views are not possible, the transaction rates and data volumes are too high to admit triggers with acceptable performance, and the queries must always return realtime data. These situations are rare - I'd hazard a guess that they might apply to things like High-Frequency Trading or law enforcement/intelligence databases - but they can exist. In these cases you really have no option but to denormalize the original tables. 3. In what situations is it harmful or unnecessary to normalize in the first place? There are, in fact, several good examples here: If the database is being used only for reporting/analysis. Typically this implies that there is an additional , normalized database being used for OLTP, which is periodically synchronized to the analysis database through ETL or messaging. When enforcing a normalized model would require an unnecessarily complex analysis of the incoming data. An example of this is might be a system that needs to store phone numbers that are collected from several external systems or database. 
You could denormalize the call code and area code, but you'd have to account for all of the different possible formats, invalid phone numbers, vanity numbers (1-800-GET-STUFF), not to mention different locales. It's usually more trouble than it's worth, and phone numbers are usually just shoved into a single field unless you have a specific business need for the area code on its own. When the relational database is primarily there to provide transactional support for an additional, non-relational database. For example, you might be using the relational database as a message queue, or to track the status of a transaction or saga, when the primary data is being stored in Redis or MongoDB or whatever. In other words, the data is "control data". There's usually no point in normalizing data that isn't actually business data . Service-Oriented Architectures that share a physical database. This is a bit of an odd one, but in a true SOA, you will occasionally need to have data physically duplicated because services are not allowed to directly query each other's data. If they happen to be sharing the same physical database, the data will appear not to be normalized - but generally, the data owned by each individual service is still normalized unless one of the other mitigating factors is in place. For example, a Billing service might own the Bill entity, but the Accounting service needs to receive and store the Bill Date and Amount in order to include it in the revenue for that year. I'm sure there are more reasons that I haven't listed; what I'm getting at, in essence, is that they are quite specific and will be fairly obvious when they come up in practice. OLAP databases are supposed to use star schemas, SOAs are supposed to have some duplication, etc. If you're working with a well-known architecture model that simply doesn't work with normalization, then you don't normalize; generally speaking, the architecture model takes precedence over the data model. And to answer the very last question: Is it true that good architects and experts choose a denormalized design while non-experienced developers choose the opposite? What are the arguments against starting your design with normalization in mind? No, that is complete and utter B.S. It's also B.S. that experts always choose a normalized design. Experts don't just follow a mantra. They research, analyze, discuss, clarify, and iterate, and then they choose whatever approach makes the most sense for their particular situation. The 3NF or BCNF database is usually a good starting point for analysis because it's been tried and proven successful in tens of thousands of projects all over the world, but then again, so has C. That doesn't mean we automatically use C in every new project. Real-world situations may require some modifications to the model or the use of a different model altogether. You don't know until you're in that situation. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/212822",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/102854/"
]
} |
212,834 | User Story captures what the user wants to do with the system at a high level. I understand that the user story would further drive a number low level requirements. Is user story same as high level requirement for the system? | To be honest, after spending close to two years immersed in Agile development, I still think "user story" is just a fancy term for "functional requirement". It's different at a superficial level, e.g. it always takes a certain form ( "as an X, I want Y so that Z..." ), but the key elements - identifying the stakeholder and the rationale - are also inherent in well-written functional requirements. It's just as easy to write a bad user story as it is to write a bad requirement ( "as [our company name], I want [vague feature] so that I can [do something that's self-evidently part of my job, like 'sell more to customers']" ). What user stories almost never capture, in my experience, are non-functional requirements like performance and security. These kinds of requirements are very difficult to write properly and the format of the user story simply isn't very good for capturing them, because they're more about general product quality and mitigating (but not eliminating) risks rather than meeting a specific user's need. So, I really think of user stories as a subset of requirements, with a specific formula, and still use the terms pretty much interchangeably. The one major advantage user stories do have over requirements is that the word "requirement" suggests that a feature is required where it is often just desired . User stories can in theory be prioritized and slotted in for any release, whereas requirements appear to be a prerequisite for every release. Of course, for the aforementioned distinction to matter, your customers and/or senior management must embrace it; it does you no good whatsoever if you have 30 user stories all grouped into a "project" that must all be completed at the same time. You might as well call them "requirements" in that case because they are in fact required. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/212834",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/81149/"
]
} |
212,916 | In his famous The Free Lunch Is Over article from 2005, Herb Sutter predicted a Concurrent Programming Revolution as big as Object-Oriented Revolution. Has this revolution really happend in years 2005 - 2013? Key points in the article: Processor manufacturers have run out of room with most of their traditional approaches to boosting CPU performance. Instead of driving clock speeds ever higher, they are instead turning to hyperthreading and multicore architectures. Applications will increasingly need to be concurrent if they want to fully exploit CPU throughput gains. “Oh, performance doesn’t matter so much, computers just keep getting faster” statement will be wrong. Efficiency and performance optimization will get more, not less, important. Those languages that already lend themselves to heavy optimization will find new life; those that don’t will need to find ways to compete and become more efficient and optimizable. Expect long-term increased demand for performance-oriented languages and systems. Programming languages and systems will increasingly be forced to deal well with concurrency. We desperately need a higher-level programming model for concurrency than languages offer today. | Yes, but it depends. You can’t expect to write nontrivial , high-performance software without both taking advantage of parallel hardware and using concurrency as a program structuring technique. But most software is both trivial and non–performance-critical. A web app isn’t doing much number crunching, and CRUD apps have nothing like the hard timing limits of some simulation and medical software. Game developers in particular need to care about this, because games are the most common type of application with soft realtime requirements. The problem is salient on a mobile phone, where you want to squeeze as much computing and rendering power as possible out of an integrated chip with two CPU cores and a low-power GPU. That’s another reason that so many developers are looking at Haskell and waiting for languages like Rust to mature—we want safety and performance on modern hardware. Since 2005 we have gained new and improved tools such as OpenCL, CUDA, OpenMP, and vector instruction sets for working with concurrency and data parallelism in established languages. However, the relative newcomers are designed from early on to do many more interesting things with concurrency. Haskell’s concurrent runtime allows the language to provide rich support for lightweight parallelism (sparks) and concurrency abstractions (threads, channels, and shared mutable references). Go and Rust also offer lightweight tasks, Go using channels and Rust using message passing. These systems offer memory safety, performant runtimes, and static protection against certain kinds of races. The by-default immutability of Haskell and Rust make concurrency much easier for humans to manage. Erlang was doing this already in the 80s, but the needs of software and our knowledge about how to design programming systems have also improved since—thank goodness. Finally, many existing languages—I won’t name names—are ready to decline as credible choices for writing new software. Their burdens of complexity and poor concurrency abstractions make them unsuitable for the considerations of modern applications. We are simply waiting for mature alternatives. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/212916",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/102859/"
]
} |
212,995 | I have a function that takes in a set of parameters, then applies to them as conditions to an SQL query. However, while I favored a single argument array containing the conditions themselves: function searchQuery($params = array()) {
foreach($params as $param => $value) {
switch ($param) {
case 'name':
$query->where('name', $value);
break;
case 'phone':
$query->join('phone');
$query->where('phone', $value);
break;
}
}
} My colleague preferred listing all the arguments explicitly instead: function searchQuery($name = '', $phone = '') {
if ($name) {
$query->where('name', $value);
}
if ($phone) {
$query->join('phone');
$query->where('phone', $value);
}
} His argument was that by listing the arguments explicitly, the behavior of the function becomes more apparent - as opposed to having to delve into the code to find out what the mysterious argument $param was. My problem was that this gets very verbose when dealing with a lot of arguments, like 10+. Is there any preferred practice? My worst-case scenario would be seeing something like the following: searchQuery('', '', '', '', '', '', '', '', '', '', '', '', 'search_query') | IMHO your colleague is correct for the above example. Your preference might be terse, but its also less readable and therefore less maintainable. Ask the question why bother writing the function in the first place, what does your function 'bring to the table'- I have to understand what it does and how it does it, in great detail, just to use it. With his example, even though I am not a PHP programmer, I can see enough detail in the function declaration that I do not have to concern myself with its implementation. As far as a larger number of arguments, that is normally considered a code smell. Typically the function is trying to do too much? If you do find a real need for a large number of arguments, it is likely they are related in some way and belong together in one or a few structures or classes (maybe even array of related items such as lines in an address). However, passing an unstructured array does nothing to address the code smells. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/212995",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/103666/"
]
} |
213,259 | I see a lot of source code that uses PImpl idiom in C++. I assume Its purpose is to hide the private data/type/implementation, so it can remove dependence, and then reduce compile time and header include issue. But interface/pure-abstract classes in C++ also have this capability, they can also be used to hide data/type/implementation. And to let the caller just see the interface when creating an object, we can declare a factory method in the interface's header. The comparison is: Cost : The interface way cost is lower, because you don't even need to repeat the public wrapper function implementation void Bar::doWork() { return m_impl->doWork(); } , you just need to define the signature in the interface. Well understood : The interface technology is better understood by every C++ developer. Performance : Interface way performance is not worse than PImpl idiom, both requires an extra memory access. I assume the performance is same. Following is the pseudocode to illustrate my question: // Forward declaration can help you avoid include BarImpl header, and those included in BarImpl header.
class BarImpl;
class Bar
{
public:
// public functions
void doWork();
private:
// You don't need to compile Bar.cpp after changing the implementation in BarImpl.cpp
BarImpl* m_impl;
}; The same purpose can be implemented using interface: // Bar.h
class IBar
{
public:
virtual ~IBar(){}
// public functions
virtual void doWork() = 0;
};
// to only expose the interface instead of class name to caller
IBar* createObject(); So what's the point of PImpl? | First, PImpl is usually used for non-polymorphic classes. And when a polymorphic class has PImpl, it usually remains polymorphic, that is still implements interfaces and overrides virtual methods from base class and so on. So simpler implementation of PImpl is not interface, it is a simple class directly containing the members! There are three reasons to use PImpl: Making the binary interface (ABI) independent of the private members. It is possible to update a shared library without recompiling the dependent code, but only as long as the binary interface remains the same. Now almost any change in header, except for adding a non-member function and adding a non-virtual member function, changes the ABI. The PImpl idiom moves definition of the private members into the source and thus decouples the ABI from their definition. See Fragile Binary Interface Problem When a header changes, all sources including it have to be recompiled. And C++ compilation is rather slow. So by moving definitions of the private members into the source, the PImpl idiom reduces the compilation time, as fewer dependencies need to be pulled in the header, and reduces the compilation time after modifications even more as the dependents don't need to be recompiled (ok, this applies to interface+factory function with hidden concrete class too). For many classes in C++ exception safety is an important property. Often you need to compose several classes in one so that if during operation on more than one member throws, none of the members is modified or you have operation that will leave the member in inconsistent state if it throws and you need the containing object to remain consistent. In such case you implement the operation by creating new instance of the PImpl and swap them when the operation succeeds. Actually interface can also be used for implementation hiding only, but has following disadvantages: Adding non-virtual method does not break ABI, but adding a virtual one does. Interfaces therefore don't allow adding methods at all, PImpl does. Inteface can only be used via pointer/reference, so the user has to take care of proper resource management. On the other hand classes using PImpl are still value types and handle the resources internally. Hidden implementation can't be inherited, class with PImpl can. And of course interface won't help with exception safety. You need the indirection inside the class for that. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/213259",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/100543/"
]
} |
213,309 | Perhaps there's a name for what I want, but I'm not aware of it. I need something similar to a LinkedHashMap in Java, but where it returns the 'previous' value if there's no value at the specified key. That is, I have a list of objects stored by an integer key (which is in units of time in my case): ; key->value
10->A
15->B
20->C So, if I were to query for a value for key 0-9, it would return null . The special part is if I queried for something 10 <= i <= 14 it would return A. Or, for i >= 20, it would return C. Is there a data structure for this? | You are looking for a NavigableMap . This is a subtype of SortedMap that also has some functions available besides the nature of the map being sorted. Note that the Navigable map "is intended to supersede the SortedMap interface." ( Java SE 6 Collections Framework Enhancements ). Everything that currently implements SortedMap implements NavigableMap and this is likely to remain true. In particular, the method floorKey(K key) which "returns the greatest key less than or equal to the given key, or null if there is no such key. This is just one of many methods that allow you to get specific keys or submaps of the map. ceiling / floor (the entry that is higher / lower than the parameter) access of keys or map in descending order head / tail (the entries less / greater than a given key) higher / lower (the next key that is higher or lower than parameter) submap (given two keys, return the map that is between the two keys) Java has two implementations of the NavigableMap - the TreeMap and the ConcurrentSkipListMap . If you look at the idea/implementation of a skip list you will see why it would work really well with such a structure and its queries. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/213309",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/55468/"
]
} |
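A short runnable Java sketch of the answer above using TreeMap, which implements NavigableMap; the keys mirror the 10/15/20 example from the question.

```java
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

public class FloorLookupExample {
    public static void main(String[] args) {
        NavigableMap<Integer, String> byTime = new TreeMap<>();
        byTime.put(10, "A");
        byTime.put(15, "B");
        byTime.put(20, "C");

        // floorEntry returns the entry with the greatest key <= the argument, or null.
        System.out.println(valueOrNull(byTime.floorEntry(9)));  // null: no key at or below 9
        System.out.println(valueOrNull(byTime.floorEntry(12))); // A
        System.out.println(valueOrNull(byTime.floorEntry(25))); // C
    }

    private static String valueOrNull(Map.Entry<Integer, String> entry) {
        return entry == null ? null : entry.getValue();
    }
}
```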
213,317 | When planning the architecture for a mid-large scale MVC web application how do you implement the layers to be as decoupled as possible and easy to test? (basically follow best practices) Let's say I'm using code first as my data access. I struggle with what to define "business logic" as, and how it is meant to interact with the data layer. Taking a vehicle sales application as an example, would business logic be classes that performed tasks such as calculating the tax band for given vehicles, comparing mile per gallon statistics etc? As for the business entities (e.g. Cars, Vans, Motorcycles) I would put these in the data layer along with my DataContext class. Also what would constitute application logic as opposed to business - I'm guessing things like session / user input validations? So for example, a car controller might return an action/view result that lists the top ten cars filtered by type and best mpg. So let's say I have an ICarRepository 'carRepo' injected into my controller(using the repository pattern / DI), I filter my cars from an action method parameter e.g. var cars = carRepo.getCarsByType("hatchback"); So I've kept the data access knowledge out of my controller using a repository, now to keep the business logic out of the controller using a domain model - var result = new MpgCalculator(cars); - Let's say I need the calculator class because it needs to perform additional logic to calculate the best fuel efficiency, more than just loading / filtering entities from the DB. So now I have a data set for my view to render that used a repository to retrieve from the data access layer, and domain specific object to process and perform business related tasks on that data. Am I making mistakes here? do we still need to use the repository pattern or can I just code against an interface to decouple the ORM and test? On this topic, as my concrete data access classe(s) dbcontext are in the data layer, should the interface definitions go into the domain/business layer meaning that if the data access technology is ever changed, my other layers aren't effected? From what I have studied thus far my structure looks like this: MVC Internet Application -> The standard internet project - models in here are ViewModels Domain/Business layer -> business specific classes/models that controllers can use to process domain entities from the data layer before passing on to the relevant views Repository abstraction necessary? -> I hear lots of debate on this, especially when using an ORM Data layer -> Entity classes (Car, Van, Motorcycle), DbContext - Concrete data access technology layer | You've got a lot of moving parts in your question, touching on a lot of concepts, but here's my basic advice when it comes to how to think about a mid-to-large scale MVC application: Presentation <---> Business Logic <---> Data Access Firstly, it's best to not think of the the app as "an MVC application". It's an application that uses the MVC pattern as its presentation component. Thinking about it this way will help you separate out your business logic concerns from your presentation concerns. Perhaps it's ok for small applications to pile everything down to database access into the MVC structure, but it'll quickly become untenable for a mid-to-large application. 
MVC (Presentation) In your app, the ASP.NET MVC component should deal with transforming business data for display purposes (Models), displaying the user interface (Views), and communication issues such as routing, authentication, authorization, request validation, response handling, and the like (Controllers). If you have code that does something else, then it doesn't belong in the MVC component . Repository/ORM (Data Access) Also in your app, the data access layer should be concerned with retrieving and storing persistent data. Commonly that's in the form of a relational database, but there are many other ways data can be persisted. If you have code that isn't reading or storing persistent data, then it doesn't belong in the data layer . I shared my thoughts on the ORM/Repository discussion previously on SO, but to recap, I don't consider an ORM to be the same thing as a Repository, for several reasons. Business Logic So now you have your presentation layer (MVC), and your data layer (repository or ORM) ... Everything else is your business logic layer (BLL). All of your code that decides which data to retrieve, or performs complicated calculations, or makes business decisions, should be here. I usually organize my business logic in the form of 'services', which my presentation layer can call upon to do the work requested. All of my domain models exist here. Your Approach This is where your approach breaks down a little for me. You describe your MVC controller as the place you would get data from the repository, and call upon the MPGCalculator to do some work, etc. I would not have my controller do any of this, but instead would delegate all of this off to a service in the BLL. In other words, I would not inject a repository and MPGCalculator into the controller, that's giving the controller too much responsibility (it's already handling all of the controller stuff I mentioned above). Instead, I would have a service in the BLL handle all of that, and pass the results back to the controller. The controller can then transform the results to the correct model, and pass that on to the correct view. The controller doesn't have any business logic in it, and the only things injected into the controller would be the appropriate BLL services. Doing it this way means your business logic (for example, given a set of vehicles, calculate the MPG and sort best to worst ) is independent from presentation and persistence concerns. It'll usually be in a library that doesn't know or care about the data persistence strategy nor the presentation strategy. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/213317",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/102541/"
]
} |
213,403 | I've been working in a relatively complex application with 10's of database tables (Aggregates, Entities/Value Objects) and applying DDD. At this point it appears to be basically DDD-Lite meaning that there are Application/Domain Services, the Domain Model (Entities, Value Objects), and Repositories. I picked up a book Implementing DDD and the first thing he is mentioning is DDD-Lite and Bounded Contexts and Domain Events missing as first mistakes which are usual when beginning DDD. Currently I've tried organizing the Domain Model by Aggregate relationships, and using namespaces to demonstrate it. I'm failing to see the benefit/downfalls relating to separating the Domain Model project into separate Bounded contexts (yet). Perhaps it will become apparent later on but I'd like some real life feedback on Bounded Contexts (and possibly sub domains etc. if they tie into it). | Consider a company that has a few different departments: Software Development HR Accounting Can you come up with a user model that can expressively represent all those areas of business? Think of what the User entity could look like in each one. Perhaps it's split into three different entities: Developer Employee Payee The effort to instantiate a user in each context is considerably different. Perhaps it's something like this: new Employee(ssn, name, joindate, dateofbirth, gender) new Developer(Employee, workstation, credentials) new Payee(Employee, role) excuse the example, it's hard to illustrate accurately without a proper domain model to reference If you used a naive implementation and used a single user entity, it would end up being an anaemic data model full of getters and setters, because you couldn't fully represent the user all over the place. There are clear boundaries in the business, so it's useful to model them that way. A user logging in versus a user in a payroll system versus a user playing a game are all very different, even if they are part of the same grand system. Thinking in another way - you can now create your developer management code to be very lightweight and independent from the rest of your system. It can use more accurate types with less baggage to worry about. It's the step to building smaller subsystems that may eventually be extracted out into its own application. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/213403",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/92210/"
]
} |
213,440 | There's a classic article named On the Criteria To Be Used in Decomposing Systems into Modules that I just read for the first time. It makes perfect sense to me, and is probably one of those articles that OOP was based on. Its conclusion: We have tried to demonstrate by these examples that
it is almost always incorrect to begin the decomposition
of a system into modules on the basis of a flowchart.
... Each module is then designed to hide
such a decision from the others In my uneducated and inexperienced opinion, functional programming takes the exact opposite advice of this article. My understanding is functional programming makes data flow idiomatic. Data gets passed from function to function, each function being intimately aware of the data and "changing it" along the way. And I think I've seen a Rich Hickey talk where he talks about how data hiding is overrated or unnecessary or something, but I can't remember for sure. First I want to know if my assessment is correct. Does the FP paradigm and this article philosophically disagree? Assuming they disagree, how does FP "compensate" for its lack of data hiding? Perhaps they sacrifice data hiding but gain X, Y and Z. I'd like to know the reasoning for why X, Y and Z are considered more beneficial than data hiding. Or, assuming they disagree, perhaps FP feels that data hiding is bad. If so, why does it think data hiding is bad? Assuming they agree, I'd like to know what FPs implementation of data hiding is. It's obvious to see this in OOP. You can have a private field that nobody outside the class can access. There's no obvious analogy of this to me in FP. I feel there are other questions I should be asking but I don't know I should be asking. Feel free to answer those, too. Update I found this Neal Ford talk that has a very relevant slide in it. I'll embed the screenshot here: | The article you mention is about modularity in general, and it would apply equally to structured, functional, and object-oriented programs. I have heard of that article before from someone who was a big OOP guy, but I read it as an article about programming in general, not something OOP specific. There is a famous article about functional programming, Why Functional Programming Matters , and the first sentence of the conclusion states "In this paper, we’ve argued that modularity is the key to successful programming." So the answer to (1) is no. Well designed functions don't assume more about their data than they need to, so the part about "intimately aware of the data" is wrong. (Or at least as wrong as it would be of OOP. You can't program strictly at a high level of abstraction and ignore all details forever in any paradigm. In the end, some part of the program does actually need to know about the specific details of the data.) Data hiding is an OOP specific term, and it isn't exactly the same as the information hiding discussed in the article. Information hiding in the article is about design decisions that were hard to make or are likely to change. Not every design decision about a data format is hard or likely to change, and not every decision that is hard or likely to change is about a data format. Personally, I can't see why OO programmers want everything to be an object. Sometimes, a simple data structure is all you need. Edit:
I found a relevant quote from an interview with Rich Hickey . Fogus: Following that idea—some people are surprised by the fact that Clojure does not engage in data-hiding encapsulation on its types. Why did you decide to forgo data-hiding? Hickey: Let’s be clear that Clojure strongly emphasizes programming to abstractions. At some point though, someone is going to need to have access to the data. And if you have a notion of “private”, you need corresponding notions of privilege and trust. And that adds a whole ton of complexity and little value, creates rigidity in a system, and often forces things to live in places they shouldn’t. This is in addition to the other losing that occurs when simple information is put into classes. To the extent the data is immutable, there is little harm that can come of providing access, other than that someone could come to depend upon something that might change. Well, okay, people do that all the time in real life, and when things change, they adapt. And if they are rational, they know when they make a decision based upon something that can change that they might in the future need to adapt. So, it’s a risk management decision, one I think programmers should be free to make.
If people don’t have the sensibilities to desire to program to abstractions and to be wary of marrying implementation details, then they are never going to be good programmers. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/213440",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/56945/"
]
} |
213,486 | Lately, I've been noticing that a lot of software, be it a website, a client application, or a video game, often write a representation of quantity as follows: "1 result(s)". Now, I can understand why they would do that 20 years ago. But these days, shouldn't we have enough processing power and memory to be able to say "1 result" and "2 results"? Is there some sort of special reason why it's still done this way? Now before you tell me to Google it, I would. But the thing is, I have no idea what search terms to use. So even some suggested search terms would be welcome. | Have you considered localization? It may look simple to write something like: var text = (count > 0) ? "items" : "item" But it's nowhere near that simple when you have to work in multiple languages. Here's an example, just using Google Translate: Language | Singular | Plural
----------------+---------------------------+-----------------
English | 1 item found | 2 items found
French | 1 article trouvé | 2 objets trouvés
Spanish | 1 artículo encontrado | 2 artículos encontrados
German | 1 Artikel gefunden | 2 Artikel gefunden
Hebrew | מצא פריט 1 | 2 פריטים נמצאו
Arabic | وجدت بند 1 | 2 أصناف تم العثور عليها
Korean          | 1 개 항목 발견              | 2 항목 발견
Now, granted, Google Translate may not be doing a perfect job here. But isn't that partly the point? This isn't all that simple. You can't just add an "s" and be done with it. Not in production code, anyway. It's way simpler to just use the plural form with parentheses - you only have one resource to localize that way. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/213486",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/104032/"
]
} |
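To put the table above into code: plural selection is a per-language rule plus a per-language set of templates, not string surgery. The following C# sketch is purely illustrative (the rule set and the hard-coded templates are invented; real projects would normally pull these from resource files or the platform's localization support rather than hand-rolling them):

using System;
using System.Collections.Generic;

public static class Pluralizer
{
    // Each language supplies its own rule mapping a count to a plural-form index.
    private static readonly Dictionary<string, Func<int, int>> Rules =
        new Dictionary<string, Func<int, int>>
        {
            { "en", n => n == 1 ? 0 : 1 },   // English: singular / plural
            { "fr", n => n <= 1 ? 0 : 1 },   // French: 0 and 1 are both singular
            { "ko", n => 0 }                 // Korean: no grammatical plural
            // Arabic would need six forms; it is omitted to keep the sketch short.
        };

    // One localized template per language and form; in real code these come from resources.
    private static readonly Dictionary<string, string[]> Templates =
        new Dictionary<string, string[]>
        {
            { "en", new[] { "{0} item found", "{0} items found" } },
            { "fr", new[] { "{0} article trouvé", "{0} articles trouvés" } },
            { "ko", new[] { "{0}개 항목 발견" } }
        };

    public static string Format(string language, int count)
    {
        int form = Rules[language](count);
        return string.Format(Templates[language][form], count);
    }
}

// Pluralizer.Format("en", 1) -> "1 item found"
// Pluralizer.Format("fr", 0) -> "0 article trouvé"

Even this toy version shows why "just add an s" does not survive contact with French, Korean, or Arabic, and why shipping "1 result(s)" keeps the cost down to a single string per language.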
213,577 | In the debate of Rich vs. Anemic domain models, the internet is full of philosophical advice but short on authoritative examples. The objective of this question is to find definitive guidelines and concrete examples of proper Domain-Driven Design models. (Ideally in C#.) For a real-world example, this implementation of DDD seems to be wrong: The WorkItem domain models below are nothing but property bags, used by Entity Framework for a code-first database. Per Fowler, it is anemic . The WorkItemService layer is apparently a common misperception of Domain Services; it contains all of the behavior / business logic for the WorkItem. Per Yemelyanov and others, it is procedural . (pg. 6) So if the below is wrong, how can I make it right? The behavior, i.e. AddStatusUpdate or Checkout , should belong in the WorkItem class correct? What dependencies should the WorkItem model have? public class WorkItemService : IWorkItemService {
private IUnitOfWorkFactory _unitOfWorkFactory;
//using Unity for dependency injection
public WorkItemService(IUnitOfWorkFactory unitOfWorkFactory) {
_unitOfWorkFactory = unitOfWorkFactory;
}
public void AddStatusUpdate(int workItemId, int statusId) {
using (var unitOfWork = _unitOfWorkFactory.GetUnitOfWork<IWorkItemUnitOfWork>()) {
var workItemRepo = unitOfWork.WorkItemRepository;
var workItemStatusRepo = unitOfWork.WorkItemStatusRepository;
var workItem = workItemRepo.Read(wi => wi.Id == workItemId).FirstOrDefault();
if (workItem == null)
throw new ArgumentException(string.Format(@"The provided WorkItem Id '{0}' is not recognized", workItemId), "workItemId");
var status = workItemStatusRepo.Read(s => s.Id == statusId).FirstOrDefault();
if (status == null)
throw new ArgumentException(string.Format(@"The provided Status Id '{0}' is not recognized", statusId), "statusId");
workItem.StatusHistory.Add(status);
workItemRepo.Update(workItem);
unitOfWork.Save();
}
}
} (This example was simplified to be more readable. The code is definitely still clunky, because it's a confused attempt, but the domain behavior was: update status by adding the new status to the archive history. Ultimately I agree with the other answers, this could just be handled by CRUD.) Update @AlexeyZimarev gave the best answer, a perfect video on the subject in C# by Jimmy Bogard, but it was apparently moved into a comment below because it didn't give enough information beyond the link. I have a rough draft of my notes summarizing the video in my answer below. Please feel free to comment on the answer with any corrections. The video is an hour long but very worth watching. Update - 2 Years Later I think it's a sign of DDD's nascent maturity that even after studying it for 2 years, I still can't promise that I know the "right way" of doing it. Ubiquitous language, aggregate roots, and its approach to behavior-driven design are DDD's valuable contributions to the industry. Persistence ignorance and event sourcing causes confusion, and I think philosophy like that holds it back from wider adoption. But if I had to do this code over again, with what I've learned, I think it would look something like this: I still welcome any answers to this (very active) post that provide any best-practices code for a valid domain model. | The most helpful answer was given by Alexey Zimarev and got at least 7 upvotes before a moderator moved it into a comment below my original question.... His answer: I would recommend you to watch Jimmy Bogard's NDC 2012 session "Crafting Wicked Domain Models" on Vimeo. He explains what rich domain should be and how to implement them in real life by having behaviour in your entities. Examples are very practical and all in C#. http://vimeo.com/43598193 I took some notes to summarize the video for both my team's benefit and to provide a little more immediate detail in this post. (The video is an hour long, but really worth every minute if you have time. Jimmy Bogard deserves a lot of credit for his explanation.) "For most applications... we don't know that they're going to be complex when we start. They just become that way." Complexity grows naturally as code and requirements are added. Applications can start out very simple, as CRUD, but behavior/rules can become baked in. "The nice thing is we don't have to start out complex. We can start with the anemic domain model, that's just property bags, and with just standard refactoring techniques we can move towards a true domain model." Domain models = business objects. Domain behavior = business rules. Behavior is often hidden in an application -- it can be in PageLoad, Button1_Click, or often in helper classes like 'FooManager' or 'FooService'. Business rules that are separate from domain objects "require us to remember" those rules. In my personal example above, one business rule is WorkItem.StatusHistory.Add(). We're not just changing the status, we're archiving it for auditing. Domain behaviors "eliminate bugs in an application a lot more easily than just writing a bunch of tests." Tests require you to know to write those tests. The domain behaviors offer you the right paths to test . Domain services are "helper classes to coordinate activities between different domain model entities." Domain services != domain behavior. Entities have behavior, domain services are just intermediaries between the entities. Domain objects shouldn't have possession of the infrastructure they need (i.e. IOfferCalculatorService). 
The infrastructure service should be passed in to the domain model that uses it. Domain models should offer to tell you what they can do, and they should only be able to do those things. The properties of domain models should be guarded with private setters, so that only the model can set its own properties, through its own behaviors . Otherwise it's "promiscuous." Anemic domain model objects, that are just property bags for an ORM, are only "a thin veneer -- a strongly typed version over the database." "However easy it is to get a database row into an object, that's what we've got." 'Most persistant object models are just that. What differentiates an anemic domain model versus an application that doesn't really have behavior, is if an object has business rules, but those rules are not found in a domain model. ' "For a lot of applications, there's no real need to build any kind of real business application logic layer, it's just something that can talk to the database and perhaps some easy way to represent the data that's in there." So in other words, if all you're doing is CRUD with no special business objects or behavior rules, you don't need DDD. Please feel free to comment with any other points that you feel should be included, or if you think any of these notes are off the mark. Tried to quote directly or paraphrase as much as possible. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/213577",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/104135/"
]
} |
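To ground those notes in the code from the question, here is one possible sketch (not the only valid DDD shape) of the same AddStatusUpdate behavior moved onto the WorkItem entity, with private setters and the archiving rule kept inside the model. The WorkItem, IWorkItemService, IUnitOfWorkFactory and repository names come from the question; everything else is illustrative.

using System;
using System.Collections.Generic;
using System.Linq;

public class WorkItem
{
    private readonly List<WorkItemStatus> _statusHistory = new List<WorkItemStatus>();

    public int Id { get; private set; }
    public WorkItemStatus CurrentStatus { get; private set; }
    public IEnumerable<WorkItemStatus> StatusHistory { get { return _statusHistory; } }

    // The business rule ("a status change is always archived") lives with the data it guards.
    public void AddStatusUpdate(WorkItemStatus newStatus)
    {
        if (newStatus == null)
            throw new ArgumentNullException("newStatus");

        _statusHistory.Add(newStatus);
        CurrentStatus = newStatus;
    }
}

// The application service shrinks to coordination: load, delegate to the entity, save.
public class WorkItemService : IWorkItemService
{
    private readonly IUnitOfWorkFactory _unitOfWorkFactory;

    public WorkItemService(IUnitOfWorkFactory unitOfWorkFactory)
    {
        _unitOfWorkFactory = unitOfWorkFactory;
    }

    public void AddStatusUpdate(int workItemId, int statusId)
    {
        using (var unitOfWork = _unitOfWorkFactory.GetUnitOfWork<IWorkItemUnitOfWork>())
        {
            var workItem = unitOfWork.WorkItemRepository.Read(wi => wi.Id == workItemId).FirstOrDefault();
            if (workItem == null)
                throw new ArgumentException("The provided WorkItem Id is not recognized", "workItemId");

            var status = unitOfWork.WorkItemStatusRepository.Read(s => s.Id == statusId).FirstOrDefault();
            if (status == null)
                throw new ArgumentException("The provided Status Id is not recognized", "statusId");

            workItem.AddStatusUpdate(status);                // domain behavior, not property fiddling
            unitOfWork.WorkItemRepository.Update(workItem);  // may be unnecessary with a change-tracking ORM
            unitOfWork.Save();
        }
    }
}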
213,708 | This has been troubling me for some time, and I'd really appreciate the input of other professionals. Short background: I started programming when my parents bought me my first computer in 1988 (at age 14, I'm 39 now). I followed a couple of other career paths before finally becoming a professional programmer in 1997. Late bloomer, perhaps, but that's how it was. I'm still happy with my choice, I love programming, and I consider myself good at what I do. Lately, I've been noticing that the more experience I gain, the longer it takes me to complete projects, or certain tasks in a project. I'm not going senile yet. It's just that I've seen so many different ways in which things can go wrong. And the potential pitfalls and gotchas that I know about and remember are just getting more and more. Trivial example: it used to be just "okay, write a file here". Now I'm worrying about permissions, locking, concurrency, atomic operations, indirection/frameworks, different file systems, number of files in a directory, predictable temp file names, the quality of randomness in my PRNG, power shortages in the middle of any operation, an understandable API for what I'm doing, proper documentation, etc etc etc. In short, the problems have long since moved from "how do I do this" to "what's the best/safest way of doing it". The upshot is that it takes me longer to finish a project than a novice. My version may be rock solid, and as impenetrable as I know how to make it, but it takes longer. The "create file" example above was just that, an example. Real tasks are obviously more complex, but less suited for a generic question like this one. I hope you understand where I'm going with this. I have no problem coming up with efficient algorithms, I love math, I enjoy complex subjects, I have no difficulties with concentration. I think I do have a problem with experience, and consequently with a fear of errors (intrinsic or extrinsic). I spend almost two hours a day reading up on new developments, new techniques, languages, platforms, security vulnerabilities, and so on. The conundrum is that the more knowledge I gain, the slower I am in completing projects. How do you deal with this? | You're not any slower in completing projects. Previously, you thought your novice projects were done when they really were not. You should sell this quality to clients. "This company might get it done faster and cheaper, but is it really done? Or will you be hunting bugs for years?" Beyond that, you need to know and accept the old idiom: "Perfect is the enemy of good." | {
"source": [
"https://softwareengineering.stackexchange.com/questions/213708",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/104238/"
]
} |
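As a concrete version of the "okay, write a file here" example in the question above, this is one hedged sketch of what the experienced answer to just the atomicity and power-failure worries tends to look like in C#: write to a temporary file in the same directory, flush to disk, then swap it into place. Permissions, locking and concurrent writers still need their own handling; the point is only that the "simple" task really has grown this much.

using System.IO;

public static class SafeFile
{
    public static void WriteAllTextAtomic(string path, string contents)
    {
        // Write to a temp file in the same directory so the final swap stays on one volume.
        string directory = Path.GetDirectoryName(Path.GetFullPath(path));
        string tempPath = Path.Combine(directory, Path.GetRandomFileName());

        using (var stream = new FileStream(tempPath, FileMode.CreateNew, FileAccess.Write))
        using (var writer = new StreamWriter(stream))
        {
            writer.Write(contents);
            writer.Flush();
            stream.Flush(true);   // ask the OS to push the bytes to the physical disk
        }

        if (File.Exists(path))
            File.Replace(tempPath, path, null);   // atomic swap when the target already exists
        else
            File.Move(tempPath, path);
    }
}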
213,763 | I work on a team that does frequent code reviews. But it seems like more of a formality than anything. No one really points out problems in the code for fear of offending other developers. The few times I've tried to ask for changes were met with very defensive and reluctant attitudes. This is of course not good. Not only are we spending the time to code review, but we're getting literally zero value from it. Is this an issue that needs to be addressed by individual developers, or are there techniques for suggesting changes without stepping on other people's toes? | This seems to be a pretty common prevailing attitude among some developers. Everyone seems to feel that a code review is some challenge to their work, and that makes no sense to me. A code review is a quality assurance mechanism that has the added bonus of education to go along with it. We implement code reviews extensively where I work, and I've fostered within my own team the attitude that the code reviews are a collaboration mechanism more than a quality process. The only way to begin coding as a team is to see each other's work and to question it. That's how best practices are formed. Dialog is the key. I've sent code back to developers for silly reasons such as formatting, agreed upon best practices, and spelling. I code review with a very fine edge, and I expect my own code to withstand the same scrutiny. I've used tactics such as checking in code that would not pass my own code review for the sole purpose of getting a junior developer to challenge me . Tell me it doesn't meet the requirements. Stand up and have an opinion. Some guidelines I think everyone should really have when engaged in a code reviewing environment: Get over yourself. You're not perfect, and just because a junior developer managed to see you do something silly in a for loop isn't the end of the world. If you're so great, invite criticism and prove it. Expect code reviews to reveal new ideas on how to do things, new habits, and, most of all, constructive dialog. Keep an open mind when reviewing someone else's code. Be receptive to their ideas, and make sure that changes you suggest are suggested for a reason. It is assumed that the checked-in code builds, but that doesn't mean that it follows best practices. Make sure that anything you bring up can be backed up with a cited reference. If you say it doesn't follow best practices then cite the standards document and section. If you say it isn't a "performant" method, then have a link to a document that shows why and possibly provides metrics. Make suggestions useful, and explain why you're suggesting something. You will occasionally find a problem with code that is self explanatory, but most of the time this person has coded something based on habits. Explain why this habit should be altered and the value of altering it (unless you're explaining it for the 5th time, in which you have a personnel problem). When your code is reviewed, consider everything the reviewer is suggesting objectively. If you're pushing back, ask yourself honestly if you're just being defensive or if you really believe you have a case. If you have a case, continue the dialog. Don't get argumentative, bring ammunition such as facts and metrics. Whether you're a reviewer or a reviewee, use code reviews as an opportunity to educate. Whether it's educating yourself or the person you're reviewing, if there is a discrepancy then there is a chance to learn somewhere. Make good use of it. 
Ask questions and be ready to have your questions answered truthfully. I recently made a statement that was "pseudo-true". It wasn't wrong, but it definitely wasn't right. A junior developer challenged me on it, and I disagreed. My response was "I haven't seen that behavior, but if you can find me a document on it I would love to read it.". I spent about an hour that afternoon reading the document he sent me, and now I have a much better response (re: educated) when confronted with a similar situation. Given the education slant that I put code reviews in, I often make my responses about questions. If I find something blatantly wonky, I will instruct the developer to correct it. Otherwise I will ask the questions. "Why did you use method A to achieve goal B?" "What gain does declaring a variable have in the instance of its usage in method Z?" "I see you have copied/pasted some code, did you consider refactoring? You didn't refactor, what was the reasoning behind that choice?" The code doesn't progress until the reviewer approves it, and the reviewer won't approve it until the questions are answered. When framed in an inquisitive way that indicates you don't understand the developers reasoning it becomes less confrontational and takes on more of an instructional vibe. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/213763",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/66745/"
]
} |
213,891 | I'm really confused when I see a lot of in-memory database implementation used for testing, because I also heard a lot from integration testing best practices that the environment running the test should resemble as closely as possible the production environment, including operating system, library, database engine, etc. What am I missing here? | In a typical software development situation, tests are used at two points: during development, and before moving the product along the development chain. The first situation, running tests during development, serves short-term goals: defining tasks (as in TDD: write a failing test, then make it pass), preventing regressions, making sure your changes don't break anything else, etc. Such tests need to be extremely fast: ideally, your whole test suite runs in less than 5 seconds, and you can just run it in a loop next to your IDE or text editor while you code. Any regression you introduce will pop up within seconds. Speedy test runs are more important in this phase than catching 100% of regressions and bugs, and since it is impractical (or outright impossible) to develop on exact copies of the production systems, the effort required to achieve perfect testing here isn't worth it. Using in-memory databases is a trade-off: they are not exact copies of the production system, but they do help keep test runs below the 5-second limit; if the choice is between a slightly different database setup for my database-related testing, and no testing at all, I know what I pick. The second situation, moving the code along the development chain, however, does require extensive testing. Since we can (and should) automate this part of the development process, we can afford much slower tests - even if a full test run takes hours, scheduling a nightly build still means we always have an accurate picture of yesterday's codebase. Simulating the production environment as accurately as possible is important now, but we can afford it. So we don't make the in-memory-database tradeoff: we install the exact same version of the exact same DBMS as the production systems, and if possible, we fill it with actual production data before the testing begins. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/213891",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/103872/"
]
} |
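One way to picture the trade-off in the answer above is a connection factory that the test suite can swap: an in-memory database for the sub-5-second developer loop, and the production DBMS for the slower scheduled integration run. This C# sketch is illustrative only; the factory names are invented, SQLite via Microsoft.Data.Sqlite is just one possible in-memory choice, and the in-memory engine is explicitly not an exact copy of production, which is the accepted cost.

using System.Data;
using System.Data.SqlClient;
using Microsoft.Data.Sqlite;

public interface IDbConnectionFactory
{
    IDbConnection Open();
}

// Fast developer loop: the database lives and dies with the test and runs in milliseconds.
public class InMemorySqliteConnectionFactory : IDbConnectionFactory
{
    public IDbConnection Open()
    {
        var connection = new SqliteConnection("Data Source=:memory:");
        connection.Open();
        return connection;
    }
}

// Scheduled integration run: same engine, version and data shape as production, however slow.
public class ProductionLikeConnectionFactory : IDbConnectionFactory
{
    private readonly string _connectionString;

    public ProductionLikeConnectionFactory(string connectionString)
    {
        _connectionString = connectionString;
    }

    public IDbConnection Open()
    {
        var connection = new SqlConnection(_connectionString);
        connection.Open();
        return connection;
    }
}

// The code under test depends only on IDbConnectionFactory, so each suite picks its trade-off.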
213,898 | I work for a software product company. We have large enterprise customers who implement our product and we provide support to them. For example, if there is a defect, we provide patches, etc. In other words, It is a fairly typical setup. Recently, a ticket was issued and assigned to me regarding an exception found by a customer in a log file that has to do with concurrent database access in a clustered implementation of our product. So this customer's specific configuration may well be critical in the occurrence of this bug. All we got from the customer was their log file. The approach I proposed to my team was to attempt to reproduce the bug in a configuration setup similar to that of the customer and get a comparable log. However, they disagree with my approach saying that I don't need to reproduce the bug as it's overly time-consuming and will require simulating a server cluster on VMs. My team suggests I simply "follow the code" to see where the thread- and/or transaction-unsafe code is and put in the change working off of a simple local development, which is not a cluster implementation like the environment from which the occurrence of the bug originates. To me, working out of an abstract blueprint (program code) rather than a tangible, visible manifestation (runtime reproduction) seems difficult, so I wanted to ask a general question: Is it reasonable to insist on reproducing every defect and debug it before diagnosing and fixing it? Or: If I am a senior developer, should I be able to read multithreaded code and create a mental picture of what it does in all use case scenarios rather than require to run the application, test different use case scenarios hands-on, and step through the code line by line? Or am I a poor developer for demanding that kind of work environment? Is debugging for sissies? In my opinion, any fix submitted in response to an incident ticket should be tested in an environment simulated to be as close to the original environment as possible. How else can you know that it will really remedy the issue? It is like releasing a new model of a vehicle without crash testing it with a dummy to demonstrate that the air bags indeed work. Last but not least, if you agree with me: How should I talk with my team to convince them that my approach is reasonable, conservative and more bulletproof? | Is it reasonable to insist on reproducing every defect and debug it before diagnosing and fixing it? You should give it your best effort. I know that sometimes there are conditions and environments that are so complex they can't be reproduced exactly , but you should certainly try if you can. If you never reproduced the bug and saw it for yourself, how can you be 100% certain that you really fixed it? Maybe your proposed fix introduces some other subtle bug that won't manifest unless you actually try to reproduce the original defect. If I am a senior developer, should I be able to read (multithreaded) code and create a mental picture of what it does in all use case scenarios rather than require to run the application, test different use case scenarios hands on, and step through the code line by line? Or am I a poor developer for demanding that kind of work environment? Is debugging for sissies? I would not trust someone who runs the code "in their head", if that's their only approach. It's a good place to start . Reproducing the bug, and fixing it and then demonstrating that the solution prevents the bug from reoccurring - that is where it should end . 
How should I talk with my team to convince them that my approach is reasonable, conservative and more bulletproof? Because if they never reproduced the bug, they can't know for certain that it is fixed. And if the customer comes back and complains that the bug is still there, that is not a Good Thing. After all, they are paying you big $$$ (I assume) to deal with this problem. If you fail to fix the problem properly, you've broken faith with the customer (to some degree) and if there are competitors in your market, they may not remain your customer. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/213898",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/66237/"
]
} |
213,985 | Should a developer close a ticket in a bug tracking tool? Or should only the scrum master or testers do it? What are the best practices? | Like everyone else has mentioned, it depends on the company or house rules of the team, but according to Joel's post, Painless Bug Tracking, bugs should be closed by the issuer of the bug to make sure that the bug is indeed fixed.
So in your case, should a developer close the ticket? Yes, if the developer is the one who raised the bug ticket. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/213985",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/104508/"
]
} |
214,059 | I often hear that a real programmer can easily learn any language within a week. Languages are just tools for getting things done, I'm told. Programming is the ultimate skill that must be learned and mastered. How can I make sure that I'm actually learning how to program rather than simply learning the details of a language? And how can I develop programming skills that can be applied towards all languages instead of just one? | Don't worry about meeting some ridiculous concept of "skill" so commonly heard in such statements like: All programming languages are basically the same. Once you pick up one language well you can pick up any other language quickly and easily. Languages are just tools, there's some overarching brain-magic that actually makes the software. These statements are all based on a flawed premise and betray a lack of experience across a broader spectrum of programming languages. They are very common statements and strongly believed by a great swath of programmers, I won't dispute that, but I will dispute their accuracy. This is proved simply: Spend one week (or really any amount of time greater than a couple days) trying to learn the fundamentals of Haskell , Prolog , or Agda . You will soon after start hearing the old Sesame Street song play in your head "One of these things is not like the others...". As it turns out, there is a whole swath of programming languages, techniques, and approaches which are so foreign from what 95% of us do or have ever done. Many are completely unaware that any of these other concepts even exist, which is fine and these concepts aren't necessary to be an employed and even effective programmer. But the fact remains: These techniques and approaches do exist, they are good for many different things and can be very useful, but they are not just like what you're used to and people cannot simply pick them up with an afternoon of fiddling. Furthermore, I would say the majority of cases where people claim they have or can learn such complex things as programming languages so quickly as a week, they are suffering from a bit of Dunning Kruger Effect , Wikipedia (emphasis mine): The Dunning–Kruger effect is a cognitive bias in which unskilled
individuals suffer from illusory superiority, mistakenly rating their
ability much higher than average. This bias is attributed to a
metacognitive inability of the unskilled to recognize their
mistakes. I would refer people to this more experienced perspective on the concept of learning to program by Peter Norvig: Learn to program in ten years . Researchers (Bloom (1985), Bryan & Harter (1899), Hayes (1989), Simon
& Chase (1973)) have shown it takes about ten years to develop
expertise in any of a wide variety of areas, including chess playing,
music composition, telegraph operation, painting, piano playing,
swimming, tennis, and research in neuropsychology and topology. The
key is deliberative practice: not just doing it again and again, but
challenging yourself with a task that is just beyond your current
ability, trying it, analyzing your performance while and after doing
it, and correcting any mistakes. Then repeat. And repeat again. Surely, there is a set of overarching principles that will make all languages easy to learn! Perhaps, but I would argue this set of principles is so large that there will almost always be languages outside of your one-week reach. As you add new concepts to the list you're familiar and comfortable with, this list of languages outside your immediate reach may shrink, but I have a hard time believing it will ever go away. The list of conceptual computing approaches to things is so broad it's baffling, from concatenative languages to vector based languages to languages specializing in AI or metaprogramming ( or languages which exist entirely to support regular expressions ). After ten years you will be able to generally program. This means you can write somewhat decent code in some language or style of languages. So after 10 years you are ready to start tackling these countless broad cross-cutting concepts for the rest of your life, and short of being Edsger W. Dijkstra , Donald Knuth or John D. Carmack , you're not going to get to all of them. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/214059",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/80843/"
]
} |
214,154 | I'm trying to test a class which calls some Hadoop web services. The code is pretty much of the form: method() {
...use Jersey client to create WebResource...
...make request...
...do something with response...
} e.g. there is a create directory method, a create folder method etc. Given that the code is dealing with an external web service that I don't have control over, how can I unit test this? I could try and mock the web service client/responses but that breaks the guideline I've seen a lot recently: "Don't mock objects you don't own". I could set up a dummy web service implementation - would that still constitute a "unit test" or would it then be an integration test? Is it just not possible to unit test at this low a level - how would a TDD practitioner go about this? | In my opinion you should mock the webservice calls if this is a unit test, as opposed to an integration test. Your unit test should not test whether the external webservice is working, or whether your integration with it is correct. Without getting too dogmatic about TDD, note that a side effect of turning your unit test into an integration test is that it's likely to run slower, and you want fast unit tests. Also, if the webservice is temporarily down or working incorrectly, should this cause your unit test to fail? It doesn't seem right. Your unit test should fail for only one reason: if there is a bug in the code in that "unit". The only portion of code that is relevant here is ...do something with response... . Mock the rest. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/214154",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/86765/"
]
} |
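The question above is Java/Jersey, but the usual way to honor both "mock the webservice calls" and "don't mock objects you don't own" is the same in any language: wrap the third-party client in a small interface you do own, and fake that in the unit test. A hypothetical sketch, written in C# to match the other examples in this document (the gateway name and the response format are invented for illustration):

// A thin wrapper you own around the third-party webservice client.
public interface IHadoopGateway
{
    string CreateDirectory(string path);   // returns the raw response body
}

// Code under test: only the "...do something with response..." part matters to the unit test.
public class DirectoryService
{
    private readonly IHadoopGateway _gateway;

    public DirectoryService(IHadoopGateway gateway)
    {
        _gateway = gateway;
    }

    public bool CreateDirectory(string path)
    {
        string response = _gateway.CreateDirectory(path);
        return response.Contains("\"boolean\":true");
    }
}

// Hand-rolled fake for the unit test: no network, no Hadoop, no third-party types mocked.
public class FakeHadoopGateway : IHadoopGateway
{
    public string CannedResponse { get; set; }

    public string CreateDirectory(string path)
    {
        return CannedResponse;
    }
}

// In a test:
// var service = new DirectoryService(new FakeHadoopGateway { CannedResponse = "{\"boolean\":true}" });
// service.CreateDirectory("/tmp/x") should return true.

The real Jersey-backed implementation of the wrapper is then exercised only by the slower integration tests against a dummy or real service.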
214,158 | There is the ongoing argument of free trial versus a freemium model (that is, a free-for-life version of their software with restricted and/or stripped down features) for allowing potential customers and users to test run their product. Upon my research, I can conclude that the free trial is the way to go on both for the benefit of the user experience of the individual using the software and for the benefit of the vendor in both aspect of sales and maximizing usage. There are many factors for a free trial software that can greatly maximize user usage like the length of the free trial. One keyword that reoccurs on my research for "freemium" is "frustrating". Many individuals chose to uninstall the software instead of having to use a piece of software where some features were unavailable to them. At the same time, these users never had the chance to use the "paid" features. Unbeknownst to them, and hidden by the very own vendors who are selling the software, they don't know and cannot know what benefits the Pro features will bring. Without first having to use them, a user will not know they have the feeling of "needing" something. Which brings me onto my next point of a free trial model. Some opinions of a free trial user is "I cannot imagine using this software without the Pro features." This goes back to the point of "the user not knowing they need something until they first understanding the feeling of have." Those that have had 14 days to use a the "full" version features said they cannot imagine not having or using the features provided there. So when fourteen days were over, they were more likely to dish out money than someone who's never experienced the full features. The length of the free trial is also an important factor is creating a lasting impression on users. In an experiment conducted by Visual Website Optimizer, they noticed that for a 14 day free trial versus a 30 day free trial, while the number of sign ups and installs were the same, the usage for the 14 day trial increased 102%. This, of course, in turn increased their revenue as well. Another very important point to mention is that "offering a useful and fully functional free version of the product" is VERY IMPORTANT. Fully functional free trials are effective in getting media coverage, and this publicity for new software and/or software vendors are fairly crucial. One other relevant aspect is the importance for users to give feedback. Consider, in the fully functional time-limited free trial, the ability for users to give feedback. One other feature important for our software is the need for telemetric data, that is, quantitative and comprehensive data on how a user uses our software. Some of usage statistics may fall into a legal grey area, as laws are different depending on the location in the United States, and the world. One way to combat this legal issue is to have an opt-in feature for gathering anonymous usage statistics. An opt-in feature would mean giving the user an option to turn off statistics gathering and at the same time, the user must be very well aware of what the gathering of anonymous usage information does. It is important to make it CLEAR to the user what data will be collected, what "we" will be doing with it, and make it easy to turn off any time, including allowing them to change their mind for turning it on or off. For more detailed statistics, like tracking individual activities of users, it could lead to legal issues. 
The Eclipse IDE logs detailed usage statistics, but it does it by the full consent of the user. We may have to potentially prepare a consent form with our legal team. The Eclipse Usage Information Collection collects this information:
1. Plug-ins that are started by the system.
2. Commands accessed via the keyboard shortcuts and actions invoked via menus or toolbars.
3. When the "view" of the editor is given focus.
4. System information like the version of the software being used, the operating system being used.
5. Description of internal errors. Kill Switch A kill switch for our software can be managed logging the initial data, encrypting it with a salt, and whenever it's an invalid date, that is, the user tried to change it, it would disable the software. Another option is to have internet authentication on install, log that date to a central web database, and check the date every time the application is opened. On disabling the software, we can delete vital DLLs. The option of having to pay to generate a report cannot be considered. I am interested in implementing a free trial version to my existing software. I plan on having the trial last 14 days. Upon the 14th day, my software would prompt the user to either pay for the paid version, or have the consequence of not being able to use it. The free trial version is entirely unlocked, meaning all paid features are there. However, my dilemma is about the "best" way to implement what to do for an end-of-trial solution. Do I delete vital DLLs? Have a user authentication system upon installation or use? Encrypt the initial time and date of use with a salt, and if it's an invalid date (AKA they try to change their initial date), disable the software? I am interested in knowing what are some effective measures of disabling software. | There are two issues here - one is a programming problem, and one is a business problem. For the second one, asking programmers about business analysis is about as good advice as you can get from your local bus driver; which is to say it may be good, or terrible, but you aren't asking experts so assign no inherent weight to any of it. (As an aside, one of my bus drivers often gives me good ideas.) But as for a programming problem, the downside is that it seems most people don't like the idea to start with. There's lots of good reasons for this, but they aren't really important here. Sadly, we are also in some ways a terrible group of people to ask, because by virtue of our position and knowledge most of us are pretty darn good at pirating software to the point that we think it pointless to try to implement trial measures at all! The fact is, your core issue is a business analysis and sales experiment problem, and there is no answer save from getting to know your customers and experimentation. It doesn't sound like you already have a big pool of customers to talk with about their product sales pipeline and how you can implement trial software to improve it, so one must understand that we are now in a position of stabbing blindly in the dark. The first step to working in a dark room is know you are in one! So, start somewhere. The goal is not the right answer, because I can only 100% guarantee one thing - you aren't going to start with the right answer. But start we must. Give Your Customers a KISS Also, "keep it simple, stupid." Or, from a programming methodology, "try the simplest thing that could possibly work." Then go from there. So, when installing get a date. Store it. Get your data on installs and usage, and bask in the warmth that data analysis can bring. Disable whatever part of the software you want when the trial expires - I suggest if (trial_expired()) . Then adjust. First of all, get good data. Are you expecting to provide updates in the future for your software? Then if someone tries to beat v1.0, don't worry about it. The easiest thing is uninstall the software, then reinstall it. New trial period (because your software deleted that old date). Do you care? 
If v1.0 is soon to be updated, then I strongly suggest you get a way to find out if this happens - but don't try to stop them (yet). It's like people who take 2 mints instead of 1 - it's just a mint, let it go. They probably weren't going to pay you for it (yet), anyway. How? Well, in Windows this was usually done by shoving random orphaned keys in the registry and weirdly named files in various common installation directories. Your installer would explicitly pretend they didn't exist and just leave them there. The level of computer sophistication this requires to defeat is vastly more than uninstalling. But if you don't try to stop someone from reinstalling, then you get data you are going to need. They aren't robbing your store, here - they are giving you a valuable opportunity to study a potential customer. Use it to tailor your marketing, software help prompts, email campaigns, 'special offers'. I'd try detecting the event and then, a day later, sending them a key they could enter into the program to extend their trial until next month. You might find a method that turns them into paying customers, or not; no way of knowing ahead of time! This seems trivial, but lets face it - do you really need anything more than that? If you are collecting data, that's worth money to you. You'll probably want to spend time on making sure emails are good, designing in-software messaging that you can update to guide new users (invisible sales pitches, really), rewriting your app blurb, hammering out bugs that would prevent any sane person from buying your software in the first place, etc. But the key here is you are communicating honestly and clearly to all potential customers - if you want this, you should pay us. Just because you are being (invisibly) magnanimous doesn't mean this is freeware - you are just going to be smart about it and not drive away customers because they haven't sealed the deal in the first two weeks. If you think about it, these people are coming in to your store to really look over your product. They are taking a test drive, yet it costs you (nearly) nothing. No successful store got that way by driving out customers who weren't yet ready to buy! Yet we all know if you can get something for free, forever, why pay? So use the best of both worlds. With each step you take you will be working a filter, and if your software is good you WILL be making conversions with every step. My simple suggestion: 1) 14 day free trial 2) Extend the free trial, no questions asked 3) Offer to extend the free trial again, if they'd be so kind to fill out a short form that communicates to you their opinions of your software so far. 4) Are you sure these people who are still actually using your software won't consider buying it? Find a way to entice them - or at least try to squeeze more info out of them on what you could do to get others to buy. Maybe allow them to 'request' an extension through a form, which your sales staff will kindly grant regardless and then use the info to see if they can deal with any objections they might have to buying your software. What if this user is evaluating it for use in their entire department? If the budget meeting is next month, do you really want them to be unable to use the software until then? 5) Maybe you cut them off now...and maybe you invite them for a special offer a week later at a discount. Maybe not. Maybe you offer to extend to give a final extension. 
These people don't owe you money, so treat them as potential future customers - not thieves, freeloaders, or people who need to pay their bill or get their service turned off. Everyone hates bill collectors, so don't act like one. Programmatically, let's face it - this isn't actually hard. Use orphaned files to keep track of dates. If you have to have a valid account like through iTunes, this tracking is way easier. If it isn't required, then I would generally suggest you not require it for a first install - never turn people away from the first version because they don't want to fill out a stupid form. People hate forms! And they don't know your software, so why 'pay' to fill out a form if they don't even know if your app works? On step 2, I'd get a signup/email to extend. Again, programming it is trivial. 3) After 30 days I'd probably require some kind of 'phone home' behavior. If people are going to crack that - like they do - congratu-friggin-lations! You must be getting pretty popular. But don't worry, if this isn't a video game people probably don't care enough to bother, so just change your code in the next version and make them go back to the drawing board. TL;DR; #1 Don't solve a problem you don't really have (you aren't Adobe, yet - DRM can be simple and effective if you don't over-think it).
#2 Don't ask a programmer when you need to ask a marketer, business analyst, or salesman.
#3 Treat your customers as people who might give you a living, not as people who owe you money, and especially not as thieves.
#4 DRM is really, really, easy. Just don't think you are going to stop people from using your software who will almost certainly never, ever, ever give you money...at least, not today. I have bought software I've pirated, years later. So don't piss off future customers needlessly, either. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/214158",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/104659/"
]
} |
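For the "DRM is really, really easy" part of the answer above, here is a hypothetical C# sketch of the simplest scheme the question proposes: record the first-run date once, tamper-protect it with an HMAC, and compare elapsed days at startup. The stamp file location and the embedded key are placeholders, and, as the answer stresses, any purely local check can be beaten; it only needs to keep honest users honest.

using System;
using System.Globalization;
using System.IO;
using System.Security.Cryptography;
using System.Text;

public static class TrialTracker
{
    private const int TrialDays = 14;
    private static readonly byte[] Key = Encoding.UTF8.GetBytes("replace-with-your-own-secret");

    public static bool IsTrialExpired(string stampPath)
    {
        DateTime firstRun = ReadOrCreateStamp(stampPath);
        return (DateTime.UtcNow - firstRun).TotalDays > TrialDays;
    }

    private static DateTime ReadOrCreateStamp(string stampPath)
    {
        if (File.Exists(stampPath))
        {
            string[] parts = File.ReadAllText(stampPath).Split('|');
            DateTime stored;
            if (parts.Length == 2 && parts[1] == Sign(parts[0]) &&
                DateTime.TryParse(parts[0], null, DateTimeStyles.RoundtripKind, out stored))
            {
                return stored;
            }
            return DateTime.MinValue;   // tampered or unreadable: treat the trial as already over
        }

        string now = DateTime.UtcNow.ToString("o");
        File.WriteAllText(stampPath, now + "|" + Sign(now));
        return DateTime.UtcNow;
    }

    private static string Sign(string value)
    {
        using (var hmac = new HMACSHA256(Key))
        {
            return Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(value)));
        }
    }
}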
214,239 | I'm interested in developing a large-scale user-facing website that is written in Java. As for design, I'm thinking of developing independent, modular services that can act as data providers to my main web application. As for writing these modular services (data providers), I can leverage an existing framework like Spring and develop these services following the RESTful design pattern, and expose resources via HTTP with a message format like JSON...or I can leverage an existing network framework like Netty ( http://netty.io/ ) and serialization format like Protobufs ( https://developers.google.com/protocol-buffers/docs/overview ) and develop a TCP server that sends back and forth the serialized protobuf payload. When should you choose one over the other? Would there be any benefit of using a serialization format like Protobufs and sending stream of bytes over the wire? Would there be overhead in just using JSON? How much overhead is there between using TCP/IP and using HTTP? When should you use Spring over Netty, and vice versa to build such a service? | There are definitely pros/cons about using JSON over REST vs. straight up TCP/IP with binary protocol and I think you are already suspecting that binary protocol will be faster. I can't tell you exactly how much faster (and this would depend on a lot of factors), but I would guess maybe 1-2 orders of magnitude difference. At first glance if something is 10-100 times slower than something else, you might have a knee-jerk reaction and go for "fast thing". However, this speed difference is only in the protocol itself. If there's database/file access on the server side, that won't get impacted by your choice of the transfer layer. In some cases, it might make your transfer layer speed much less significant. HTTP REST and JSON are good for a number of reasons: they are easily consumable by just about anyone. You can write your Web App, then turn around and publish your API for the rest of the world to use. Now anyone can hit the same end-points and get to your services they are easily debuggable, you can open a packet sniffer or simply dump incoming requests to text files and see what's going on. You can't do that with binary protocols they are easily extendable. You can add more attributes and data at a later time and not break compatibility with old clients. consumable by javascript clients (not sure they have protobuf JS parser yet, don't believe there's one) Protobufs over TCP/IP: they are faster If it was my choice, I would hands down go with HTTP REST and JSON. There's a reason that so many other companies and websites went that route. Also keep in mind that in the future you could always support 2 end points. If your design is correct, your end-point choice should be completely decoupled from your server-side business logic or the database. So if you realize later on that you need more speed for all/some requests, you should be able to add protobufs with minimal fuss. Right off the bat however, REST/JSON will get you off the ground faster and get you further. As far as Netty vs Spring goes. I haven't used Netty directly, but I believe it is just a light-weight web server where as Spring is a framework that provides a lot more for you than just that. It has data access layers, background job scheduling and (I think) an MVC model, so it is much more heavyweight. Which one to choose? If you decided to go HTTP way, then next question is probably how standard is your app? 
If you are about to write some crazy custom logic that doesn't fit the standard mold and all you need is just an HTTP server layer, go with Netty. However, I'm suspecting your app isn't that special and it could probably benefit from a lot of things that Spring has to offer. But that means that you should structure your app around Spring's framework and do things the way they expect you to do, which would mean learning more about Spring before diving into your product. Frameworks in general are great because again they get you off the ground faster, but the downside is that you have to fit into their mold instead of doing your own design and then expect the framework to just work. (*) - in the past it was pointed out that my posts do not reflect opinions of the entire world, so I'll go on the record and just add that I have limited experience with either Netty (I've used Play framework before which is based on Netty) or Spring (I've only read about it). So take what I say with a grain of salt. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/214239",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/93933/"
]
} |
214,437 | Whenever I come across a new API or programming language or even simple Linux man pages , I always (ever since I remember) avoided them and instead lazily relied on examples for gaining understanding of new concepts. Subconsciously, I avoid documentation/APIs whenever it is not straightforward or cryptic or just plain boring. It's been years since I began programming and now I feel like I need to mend my ways as I now realize that I'm causing more damage by refraining from reading cryptic/difficult documentation as it is still a million times better than examples as the official documentation has more coverage than any example out there. Even after realizing that examples should be treated as "added" value instead of the "primary" source for learning. How do I break this bad habit as a programmer or am I overthinking? | The habit of relying in preference on examples has nothing wrong: for you, it's just the fastest way to get your answer. Moreover, examples are visual. It's easier to parse visually an example rather than read paragraphs of text and extract the information you need. Example: In order to list the products, one should use Index action of the Products controller, given that GET is the only possible verb here (see [Affecting products] for more information about the actions used to create, modify and delete the products from the database). In order to obtain detailed information about a specific product, append its unique identifier to the end of the URI. If you want to get the list of every product available, don't append anything. You may also use filters, as described in the [REST filters for selecting data] section of the manual. Note that the list of products is limited to one thousand items. [Pagination] can be used to walk through the entire list, given that each page is still limited to one thousand items. You may also want to force the service to refresh the quantities in stock. This is done by setting the refresh-quantities to one. is detailed, but boring and barely readable. The fact that you need to follow links makes things even worse. If we append some samples, it becomes much easier to understand: GET Products/Index/ GET Products/Index/12345/ GET Products/Index/?skip=100&take=20 GET Products/Index/?category=12 GET Products/Index/?price=0..39.90 GET Products/Index/?category=12&skip=100&take=20 The fact that you use only the examples may be a problem. Don't plainly stop using the examples, but remember that once you got the idea, a more verbose documentation may help. For example, the sample above doesn't show that the list of products is limited to 1 000: you have to read the documentation for that. When do you know that you should read the documentation? Every time the API or the library is not behaving as you expected. For example, you grab the sample and do: GET Products/Index/?skip=6000&take=3000 For some reason, it returns less than 3 000 items, while you have over twenty thousand products in your database. Here, the API is not behaving like you expected, so it's a good time to read the detailed documentation. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/214437",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/104978/"
]
} |
214,601 | This is a bit of an open ended question but I wanted some opinions, as I grew up in a world where inline SQL scripts were the norm, then we were all made very aware of SQL injection based issues, and how fragile the sql was when doing string manipulations all over the place. Then came the dawn of the ORM where you were explaining the query to the ORM and letting it generate its own SQL, which in a lot of cases was not optimal but was safe and easy. Another good thing about ORMs or database abstraction layers were that the SQL was generated with its database engine in mind, so I could use Hibernate/Nhibernate with MSSQL, MYSQL and my code never changed it was just a configuration detail. Now fast forward to current day, where Micro ORMs seem to be winning over more developers I was wondering why we have seemingly taken a U-Turn on the whole in-line sql subject. I must admit I do like the idea of no ORM config files and being able to write my query in a more optimal manner but it feels like I am opening myself back up to the old vulnerabilities such as SQL injection and I am also tying myself to one database engine so if I want my software to support multiple database engines I would need to do some more string hackery which seems to then start to make code unreadable and more fragile. (Just before someone mentions it I know you can use parameter based arguments with most micro orms which offers protection in most cases from sql injection) So what are peoples opinions on this sort of thing? I am using Dapper as my Micro ORM in this instance and NHibernate as my regular ORM in this scenario, however most in each field are quite similar. What I term as inline sql is SQL strings within source code. There used to be design debates over SQL strings in source code detracting from the fundamental intent of the logic, which is why statically typed linq style queries became so popular its still just 1 language, but with lets say C# and Sql in one page you have 2 languages intermingled in your raw source code now. Just to clarify, the SQL injection is just one of the known issues with using sql strings, I already mention you can stop this from happening with parameter based queries, however I highlight other issues with having SQL queries ingrained in your source code, such as the lack of DB Vendor abstraction as well as losing any level of compile time error capturing on string based queries, these are all issues which we managed to side step with the dawn of ORMs with their higher level querying functionality, such as HQL or LINQ etc (not all of the issues but most of them). So I am less focused on the individual highlighted issues and more the bigger picture of is it now becoming more acceptable to have SQL strings directly in your source code again, as most Micro ORMs use this mechanism. Here is a similar question which has a few different view points, although is more about the inline sql without the micro orm context: https://stackoverflow.com/questions/5303746/is-inline-sql-hard-coding | What you are describing as "Inline SQL" should really be called "string concatenation without parameterization," and you don't have to do that to use a Micro ORM safely. Consider this Dapper example: string sql = "SELECT * from user_profile WHERE FirstName LIKE @name;";
var result = connection.Query<Profile>(sql, new {name = "%"+name+"%"}); It's fully parameterized, even though string concatenation is taking place. See the @ sign? Or this example: var dog = connection.Query<Dog>("select Age = @Age, Id = @Id",
new { Age = (int?)null, Id = guid }); which is roughly equivalent to the following ADO.NET code: List<Dog> dog = new List<Dog>();
using(var cmd = connection.CreateCommand()) {
cmd.CommandText = "select Age = @Age, Id = @Id";
cmd.Parameters.AddWithValue("Age", DBNull.Value);
cmd.Parameters.AddWithValue("Id", guid);
using(var reader = cmd.ExecuteReader()) {
while(reader.Read()) {
int? age = reader.IsDBNull(reader.GetOrdinal("Age")) ? (int?)null : reader.GetInt32(reader.GetOrdinal("Age"));
Guid id = reader.GetGuid(reader.GetOrdinal("Id"));
dog.Add(new Dog { Age = age, Id = id });
}
}
} If you need more flexibility than this, Dapper provides SQL Templates and an AddDynamicParms() function. All SQL Injection safe. So why use SQL strings in the first place? Well, for the same reasons you would use custom SQL in any other ORM. Maybe the ORM is code-generating sub-optimal SQL, and you need to optimize it. Maybe you want to do something that is difficult to do in the ORM natively, like UNIONs. Or, maybe you simply want to avoid the complexity of generating all those proxy classes. If you really don't want to write a SQL string for every CRUD method in Dapper (who does?), you can use this library: https://github.com/ericdc1/Dapper.SimpleCRUD/ That will get you extremely simple and straightforward CRUD, while still giving you the flexibility of hand-written SQL statements. Remember, the ADO.NET example above was the way everyone did it before ORM's came along; Dapper is just a thin veneer over that. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/214601",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/41512/"
]
} |
214,629 | Watching a CEO for a new "cloud computing" company describe his company on a finance TV program today, he said something like "Cloud computing is superior to old-fashioned client-server computing". Now I'm confused. Can someone please explain what "cloud computing" means in contrast to client-server? As far as I understand it, cloud computing is more of a network services model, such that I do not own or maintain the physical hardware. The "cloud" is all the back-end stuff. But I still might have an application that communicates with that "cloud" environment. And if I run a web site presents a form that a user fills out, pushes a button on the page, and returns some report that was generated by the web server, isn't that the same as "cloud" computing? And would you not consider my web browser as the "client"? Please note my question is specific to the concept of "cloud computing" with respect to "client-server". Sorry if this is an inappropriate question for this site; it's the one closest in the Stack universe and this is my first time here. I'm an old timer, programming since mainframe days in the late 70's. | Strictly speaking, there is no 'Cloud'. Not in the sense of what that CEO was spouting. There's an Internet, of course. There's hosted services. There's VPS's. There's content delivery systems. We've (technical folks) have adapted to the term to reference certain hosted service models. But 'Cloud' in consumer media is largely a marketing term loosely translated as 'internet'. More often than not, it also means 'I get to charge you by the month'. You are correct in your thoughts that the two terms, 'cloud' and 'client-server' aren't related. Having a service hosted 'in the cloud' (I always want to add a dramatic 'dun-dun-daaaaaaa' after using that phrase) does not make a client-server app any less client-server-y. For example, the 'web' primarily uses a client-server model. The web browser is the client. The web server is the server. That a web server is hosted 'in the cloud' does not change the fact that the web browser / web server relationship is client-server. So the term client-server defines the relationship between two entities in a system. Where the entities are physically hosted is irrelevant. Basically, you are correct. The two are not comparable. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/214629",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/105170/"
]
} |
214,639 | I started working at a company that is primarily C# oriented. We have a few people who like Java and JRuby, but a majority of programmers here like C#. I was hired because I have a lot of experience building web applications and because I lean towards newer technologies like JRuby on Rails or nodejs. I have recently started on a project building a web application with a focus on getting a lot of stuff done in a short amount of time. The software lead has dictated that I use mvc4 instead of rails. That might be OK, except I don't know mvc4, I don't know C# and I am the only one responsible for creating the web application server and front-end UI. Wouldn't it make sense to use a framework that I already know extremely well (Rails) instead of using mvc4? The reasoning behind the decision was that the tech lead doesn't know Jruby/rails and there would be no way to reuse the code. Counter arguments: He won't be contributing to the code and is, frankly, not needed on this project. So, it doesn't really matter if he knows JRuby/rails or not. We actually can reuse the code since we have a lot of java apps that
JRuby can pull code from and vice-versa. In fact, he has dedicated
some resources to convert a Java library to C#, instead of just
running the Java library on the JRuby on Rails app. All because he
doesn't like Java or JRuby I have built many web applications, but using something unfamiliar is causing some spin-up and I am unable to build an awesome application in as short of a time as I'm used to. This would be fine; learning new technologies is important in this field. The problem is, for this project we need to get a lot done fast. At what point should a developer be allowed to choose his tools? Is this dependent on the company? Does my company suck or is this considered normal? Do greener pastures exist? Am I looking at this the wrong way? | I'd say you have to talk to the team lead and say something like: I know you guys are a .NET shop, but I was actually hired for my Java/JRubyRails skills. I can build this new application in X amount of time using those tools that I already know. I could learn C#/mvc4 like you want, but it will take >> X amount of time. What do you want? This raises the issue of "skills-you-were-(assumedly)-hired-for" vs. "skills-you-need-now" and also shows that you're willing to learn the new skills, but that it will take longer to develop the new application as you are new to this tool-set. And you do want to show that you're willing to learn new skills. Not being open to learning new skills is a good way to ensure your employment ends when your skills are no longer needed. As to your question at the end: At what point should a developer be allowed to choose his tools? Is this dependent on the company? Does my company suck or is this considered normal? Do greener pastures exist? Am I looking at this the wrong way? It usually depends on the company. If a company buys MS tools and standardizes everything on the VisualStudio platform and .NET framework, it could get very awkward if one developer insists on using Linux and C. That is normal. Exceptions might exist where the company is less fussy about the editors, such as letting developers choose Vi vs. Emacs, as long as the output is the same. I know some companies even let developers choose Windows vs. Linux, but the language they work in has very good support and runtimes for both OSs. Why do companies do this? Consistency is one reason. It can be very difficult to debug things when the application is a patchwork of binaries built in the favourite languages/frameworks of various developers, built in different tools, and tested on very different systems. If all developers work on mostly similar set ups, those sorts of problems are resolved. In your case, it sounds like you were hired to work in technology that is non-standard in this company. This seems strange to me, and you might want to talk to the person who hired you about why they wanted that. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/214639",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/105175/"
]
} |
214,721 | I am reading a book called Rails AntiPatterns and they talk about using delegation to avoid breaking the Law of Demeter. Here is their prime example: They believe that calling something like this in the controller is bad (and I agree) @street = @invoice.customer.address.street Their proposed solution is to do the following: class Customer
has_one :address
belongs_to :invoice
def street
address.street
end
end
class Invoice
has_one :customer
def customer_street
customer.street
end
end
@street = @invoice.customer_street They are stating that since you only use one dot, you are not breaking the Law of Demeter here. I think this is incorrect, because you are still going through customer to go through address to get the invoice's street. I primarily got this idea from a blog post I read: http://www.dan-manges.com/blog/37 In the blog post the prime example is class Wallet
attr_accessor :cash
end
class Customer
has_one :wallet
# attribute delegation
def cash
@wallet.cash
end
end
class Paperboy
def collect_money(customer, due_amount)
    if customer.cash < due_amount
raise InsufficientFundsError
else
customer.cash -= due_amount
@collected_amount += due_amount
end
end
end The blog post states that although there is only one dot customer.cash instead of customer.wallet.cash , this code still violates the Law of Demeter. Now in the Paperboy collect_money method, we don't have two dots, we
just have one in "customer.cash". Has this delegation solved our
problem? Not at all. If we look at the behavior, a paperboy is still
reaching directly into a customer's wallet to get cash out. EDIT I completely understand and agree that this is still a violation and I need to create a method in Wallet called withdraw that handles the payment for me and that I should call that method inside the Customer class. What I don't get is that according to this process, my first example still violates the Law of Demeter because Invoice is still reaching directly into Customer to get the street. Can somebody help me clear the confusion. I have been searching for the past 2 days trying to let this topic sink in, but it is still confusing. | Your first example does not violate the law of Demeter. Yes, with the code as it stands, saying @invoice.customer_street does happen to get the same value that a hypothetical @invoice.customer.address.street would, but at each step of the traversal, the value returned is decided by the object being asked - it's not that "the paperboy reaches into the customer's wallet", it's that "the paperboy asks the customer for cash, and the customer happens to get the cash from their wallet ". When you say @invoice.customer.address.street , you're assuming knowledge of customer and address internals - this is the bad thing. When you say @invoice.customer_street , you are asking the invoice , "hey, I'd like the customer's street, you decide how you get it ". The customer then says to its address, "hey I'd like your street, you decide how you get it ". The thrust of Demeter is not 'you cannot ever know values from objects far away in the graph from you"; it is instead 'you yourself are not to traverse far along the object graph in order to obtain values'. I agree this may seem like a subtle distinction, but consider this: in Demeter-compliant code, how much code needs to change when the internal representation of an address changes? What about in non-Demeter-compliant code? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/214721",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/105101/"
]
} |
214,866 | I developed an offensive content checker for my website and want to publish it on GitHub . However, the source code contains a lot of offensive, racist and otherwise nasty content. The source is fully documented, but I wanted your opinion on whether it's acceptable to publish such work on GitHub or whether to leave the array of strings up to the imagination of the reader?! | I have to disagree with the ROT-13 solution. Obfuscating your banned words simply because the sight of them might offend someone is a waste of time. Your dictionary of bad words/bad-word-rules should come from a separate file anyway (which could be loaded at runtime, or embedded as a resource). Obfuscating this file simply makes it more difficult for you/other developers/your users to alter it, or fix any issues. Besides, if I saw a file called "banned_words.txt" on my hard-drive, I would expect it to contain a list of offensive words. A small sketch of loading such a file follows below. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/214866",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/105380/"
]
} |
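As a small illustration of the separate-file suggestion above, here is a hedged C# sketch. The file name, the one-word-per-line format and the crude tokenization are assumptions made for the example, not details from the question.
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

public static class BannedWords
{
    // One banned word per line; the list can be edited without recompiling the checker.
    public static HashSet<string> Load(string path)
    {
        return new HashSet<string>(
            File.ReadLines(path)
                .Select(line => line.Trim())
                .Where(line => line.Length > 0),
            StringComparer.OrdinalIgnoreCase);
    }

    public static bool ContainsBannedWord(HashSet<string> banned, string text)
    {
        // Very naive word splitting, just to show the lookup.
        return text.Split(' ', ',', '.', '!', '?')
                   .Any(word => banned.Contains(word));
    }
}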
214,889 | I'm trying to understand what an event loop is. Often the explanation is that in an event loop, you do something until you're notified that an event has occurred. You then handle the event and continue doing what you were doing before. To map the above definition with an example. I have a server which 'listens' in a event loop, and when a socket connection is detected, the data from it gets read and displayed, after which the server resumes/starts listening as it did before. However, this event happening and us getting notified 'just like that' are to much for me to handle. You can say: "It's not 'just like that' you have to register an event listener". But what's an event listener but a function which for some reason isn't returning. Is it in it's own loop, waiting to be notified when an event happens? Should the event listener also register an event listener? Where does it end? Events are a nice abstraction to work with, however just an abstraction. I believe that in the end, polling is unavoidable. Perhaps we are not doing it in our code, but the lower levels (the programming language implementation or the OS) are doing it for us. It basically comes down to the following pseudo code which is running somewhere low enough so it doesn't result in busy waiting: while(True):
do stuff
check if event has happened (poll)
    do other stuff This is my understanding of the whole idea, and I would like to hear if this is correct. I'm open to accepting that the whole idea is fundamentally wrong, in which case I would like the correct explanation. | Most event loops will suspend if there are no events ready, which means the operating system will not give the task any execution time until an event happens. Say the event is a key being pressed. You might ask if there's a loop somewhere in the operating system checking for keypresses. The answer is no. Keys being pressed generate an interrupt , which is handled asynchronously by the hardware. Likewise for timers, mouse movements, a packet arriving, etc. In fact, for most operating systems, polling for events is the abstraction. The hardware and OS handle events asynchronously and put them in a queue that can be polled by applications. You only really see true polling at the hardware level in embedded systems, and even there not always. A minimal sketch of a blocking event loop follows below. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/214889",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/102293/"
]
} |
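A minimal sketch of the suspend-until-an-event-arrives idea, written in C# purely for illustration (the original pseudocode was language-neutral). BlockingCollection<T> blocks the consuming thread without busy waiting; the producer task stands in for the hardware/OS layer that enqueues events.
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public static class EventLoopSketch
{
    public static void Main()
    {
        var events = new BlockingCollection<string>();

        // Producer: stands in for interrupts/OS callbacks pushing events into the queue.
        Task.Run(() =>
        {
            events.Add("key pressed");
            events.Add("socket data ready");
            events.CompleteAdding();
        });

        // Event loop: GetConsumingEnumerable() suspends the thread until an event
        // is available, so no CPU is burned polling an empty queue.
        foreach (var evt in events.GetConsumingEnumerable())
        {
            Console.WriteLine("handling: " + evt);
        }
    }
}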
214,952 | I understand how GitHub works, but one thing I've been confused about is, why almost every OSS project lately has a "Fork me on GitHub" link on their homepage. For example, http://jqtjs.com/ , http://www.daviddurman.com/flexi-color-picker/ , and others. Why is this so common? Is it that they want/need code validation, checking for security/performance improvements that they may not know how to do? Is it meant to show that this is a collaborative project - you're welcome to add improvements? Do they work for GitHub, or want to promote their service? Oddly enough, I don't think I've seen a "Fork project on Bitbucket " logo recently. My first reaction to that logo was that the project probably needs to be modified (forked) in order to integrate it with anything useful - or that they are encouraging fragmented codebase, encouraging everyone to make their own fork of the project. But I don't think that is the intent. | Is it meant to show that this is a collaborative project - you're welcome to add improvements? Yes: you don't have the right to push a commit directly at their repo. But you do have the possibility to fork their repo , which makes it your repo, and push commit from there, preparing pull requests . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/214952",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/105443/"
]
} |
214,968 | There is a growing industry now with more than 30 companies playing in the Backend-As-A-Service (BaaS) market. The principle is simple: give companies a secure way of exposing data housed on premises and behind the firewall publicly. This can include database data, as well as Legacy PC data through established connectors; SAP for example provides a connector for transacting with their legacy systems. Early attempts were fixed providers for specific systems like SAP, IBM or Oracle, but the new breed is extensible, allowing Channel Partners and Consultants to build robust integration applications that can consume whatever data sources the client wants to expose. I just happen to be close to finishing a Cloud Based HTML5 application platform that provides robust integration services, and I would like to break ground on an extensible data proxy to complete the system. From what I can gather, I need to provide either an installable web service of some kind, or a Cloud service which the client can configure with VPN for interactions. Then I can build in connectors, which can be activated with a service account, and expose those transactions via web services of some kind (JSON, SOAP, etc). I can also provide a framework that allows people to build in their own connectors, and use some kind of schema to hook those connectors into the proxy. The end result is some kind of public facing web service that could securely be consumed by applications to show data through HTML5 on any device. My gut is, this isn't as hard as it sounds. Almost all of the 30+ companies (With more popping up almost weekly) have all come into existence in the last 18 months or so, which tells me either the root technology, or the skillset to create the technology is in abundance right now. Where should I start on this? Are there some open source projects I can leverage? A specific group of developers I can hire? I'm confident someone here can set me on the right path and save me some time. You don't see this many companies spring up this rapidly if they are all starting from scratch with proprietary technology. The Register: WTF is BaaS One Minute Video from Kony on their BaaS | Is it meant to show that this is a collaborative project - you're welcome to add improvements? Yes: you don't have the right to push a commit directly at their repo. But you do have the possibility to fork their repo , which makes it your repo, and push commit from there, preparing pull requests . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/214968",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/62802/"
]
} |
215,065 | Can anyone help me in understanding how float values are stored in the memory . My doubt is here float values contain ' .' (for example 3.45 ) how the '.' will be represented in the memory? Can anyone please clarify me with a diagram? | The decimal point is not explicitly stored anywhere; that's a display issue. The following explanation is a simplification; I'm leaving out a lot of important details and my examples aren't meant to represent any real-world platform. It should give you a flavor of how floating-point values are represented in memory and the issues associated with them, but you will want to find more authoritative sources like What Every Computer Scientist Should Know About Floating-Point Arithmetic . Start by representing a floating-point value in a variant of scientific notation, using base 2 instead of base 10. For example, the value 3.14159 can be represented as 0.7853975 * 2 2 0.7853975 is the significand , a.k.a. the mantissa; it's the part of the number containing the significant digits. This value is multiplied by the base 2 raised to the power of 2 to get 3.14159. Floating-point numbers are encoded by storing the significand and the exponent (along with a sign bit). A typical 32-bit layout looks something like the following: 3 32222222 22211111111110000000000
1 09876543 21098765432109876543210
+-+--------+-----------------------+
| | | |
+-+--------+-----------------------+
^ ^ ^
| | |
| | +-- significand
| |
| +------------------- exponent
|
 +------------------------ sign bit Like signed integer types, the high-order bit indicates sign; 0 indicates a positive value, 1 indicates negative. The next 8 bits are used for the exponent. Exponents can be positive or negative, but instead of reserving another sign bit, they're encoded such that 10000000 represents 0, so 00000000 represents -128 and 11111111 represents 127. The remaining bits are used for the significand. Each bit represents a negative power of 2 counting from the left, so:
01101 = 0 * 2^-1 + 1 * 2^-2 + 1 * 2^-3 + 0 * 2^-4 + 1 * 2^-5 = 0.25 + 0.125 + 0.03125 = 0.40625
Some platforms assume a "hidden" leading bit in the significand that's always set to 1, so values in the significand are always between [0.5, 1). This allows these platforms to store values with a slightly greater precision (more on that below). My example doesn't do this. So our value of 3.14159 would be represented as something like 0 10000010 11001001000011111100111
^ ^ ^
| | |
| | +--- significand = 0.7853975...
| |
| +------------------- exponent = 2 (130 - 128)
|
+------------------------- sign = 0 (positive)
 value= -1^(sign) * 2^(exponent) * (significand)
 value= -1^0 * 2^2 * 0.7853975...
value= 3.14159... Now, something you'll notice if you add up all the bits in the significand is that they don't total 0.7853975; they actually come out to 0.78539747. There aren't quite enough bits to store the value exactly ; we can only store an approximation. The number of bits in the significand determines the precision , or how many significant digits you can store. 23 bits gives us roughly 6 decimal digits of precision. 64-bit floating point types offer enough bits in the significand to give roughly 12 to 15 digits of precision. But be aware that there are values that cannot be represented exactly no matter how many bits you use. Just as values like 1/3 cannot be represented in a finite number of decimal digits, values like 1/10 cannot be represented in a finite number of bits. Since values are approximate, calculations with them are also approximate, and rounding errors accumulate. The number of bits in the exponent determines the range (the minimum and maximum values you can represent). But as you move towards your minimum and maximum values, the size of the gap between representable values increases. That is, if you can't exactly represent values between 0.785397 and 0.785398, then you can't exactly represent values between 7.85397 and 7.85398 either, or values between 78.5397 and 78.5398, or values between 785397.0 and 785398.0. Be careful when multiplying very large (in terms of magnitude) numbers by very small numbers. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/215065",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/105543/"
]
} |
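To see the same decomposition on a real runtime, here is a small C# sketch. Note that it inspects the actual IEEE 754 single-precision format, which differs from the simplified layout in the answer above: the exponent is stored with a bias of 127 and the significand has an implicit leading 1 bit.
using System;

public static class FloatBits
{
    public static void Main()
    {
        float value = 3.14159f;

        // Reinterpret the 32 bits of the float as an int so the fields can be masked out.
        int bits = BitConverter.ToInt32(BitConverter.GetBytes(value), 0);

        int sign = (bits >> 31) & 0x1;        // 1 bit
        int exponent = (bits >> 23) & 0xFF;   // 8 bits, stored with a bias of 127
        int significand = bits & 0x7FFFFF;    // 23 bits, implicit leading 1 not stored

        Console.WriteLine("sign        = " + sign);
        Console.WriteLine("exponent    = " + exponent + " (unbiased: " + (exponent - 127) + ")");
        Console.WriteLine("significand = 0x" + significand.ToString("X6"));
    }
}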
215,429 | I have this project which stores product details from amazon into the database. Just to give you an idea on how big it is: [
{
"title": "Genetic Engineering (Opposing Viewpoints)",
"short_title": "Genetic Engineering ...",
"brand": "",
"condition": "",
"sales_rank": "7171426",
"binding": "Book",
"item_detail_url": "http://localhost/wordpress/product/?asin=0737705124",
"node_list": "Books > Science & Math > Biological Sciences > Biotechnology",
"node_category": "Books",
"subcat": "",
"model_number": "",
"item_url": "http://localhost/wordpress/wp-content/ecom-plugin-redirects/ecom_redirector.php?id=128",
"details_url": "http://localhost/wordpress/product/?asin=0737705124",
"large_image": "http://localhost/wordpress/wp-content/plugins/ecom/img/large-notfound.png",
"medium_image": "http://localhost/wordpress/wp-content/plugins/ecom/img/medium-notfound.png",
"small_image": "http://localhost/wordpress/wp-content/plugins/ecom/img/small-notfound.png",
"thumbnail_image": "http://localhost/wordpress/wp-content/plugins/ecom/img/thumbnail-notfound.png",
"tiny_img": "http://localhost/wordpress/wp-content/plugins/ecom/img/tiny-notfound.png",
"swatch_img": "http://localhost/wordpress/wp-content/plugins/ecom/img/swatch-notfound.png",
"total_images": "6",
"amount": "33.70",
"currency": "$",
"long_currency": "USD",
"price": "$33.70",
"price_type": "List Price",
"show_price_type": "0",
"stars_url": "",
"product_review": "",
"rating": "",
"yellow_star_class": "",
"white_star_class": "",
"rating_text": " of 5",
"reviews_url": "",
"review_label": "",
"reviews_label": "Read all ",
"review_count": "",
"create_review_url": "http://localhost/wordpress/wp-content/ecom-plugin-redirects/ecom_redirector.php?id=132",
"create_review_label": "Write a review",
"buy_url": "http://localhost/wordpress/wp-content/ecom-plugin-redirects/ecom_redirector.php?id=19186",
"add_to_cart_action": "http://localhost/wordpress/wp-content/ecom-plugin-redirects/add_to_cart.php",
"asin": "0737705124",
"status": "Only 7 left in stock.",
"snippet_condition": "in_stock",
"status_class": "ninstck",
"customer_images": [
"http://localhost/wordpress/wp-content/uploads/2013/10/ecom_images/51M2vvFvs2BL.jpg",
"http://localhost/wordpress/wp-content/uploads/2013/10/ecom_images/31FIM-YIUrL.jpg",
"http://localhost/wordpress/wp-content/uploads/2013/10/ecom_images/51M2vvFvs2BL.jpg",
"http://localhost/wordpress/wp-content/uploads/2013/10/ecom_images/51M2vvFvs2BL.jpg"
],
"disclaimer": "",
"item_attributes": [
{
"attr": "Author",
"value": "Greenhaven Press"
},
{
"attr": "Binding",
"value": "Hardcover"
},
{
"attr": "EAN",
"value": "9780737705126"
},
{
"attr": "Edition",
"value": "1"
},
{
"attr": "ISBN",
"value": "0737705124"
},
{
"attr": "Label",
"value": "Greenhaven Press"
},
{
"attr": "Manufacturer",
"value": "Greenhaven Press"
},
{
"attr": "NumberOfItems",
"value": "1"
},
{
"attr": "NumberOfPages",
"value": "224"
},
{
"attr": "ProductGroup",
"value": "Book"
},
{
"attr": "ProductTypeName",
"value": "ABIS_BOOK"
},
{
"attr": "PublicationDate",
"value": "2000-06"
},
{
"attr": "Publisher",
"value": "Greenhaven Press"
},
{
"attr": "SKU",
"value": "G0737705124I2N00"
},
{
"attr": "Studio",
"value": "Greenhaven Press"
},
{
"attr": "Title",
"value": "Genetic Engineering (Opposing Viewpoints)"
}
],
"customer_review_url": "http://localhost/wordpress/wp-content/ecom-customer-reviews/0737705124.html",
"flickr_results": [
"http://localhost/wordpress/wp-content/uploads/2013/10/ecom_images/5105560852_06c7d06f14_m.jpg"
],
"freebase_text": "No around the web data available yet",
"freebase_image": "http://localhost/wordpress/wp-content/plugins/ecom/img/freebase-notfound.jpg",
"ebay_related_items": [
{
"title": "Genetic Engineering (Introducing Issues With Opposing Viewpoints), , Good Book",
"image": "http://localhost/wordpress/wp-content/uploads/2013/10/ecom_images/140.jpg",
"url": "http://localhost/wordpress/wp-content/ecom-plugin-redirects/ecom_redirector.php?id=12165",
"currency_id": "$",
"current_price": "26.2"
},
{
"title": "Genetic Engineering Opposing Viewpoints by DAVID BENDER - 1964 Hardcover",
"image": "http://localhost/wordpress/wp-content/uploads/2013/10/ecom_images/140.jpg",
"url": "http://localhost/wordpress/wp-content/ecom-plugin-redirects/ecom_redirector.php?id=130",
"currency_id": "AUD",
"current_price": "11.99"
}
],
"no_follow": "rel=\"nofollow\"",
"new_tab": "target=\"_blank\"",
"related_products": [],
"super_saver_shipping": "",
"shipping_availability": "",
"total_offers": "7",
"added_to_cart": ""
}
] So the structure for the table is: asin title details (the product details in json) Will the performance suffer if I have to store like 10,000 products? Is there any other way of doing this? I'm thinking of the following, but the current setup is really the most convenient one since I also have to use the data on the client side: store the product details in a file. So something like ASIN123.json store the product details in one big file. (I'm guessing it will be a drag to extract data from this file) store each of the fields in the details in its own table field Thanks in advance! UPDATE Thanks for the answers! I just want to add some more details to my question.
First, the records are updated for a specific interval. Only specific data such as the price or the title are updated. Second, I'm also using the json encoded data in the client-side so I thought at first it would be easier to just have it json encoded so I can easily use it in the client side without having to convert. Does this change your opinion about simply storing the fields in a regular table field in an RDBMS setup? | Size is not so much of an issue, the ability to query and maintain the data however is. If, for example, Greenhaven Press decides they want to change their name to Greenhaven Press International, you'll have to find the record, deserialize it, change it, serialize it, pump it back into the database. Consider this: does storing these objects as serialized data offer you a clear added value over storing it in a relational form? If the answer is no, then it might not be worth the hassle. UPDATE As far as your update of your question goes: I'm inclined to say no, it makes little or no difference. Whether you update one field or all of them in this json string is irrelevant because the whole process is identical. Don't forget that your requirements might change; even though you're using json on the client side now doesn't mean you'll need json in the future. Storing your data in a relational form guarantees technology-independence while preserving relationships, data constraints and queryable metadata: this is where the true value of a relational db lies. Discarding those advantages will neither give you a performance gain nor make your application more scalable or flexible. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/215429",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/25953/"
]
} |
215,482 | In JavaScript: The Good Parts by Douglas Crockford, he mentions in his inheritance chapter, The other benefit of classical inheritance is that it includes the specification of a system of types. This mostly frees the programmer from having to write explicit casting operations, which is a very good thing because when casting, the safety benefits of a type system are lost. So first of all, what actually is safety? protection against data corruption, or hackers, or system malfunctions, etc.? What are the safety benefits of a type system? What makes a type system different that allows it to provide these safety benefits? | Type systems prevent errors Type systems eliminates illegal programs. Consider the following Python code. a = 'foo'
b = True
c = a / b In Python, this program fails; it throws an exception. In a language like Java, C#, Haskell , whatever, this isn't even a legal program. You entirely avoid these errors because they simply aren't possible in the set of input programs. Similarly, a better type system rules out more errors. If we jump up to super advanced type systems we can say things like this: Definition divide x (y : {x : integer | x /= 0}) = x / y Now the type system guarantees that there aren't any divide-by-0 errors. What sort of errors Here's a brief list of what errors type systems can prevent Out-of-range errors SQL injection Generalizing 2, many safety issues (what taint checking is for in Perl ) Out-of-sequence errors (forgetting to call init) Forcing a subset of values to be used (for example, only integers greater than 0) Nefarious kittens (Yes, it was a joke) Loss-of-precision errors Software transactional memory (STM) errors (this needs purity, which also requires types) Generalizing 8, controlling side effects Invariants over data structures (is a binary tree balanced?) Forgetting an exception or throwing the wrong one And remember, this is also at compile time. No need to write tests with 100% code coverage to simply check for type errors, the compiler just does it for you :) Case study: Typed lambda calculus Alright, let's examine the simplest of all type systems, simply typed lambda calculus . Basically there are two types, Type = Unit | Type -> Type And all terms are either variables, lambdas, or application. Based on this, we can prove that any well typed program terminates. There is never a situation where the program will get stuck or loop forever. This isn't provable in normal lambda calculus because well, it isn't true. Think about this, we can use type systems to guarentee that our program doesn't loop forever, rather cool right? Detour into dynamic types Dynamic type systems can offer identical guarantees as static type systems, but at runtime rather than compile time. Actually, since it's runtime, you can actually offer more information. You lose some guarantees however, particularly about static properties like termination. So dynamic types don't rule out certain programs, but rather route malformed programs to well-defined actions, like throwing exceptions. TLDR So the long and the short of it, is that type systems rule out certain programs. Many of the programs are broken in some way, therefore, with type systems we avoid these broken programs. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/215482",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/105937/"
]
} |
215,523 | I've been a freelancer and a coder by night for a while, and recently, I've been hired after several levels of interviews in a nice NY company, even though I've some lacks in specific fields. Is this common for companies to hire seniors with less experience? Will they wait some weeks to respect a certain learning curve? I don't know anything about working in a company, so that's why I worry. After one week, I'm still checking and exploring sources, but after one week of work, it seems that some coworkers are considering that I'm slow. I'm good in maths, physics, algorithms, but still I need to learn about all the templates used in this company. Anyone here already received a less-experienced senior member in his team? Is this acceptable? I'm planing on having a meeting with my boss to stop worrying about that. Sounds like a good idea? [EDIT] Thanks for these answers. I'm definitely a -new - senior developer. I returned to the office with more confidence on Monday. I guess that it's normal to feel a bit incompetent in front of unknown templates/sources during the first weeks when you receive a good pay. | There is no commonly accepted definition of "senior developer". Definitions may exist within organizations but a senior developer usually represents someone: With software development experience (3-5 years minimum), Can work without constant supervision (often with no supervision), Familiar with the development environment and tools, Capable of supervising or teaching junior developers, Capable of designing and implementing small to medium sized projects. It is hard to talk about your specific situation but there is usually is a learning curve when joining a new team. No matter how standard the tools and processes they use, each team has a history of decisions that lead them to their current state. If the organization uses custom libraries or environments, my first question would be to ask about documentation and training . Big companies may have formal training for new employees, even senior ones. Read any existing designs, the build environment documentation, processes and so on. If these do not exist, offer to document them . I would then ask to pair with an existing senior developer . This is usually the fastest way to learn what is expected and how things work. How did they solve that problem? How much effort did they spend on unit tests and reviews? Why did they do it this way and not that way? Ensure the other developer helps you setup your development environment and walks you through the release process , too. Make it clear to them you know the language and tools, just not their techniques. For example, if you did things a different way previously and think it is better than their way, tentatively and respectfully suggest it. Hopefully, pairing with them will not slow them down. They may even appreciate another set of eyes to catch typos and issues before they are committed to source control. Lastly, realize you are not going to fully understand a large project within a week so start fixing small bugs or features . Make sure your buddy reviews them and you get any and all feedback. You will miss things. You will make mistakes. That's OK. Learn from them, do not repeat them and work hard. If you are good at what you do, you will get there. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/215523",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/105972/"
]
} |
215,562 | Our development shop would really like to do more agile projects but we have a problem getting clients on board. Many clients want a budget and a deadline. It's hard to sell a client on an agile project when our competitors do come up with waterfall-based fixed deadlines and fixed prices. We know their fixed numbers are bad, but the client doesn't know that. So, we end up looking bad to the client because we can't fix the price or a deadline but our competitors can. So, how can you get your sales force to successfully sell a project that uses agile development methods, or a product that is developed using such methods? All the information I found seems to focus on project management and developers. | The key to doing this well is by use of a support contract. Basically, when you first sell the client, you sell them based of your expertise, and you do it waterfall.
That means a contract that sets the scope and a firm deadline. This is what the client wants. The client more or less knows the scope. Waterfall works very well in a fixed & defined scope environment, I would say it works better than agile in such environments. And in this case it gives the client a level of comfort when the tendency is to be nervous because he has never worked with you before. That’s OK, Agile is not always better than waterfall. So you have a fixed price contract for X scope. Then you tell the client “ Look, you are going to want to make changes, and you are going to need us to support you post production, let’s set aside 20% of your budget for these things to be used on an as-needed basis by means of a support contract .” Should a change come up during the project, simply defer it to be handled under the support contract. (Assuming this change would cause a serious disruption to the project) The terms of the support contract are as follows: “ Work to be done on a per hour basis, as requested by the client, can be used for change requests or general system support and maintenance .” BAM! You are in Agile. You can then continue to extend the support contract, and simply use the support contract as the means to run new projects. Additionally, if these hours are purchased and paid for upfront, we usually give the client a 15% discount. It's Win-Win. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/215562",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/13684/"
]
} |
215,764 | Scenario Currently, I am apart of a health care project whose main requirement is to capture data with unknown attributes using user generated forms by health care providers. The second requirement is that data integrity is key and that the application will be used for 40+ years. We are currently migrating the client's data from the past 40 years from various sources (Paper, Excel, Access, etc...) to the database. Future requirements are: Workflow management of forms Schedule management of forms Security/Role based management Reporting engine Mobile/Tablet support Situation Only 6 months in, the current (contracted) architect/senior programmer has taken the "fast" approach and has designed a poor system. The database is not normalized, the code is coupled, the tiers have no dedicated purpose and data is starting to go missing since he has designed some beans to perform "deletes" on the database. The code base is extremely bloated and there are jobs just to synchronize data since the database is not normalized. His approach has been to rely on backup jobs to restore missing data and doesn't seem to believe in re-factoring. Having presented my findings to the PM, the architect will be removed when his contract ends. I have been given the task to re-architect this application. My team consists of me and one junior programmer. We have no other resources. We have been granted a 6-month requirement freeze in which we can focus on re-building this system. I suggested using a CMS system like Drupal, but for policy reasons at the client's organization, the system must be built from scratch. This is the first time that I will be designing a system with a 40+ lifespan. I have only worked on projects with 3-5 year lifespans, so this situation is very new, yet exciting. Questions What design considerations will make the system more "future proof"? What questions should be asked to the client/PM to make the system more "future proof"? | Data is King I think its a bit unreasonable to expect a web application circa 2013 to be still up and runnable in 2053. Technologies are going to change. Platforms are going to come and go. HTML may be a quaint memory by then. But your data will still be around. So data is your primary focus. As long as your data is still there, people will be able to adapt to whatever new technologies come about. Make sure your data schemes are well thought out, and well suitable for expansion. Take your time spec'ing them out. Regarding the actual applications, your company is probably correct here in having a 'build from scratch' directive. I maintain a couple 10+ year old web apps, and I'm very glad they are not locked into the prevailing CMS systems of 2003. They use home grown, very simple frameworks. I think for something like this you are better off with a very basic framework that you create specficially for the needs of the project. But the reality is, over 40 years, the company will (hopefully) be making quite a few front-end and back end services to adapt to evolving platforms. So given that, I'd target a 5-10 year lifetime for individual user-facing applications. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/215764",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/100991/"
]
} |
215,807 | Whenever I read something or hear someone talking about HTML5 , CSS and JavaScript support, they always refer to Internet Explorer with the version number such as Internet Explorer 6, and Internet Explorer 9. But they only refer to Google Chrome , Firefox , Safari and others without version numbers. Shouldn't they also specify the version number in which certain web technologies are incompatible for other browsers instead of just Internet Explorer? | Well, that mainly has two reasons: 1. IE versions have major differences While other browsers may have no (obvious) difference between versions, Internet Explorer, being the only browser pre-installed (and basically hard-coded) in Windows, has huge differences from version 6 to version 10. Version 10 is almost as good a browser as Chrome or Firefox , while version 6 is an unreliable, slow, good-for-nothing, over-customized browser still used by some non tech-savvy , and it is incompatible with thousands of features introduced after it was created (that was over a decade ago). You can see some compatibility examples here . 2. Being pre-installed has an impact on the market Since IE comes with Windows, and while other OS are gaining up publicity, Windows has been the default for thousands (if not millions) of people, for a long time. Since these people hire programmers to do stuff, like make their websites , programmers are forced to make it look good on the client's screen , even if that doesn't always target the largest audience. Of course, most of us are trying to have a good result on both the client's screen and their clients' screens, but that isn't always easy, if our client has IE 6. (And believe me: some of them will think that you are not a good developer if you ask them to change their browser) So, in conclusion, we tend to always refer to IE with its version, because it does mean something different for development . P.S.: Here is a great blog article about the history of IE and why geeks hate it which does a great presentation on a once good browser. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/215807",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/83729/"
]
} |
215,826 | Recently I started programming in Groovy for a integration testing framework, for a Java project. I use Intellij IDEA with Groovy plug-in and I am surprised to see as a warning for all the methods that are non-static and do not depend on any instance fields. In Java, however, this is not an issue (at least from IDE's point of view). Should all methods that do not depend onto any instance fields be transformed into static functions? If true, is this specific to Groovy or it is available for OOP in general? And why? | Note that IDEA has this inspection for Java as well, it is called Method may be 'static' , This inspection reports any methods which may safely be made static. A method may be static if it doesn't reference any of its class' non static methods and non static fields and isn't overridden in a sub class... Thing is though that for Java code, this inspection is turned off by default (programmer can turn it on at their discretion). The reason for this is most likely that validity / usefulness of such an inspection could be challenged, based on a couple of quite authoritative sources. To start with, official Java tutorial is rather restrictive on when methods should be static: A common use for static methods is to access static fields. Given above, one could argue that turning on by default mentioned inspection doesn't comply with recommended use of static modifier in Java. Besides, there is a couple other sources that go as far as suggesting a judicious approach on using ideas that lie behind this inspection or even discouraging it. See for example Java World article - Mr. Happy Object teaches static methods : Any method that is independent of instance state is a candidate for being declared as static. Note that I say "candidate for being declared as static." Even in the previous example nothing forces you to declare instances() as static. Declaring it as static just makes it more convenient to call since you do not need an instance to call the method. Sometimes you will have methods that don't seem to rely on instance state. You might not want to make these methods static. In fact you'll probably only want to declare them as static if you need to access them without an instance. Moreover, even though you can declare such a method as static, you might not want to because of the inheritance issues that it interjects into your design. Take a look at "Effective Object-Oriented Design" to see some of the issues that you will face... An article at Google testing blog even goes as far as claiming Static Methods are Death to Testability : Lets do a mental exercise. Suppose your application has nothing but static methods. (Yes, code like that is possible to write, it is called procedural programming.) Now imagine the call graph of that application. If you try to execute a leaf method, you will have no issue setting up its state, and asserting all of the corner cases. The reason is that a leaf method makes no further calls. As you move further away from the leaves and closer to the root main() method it will be harder and harder to set up the state in your test and harder to assert things. Many things will become impossible to assert. Your tests will get progressively larger. Once you reach the main() method you no longer have a unit-test (as your unit is the whole application) you now have a scenario test. Imagine that the application you are trying to test is a word processor. There is not much you can assert from the main method... 
Sometimes a static methods is a factory for other objects. This further exuberates the testing problem. In tests we rely on the fact that we can wire objects differently replacing important dependencies with mocks. Once a new operator is called we can not override the method with a sub-class. A caller of such a static factory is permanently bound to the concrete classes which the static factory method produced. In other words the damage of the static method is far beyond the static method itself. Butting object graph wiring and construction code into static method is extra bad, since object graph wiring is how we isolate things for testing... You see, given above it looks only natural that mentioned inspection is turned off by default for Java. IDE developers would have a really hard time explaining why they think it is so important as to set it on by default, against widely recognized recommendations and best practices. For Groovy, things are quite different. None of arguments listed above apply, particularly the one about testability, as explained eg in Mocking Static Methods in Groovy article at Javalobby: If the Groovy class you're testing makes calls a static method on another Groovy class, then you could use the ExpandoMetaClass which allows you to dynamically add methods, constructors, properties and static methods... This difference is likely why default setting for mentioned inspection is opposite in Groovy. While in Java default "on" would be source of users confusion, in Groovy, an opposite setting could confuse IDE users. "Hey the method doesn't use instance fields, why didn't you warn me about it?" That question would be easy to answer for Java (as explained above), but for Groovy, there is just no compelling explanation. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/215826",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/57792/"
]
} |
215,877 | If I write something like this: var things = mythings
.Where(x => x.IsSomeValue)
.Where(y => y.IsSomeOtherValue) Is this the same as: var results1 = new List<Thing>();
foreach(var t in mythings)
if(t.IsSomeValue)
results1.Add(t);
var results2 = new List<Thing>();
foreach(var t in results1)
if(t.IsSomeOtherValue)
results2.Add(t); Or is there some magic under the covers that works more like this: var results = new List<Thing>();
foreach(var t in mythings)
if(t.IsSomeValue && t.IsSomeOtherValue)
results.Add(t); Or is it something completely different altogether? | LINQ queries are lazy . That means the code: var things = mythings
.Where(x => x.IsSomeValue)
.Where(y => y.IsSomeOtherValue); does very little. The original enumerable ( mythings ) is only enumerated when the resulting enumerable ( things ) is consumed, e.g. by a foreach loop, .ToList() , or .ToArray() . If you call things.ToList() , it is roughly equivalent to your latter code, with perhaps some (usually insignificant) overhead from the enumerators. Likewise, if you use a foreach loop: foreach (var t in things)
DoSomething(t); It is similar in performance to: foreach (var t in mythings)
if (t.IsSomeValue && t.IsSomeOtherValue)
        DoSomething(t); Some of the performance advantages of the laziness approach for enumerables (as opposed to calculating all the results and storing them in a list) are that it uses very little memory (since only one result is stored at a time) and that there's no significant up-front cost. If the enumerable is only partially enumerated, this is especially important. Consider this code: things.First(); The way LINQ is implemented, mythings will only be enumerated up to the first element that matches your where conditions. If that element is early on in the list, this can be a huge performance boost (e.g. O(1) instead of O(n)). A short sketch that makes the deferred execution visible follows below. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/215877",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/66745/"
]
} |
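One way to observe the laziness described above is to put a side effect inside the predicate. This is a self-contained sketch; the Thing class and the values are made up for the illustration.
using System;
using System.Collections.Generic;
using System.Linq;

public class Thing
{
    public int Value { get; set; }
}

public static class LazinessDemo
{
    public static void Main()
    {
        var mythings = new List<Thing>
        {
            new Thing { Value = 1 },
            new Thing { Value = 2 },
            new Thing { Value = 3 }
        };

        // Building the query prints nothing: the list is not enumerated yet.
        var things = mythings.Where(t =>
        {
            Console.WriteLine("checking " + t.Value);
            return t.Value > 1;
        });

        Console.WriteLine("query built, not yet executed");

        // First() enumerates only until a match is found, so this prints
        // "checking 1", "checking 2" and then stops.
        var first = things.First();
        Console.WriteLine("first match: " + first.Value);
    }
}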
216,252 | Currently I have a command-line application in C called btcwatch . It has a -C option that it can receive as an argument that compares the current price of Bitcoin with a price that was stored beforehand with -S . Example output with this option is: $ btcwatch -vC # -v = verbose
buy: UP $ 32.000000 USD (100.000000 -> 132.000000)
sell: UP $ 16.000000 USD (100.000000 -> 116.000000) The dilemma is whether to use colour for the UP or DOWN string (green and red, respectively). Most command-line applications I know of (apart from git) stay away from colour in their output. In my desire for btcwatch to look and be quite "standard" (use of getopt , Makefiles, etc), I'm not sure if colour would look out of place in this situation. | The appropriate thing to do is to make the coloring optional, default to "off" and control it via a command-line flag. That way, people who don't like it or whose terminal doesn't support it aren't affected, people who like it can use it, and people who really, really like it can define an alias or shortcut to predefine the option. Everybody's happy. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/216252",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/59328/"
]
} |
216,289 | At a new job, I've been getting flagged in code reviews for code like this: PowerManager::PowerManager(IMsgSender* msgSender)
: msgSender_(msgSender) { }
void PowerManager::SignalShutdown()
{
msgSender_->sendMsg("shutdown()");
} I'm told that last method should read: void PowerManager::SignalShutdown()
{
if (msgSender_) {
msgSender_->sendMsg("shutdown()");
}
} i.e., I must put a NULL guard around the msgSender_ variable, even though it is a private data member. It's difficult for me to restrain myself from using expletives to describe how I feel about this piece of 'wisdom'. When I ask for an explanation, I get a litany of horror stories about how some junior programmer, some-year, got confused about how a class was supposed to work and accidentally deleted a member he shouldn't have (and set it to NULL afterwards, apparently), and things blew up in the field right after a product release, and we've "learned the hard way, trust us" that it's better to just NULL check everything . To me, this feels like cargo cult programming , plain and simple. A few well-meaning colleagues are earnestly trying to help me 'get it' and see how this will help me write more robust code, but... I can't help feeling like they're the ones who don't get it. Is it reasonable for a coding standard to require that every single pointer dereferenced in a function be checked for NULL first—even private data members? (Note: To give some context, we make a consumer electronics device, not an air traffic control system or some other 'failure-equals-people-die' product.) EDIT : In the above example, the msgSender_ collaborator isn't optional. If it's ever NULL , it indicates a bug. The only reason it is passed into the constructor is so PowerManager can be tested with a mock IMsgSender subclass. SUMMARY : There were some really great answers to this question, thanks everyone. I accepted the one from @aaronps chiefly due to its brevity. There seems to be fairly broad general agreement that: Mandating NULL guards for every single dereferenced pointer is overkill, but You can side-step the whole debate by using a reference instead (if possible) or a const pointer, and assert statements are a more enlightened alternative to NULL guards for verifying that a function's preconditions are met. | It depends on the 'contract': If PowerManager MUST have a valid IMsgSender , never check for null, let it die sooner. If on the other hand, it MAY have a IMsgSender , then you need to check every time you use, as simple as that. Final comment about the story of the junior programmer, the problem is actually the lack of testing procedures. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/216289",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1326/"
]
} |
216,371 | Let's say I have a function IsAdmin that checks whether a user is an admin. Let's also say that the admin checking is done by matching user id, name and password against some sort of rule (not important). In my head there are then two possible function signatures for this: public bool IsAdmin(User user);
public bool IsAdmin(int id, string name, string password); I most often go for the second type of signature, thinking that: The function signature gives the reader a lot more info The logic contained inside the function doesn't have to know about the User class It usually results in slightly less code inside the function However I sometimes question this approach, and also realize that at some point it would become unwieldy. If for example a function would map between ten different object fields into a resulting bool I would obviously send in the entire object. But apart from a stark example like that I can't see a reason to pass in the actual object. I would appreciate any arguments for either style, as well as any general observations you might offer. I program in both object oriented and functional styles, so the question should be seen as regarding any and all idioms. | I personally prefer the first method of just IsAdmin(User user) It's much easier to use, and if your criteria for IsAdmin changes at a later date (perhaps based on roles, or isActive), you don't need to rewrite your method signature everywhere. It's also probably more secure as you aren't advertising what properties determine if a user is an Admin or not, or passing around the password property everywhere. And syrion makes a good point that what happens when your id doesn't match the name / password ? The length of code inside a function shouldn't really matter providing the method does its job, and I'd much rather have shorter and simpler application code than helper method code. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/216371",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/42503/"
]
} |
216,406 | I get that /dev/random is a good source of entropy, and is what is usually used-- It's just as I'm reading up on GC, at least in Java, it seems accepted that the garbage collection daemon executes non-deterministically. If this it true, why don't we use the timing of the garbage collection as a source of entropy instead of the variable /dev/random? | "Unspecified" and "random" are two entirely different concepts. The exact workings of a garbage collector are not specified and are up to the garbage collector (usually implemented by a VM of sorts, but not necessarily). Therefore, you have no specified (i.e. deterministic) time at which garbage will be collected. However any given implementation will follow some rules and there is a high chance that two subsequent runs of the same program will have very similar garbage collection patterns. Therefore the actual entropy provided by a garbage collector would be very low (and finding out which parts you can actually use as entropy will be tricky). As a comparison: A HashMap in Java doesn't guarantee any order of retrieval for its members (basically because guaranteeing it would add an overhead that's not worth paying, most of the time). However for a given implementation and a given set of insertions/removals you can definitely calculate the resulting order. Just because there is no guarantee for any given order, doesn't mean that the order is random. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/216406",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/48999/"
]
} |
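A toy C# experiment of my own (not from the answer) that hints at why the usable entropy would be so low: time a series of forced collections and see how tightly the durations cluster.

using System;
using System.Collections.Generic;
using System.Diagnostics;

class GcTimingToy
{
    static void Main()
    {
        var timings = new List<long>();
        for (int i = 0; i < 20; i++)
        {
            var garbage = new byte[1024 * 1024];   // allocate something to collect
            garbage = null;
            var sw = Stopwatch.StartNew();
            GC.Collect();                          // force a collection and time it
            sw.Stop();
            timings.Add(sw.ElapsedTicks);
        }
        Console.WriteLine(string.Join(", ", timings));   // the values vary, but within a narrow band
    }
}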
216,429 | Is it better to have constructors with or without parameters and why? public NewClass( String a, String b, int c) throws IOException
{
//something
} OR public NewClass()
{
//something
} | A constructor should establish the initial invariant of your object, that is, put it in a valid and usable state. If your object is not really usable as an instance of the type it is after construction, it's a sign that you've got a bit of a smear between initialization of the object and use of the object. If it's impossible to provide all the information needed up-front to construct your object properly, you may want to consider some sort of builder to gather state incrementally before instantiating the object. In general, zombie-type objects which have initialization after construction and invalidation before disposal tend to be error-prone, particularly if there is no language support for it, leaving you to enforce the concepts in documentation and assertions. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/216429",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
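A minimal C# sketch of the builder idea the answer mentions; the class and its fields are invented purely for illustration:

using System;

public class Report
{
    public string Source { get; }
    public string Target { get; }

    private Report(string source, string target)
    {
        Source = source;
        Target = target;
    }

    public class Builder
    {
        private string _source;
        private string _target;

        public Builder WithSource(string source) { _source = source; return this; }
        public Builder WithTarget(string target) { _target = target; return this; }

        public Report Build()
        {
            // The invariant is checked once, here, so a Report can never
            // exist in a half-initialized "zombie" state.
            if (string.IsNullOrEmpty(_source) || string.IsNullOrEmpty(_target))
                throw new InvalidOperationException("Source and target are required.");
            return new Report(_source, _target);
        }
    }
}

State is gathered incrementally on the builder; the constructed object itself is valid and usable from the moment it exists.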
216,460 | I'm building a WPF application using the MVVM pattern. Right now, my viewmodels calls the service layer to retrieve models (how is not relevant to the viewmodel) and convert them to viewmodels. I'm using constructor injection to pass the service required to the viewmodel. It's easily testable and works well for viewmodels with few dependencies, but as soon as I try to create viewModels for complex models, I have a constructor with a LOT of services injected in it (one to retrieve each dependencies and a list of all available values to bind to an itemsSource for example). I'm wondering how to handle multiple services like that and still have a viewmodel that I can unit test easily. I'm thinking of a few solutions: Creating a services singleton (IServices) containing all the available services as interfaces. Example: Services.Current.XXXService.Retrieve(), Services.Current.YYYService.Retrieve(). That way, I don't have a huge constructor with a ton of services parameters in them. Creating a facade for the services used by the viewModel and passing this object in the ctor of my viewmodel. But then, I'll have to create a facade for each of my complexe viewmodels, and it might be a bit much... What do you think is the "right" way to implement this kind of architecture ? | In fact, both of these solutions are bad. Creating a services singleton (IServices) containing all the available services as interfaces. Example: Services.Current.XXXService.Retrieve(), Services.Current.YYYService.Retrieve(). That way, I don't have a huge constructor with a ton of services parameters in them. This is essentially the Service Locator Pattern , which is an anti-pattern. If you do this, you will no longer be able to understand what the view model actually depends on without looking at its private implementation, which will make it very difficult to test or refactor. Creating a facade for the services used by the viewModel and passing this object in the ctor of my viewmodel. But then, I'll have to create a facade for each of my complexe viewmodels, and it might be a bit much... This isn't so much an anti-pattern but it is a code smell. Essentially you're creating a parameter object , but the point of the PO refactoring pattern is to deal with parameter sets that are used frequently and in a lot of different places , whereas this parameter would only ever be used once. As you mention, it would create a lot of code bloat for no real benefit, and wouldn't play nice with a lot of IoC containers. In fact, both of the above strategies are overlooking the overall issue, which is that coupling is too high between view models and services . Simply hiding these dependencies in a service locator or parameter object does not actually change how many other objects the view model depends on. Think of how you would unit-test one of these view models. How big is your setup code going to be? How many things need to be initialized in order for it to work? A lot of people starting out with MVVM try to create view models for an entire screen , which is fundamentally the wrong approach. MVVM is all about composition , and a screen with many functions should be composed of several different view models, each of which depends on only one or a few internal models/services. If they need to communicate with each other, you do so via pub/sub (message broker, event bus, etc.) What you actually need to do is refactor your view models so that they have fewer dependencies . 
Then, if you need to have an aggregate "screen", you create another view model to aggregate the smaller view models. This aggregate view model doesn't have to do very much by itself, so it in turn is also fairly easy to understand and test. If you've done this properly, it should be obvious just from looking at the code, because you'll have short, succinct, specific, and testable view models. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/216460",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/106853/"
]
} |
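A rough C# sketch of the composition the answer describes — every name here is invented; the point is that each small view model takes only the service it needs, and the screen-level view model merely aggregates them:

public interface IOrderService { }
public interface ICustomerService { }

public class OrderListViewModel
{
    private readonly IOrderService _orders;
    public OrderListViewModel(IOrderService orders) { _orders = orders; }
}

public class CustomerDetailsViewModel
{
    private readonly ICustomerService _customers;
    public CustomerDetailsViewModel(ICustomerService customers) { _customers = customers; }
}

// The "screen" view model composes the small ones instead of taking
// every service in one huge constructor.
public class CustomerScreenViewModel
{
    public OrderListViewModel Orders { get; }
    public CustomerDetailsViewModel Details { get; }

    public CustomerScreenViewModel(OrderListViewModel orders, CustomerDetailsViewModel details)
    {
        Orders = orders;
        Details = details;
    }
}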
216,597 | Can anyone explain me what byte stream actually contains? Does it contain bytes (hex data) or binary data or english letters only? I am also confused about the term "raw data". If someone asked me to "reverse the 4 byte data", then what should I assume the data is hex code or binary code? | Byte streams contain, well, bytes. Broken down into what it is actually, it is 8 bits composed of 1s and 0s. If it were representing a number, it would be any number from 0 to 255 (which, I may add, is no coincidence why the 4 numbers in an IP address always range from 0 to 255). Byte streams are usually sophisticated interfaces meant to hide the underlying basic byte array used to hold a circular buffer (you fill up the buffer and wait for someone to empty it, at which time it simply fills up the buffer again). What the heck does that represent? Well, it could represent a text file, or an image, or a live video stream. What it is is entirely dependent upon the context of who is reading it. Hex representation is another way of saying the same thing, though it is sometimes more convenient to manage bytes in terms of their hex representation rather than numbers however it is the same thing. When you're referring to raw data, you are usually referring to byte data. The data comes without a tag saying "I am an image file!" Usually you only deal with raw data when you don't really care what the data represents overall. For example, if I wanted to convert an image to its black and white version, I might say to read an image's raw data and for every 3 bytes read (which would actually be representation of red color, representation of green color, and representation of blue color), add its number value and divide by 3, then write that value 3 times. Essentially what I'd be doing is averaging a pixel's red, green, and blue values and making its gray equivalent pixel from that. However, when you talk about performing operations to data at the level of "byte by byte", you don't really care about the big picture, so to speak. Or, perhaps you wish to save a file in a database, but it asks you to insert its "raw data" in a blob data type. This simply means to convert the data of a file into a large byte array that the database can understand and manage. You'll find that when you retrieve that value from the database, it will be simply one large byte array as you initially provided to the database to begin with. If that data was a file, then you, the programmer, must reinterpret that byte data as if you were reading a file one byte at a time. If someone asked you to "reverse the 4 byte data", I would assume it refers to big-endian vs little-endian interpretation of numbers, which writes numbers starting with the most or least significant byte. It does not matter if a number is represented as big-endian or little-endian, just that all systems reading the number interpret it consistently. This isn't to say that the actual number representation (or hex representation for that matter) is changed, simply that the order in which these 4 bytes make a number should be reversed. So say you have 0x01, 0x02, 0x03, and 0x04. To reverse these, you'd have 0x04, 0x03, 0x02, 0x01 instead. The system would presumably read these 4 bytes in the reverse order and since you've already reversed it, the value is interpreted to be the very same as what was intended in the raw data. I hope that explains it! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/216597",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/105543/"
]
} |
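The grayscale illustration from the answer, written out as a small C# sketch; it assumes the raw bytes are tightly packed R,G,B triplets with no header, which real image files would not be:

static class Grayscale
{
    public static byte[] ToGray(byte[] rawPixels)
    {
        var result = new byte[rawPixels.Length];
        for (int i = 0; i + 2 < rawPixels.Length; i += 3)
        {
            // Average the red, green and blue bytes and write the value back
            // three times, exactly as the answer describes.
            byte avg = (byte)((rawPixels[i] + rawPixels[i + 1] + rawPixels[i + 2]) / 3);
            result[i] = avg;
            result[i + 1] = avg;
            result[i + 2] = avg;
        }
        return result;
    }
}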
216,605 | I'm diving deeper into developing RESTful APIs and have so far worked with a few different frameworks to achieve this. Of course I've run into the same-origin policy, and now I'm wondering how web servers (rather than web browsers) enforce it. From what I understand, some enforcing seems to happen in the browser's end (e.g., honoring an Access-Control-Allow-Origin header received from a server). But what about the server? For example, let's say a web server is hosting a Javascript web app that accesses an API, also hosted on that server. I assume that the server would enforce the same-origin policy --- so that only the javascript that is hosted on that server would be allowed to access the API. This would prevent someone else from writing a javascript client for that API and hosting it on another site, right? So how would a web server be able to stop a malicious client that would try to make AJAX requests to its API endpoints while claiming to be running javascript that originated from that same web server? What's the way most popular servers (Apache, nginx) protect against this kind of attack? Or is my understanding of this somehow off the mark? Or is the cross-origin policy only enforced on the client end? | The same origin policy is a wholly client-based restriction, and is primarily engineered to protect users , not services . All or most browsers include a command-line switch or configuration option to to turn it off. The SOP is like seat belts in a car: they protect the rider in the car, but anyone can freely choose not to use them. Certainly don't expect a person's seat belt to stop them from getting out of their car and attacking you (or accessing your Web service). Suppose I write a program that accesses your Web service. It's just a program that sends TCP messages that include HTTP requests. You're asking for a server-side mechanism to distinguish between requests made by my program (which can send anything) and requests made by a browser that has a page loaded from a permitted origin. It simply can't be done; my program can always send a request identical to one formed by a Web page. The same-origin policy was invented because it prevents code from one website from accessing credential-restricted content on another site. Ajax requests are by default sent with any auth cookies granted by the target site. For example, suppose I accidentally load http://evil.com/ , which sends a request for http://mail.google.com/ . If the SOP were not in place, and I was signed into Gmail, the script at evil.com could see my inbox. If the site at evil.com wants to load mail.google.com without my cookies, it can just use a proxy server; the public contents of mail.google.com are not a secret (but the contents of mail.google.com when accessed with my cookies are a secret). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/216605",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/103795/"
]
} |
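To make the "client-side only" point concrete, here is a hedged C# sketch: a plain program can claim any Origin it likes and simply ignore whatever CORS headers come back, so the server can never rely on the same-origin policy for protection (the URLs are placeholders):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class NotABrowser
{
    static async Task Main()
    {
        using var client = new HttpClient();
        var request = new HttpRequestMessage(HttpMethod.Get, "https://api.example.com/data");
        // A browser sets Origin itself and then honours the CORS response;
        // this program claims an arbitrary origin and ignores the response headers.
        request.Headers.TryAddWithoutValidation("Origin", "https://trusted-site.example");
        var response = await client.SendAsync(request);
        Console.WriteLine(response.StatusCode);
    }
}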
216,798 | I am trying to understand the point of OData and when it would make sense. Right now how I work is I use ASP.NET and an MVC/WebApi controller to serialize/deserialize objects into JSON and have JavaScript do something with it. From what I can tell the benefit of OData is being able to query directly from the URL ... But since I am writing the client and server code there is no need for that. Would anyone ever parse the results of an OData query in JavaScript? Maybe OData is more about providing a generic endpoint for ALL clients to get detailed information from a query that JSON does not provide? So if I was a provider of data then I suppose that is what OData is for? Help me understand the purpose and use of REST/JSON/OData. | JSON is just a data-interchange format based on JavaScript. REST is an architectural style, whereas OData is a specific implementation of REST designed to generate and consume data, which supports two formats, AtomPub and JSON. So the difference between JSON with plain REST and OData is the set of options OData offers for data manipulation, e.g., if we query data using the OData protocol, we can specify the following options in the URI: $orderby, $top, $skip, $filter, $format, $select. We can do projection, link the resources, etc., and all these options are available out of the box. Now imagine if we had to provide all these features in our own REST service; then we would have to implement all of them and create our own conventions/keywords for the different operations. That is not only a lot of work but also leads to inconsistencies and creates a learning curve for our data consumers. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/216798",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/51785/"
]
} |
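For a concrete feel of the options listed in the answer, a typical OData request against a hypothetical Products collection could look like this (the resource and property names are made up):

GET /odata/Products?$filter=Price gt 10&$orderby=Name desc&$top=5&$skip=10&$select=Name,Price

Filtering, ordering, paging and projection all come from the protocol itself rather than from hand-written query-string conventions.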
216,998 | You maintain an existing application with an established user base. Over time it is decided that the current password hashing technique is outdated and needs to be upgraded. Furthermore, for UX reasons, you don't want existing users to be forced to update their password. The whole password hashing update needs to happen behind the screen. Assume a 'simplistic' database model for users that contains: ID Email Password How does one go around to solving such a requirement? My current thoughts are: create a new hashing method in the appropriate class update the user table in the database to hold an additional password
field Once a user successfully logs in using the outdated password hash, fill the second password field with the updated hash This leaves me with the problem that I cannot reasonable differentiate between users who have and those who have not updated their password hash and thus will be forced to check both. This seems horribly flawed. Furthermore this basically means that the old hashing technique could be forced to stay indefinitely until every single user has updated their password. Only at that moment could I start removing the old hashing check and remove the superfluous database field. I'm mainly looking for some design tips here, since my current 'solution' is dirty, incomplete and what not, but if actual code is required to describe a possible solution, feel free to use any language. | I would suggest adding a new field, "hash_method", with perhaps a 1 to signify the old method and a 2 to signify the new method. Reasonably speaking, if you care about this sort of thing and your application is relatively long-lived (which it apparently already is), this is probably going to happen again as cryptography and information security is such an evolving, rather unpredictable field. There was a time when a simple run through MD5 was standard, if hashing was used at all! Then one might think they should use SHA1, and now there's salting, global salt + individual random salt, SHA3, different methods of crypto-ready random number generation...this isn't going to just 'stop', so you might as well fix this in an extensible, repeatable way. So, lets say now you have something like (in pseudo-javascript for simplicity, I hope): var user = getUserByID(id);
var tryPassword = hashPassword(getInputPassword());
if (user.getPasswordHash() == tryPassword)
{
// Authenticated!
}
function hashPassword(clearPassword)
{
// TODO: Learn what "hash" means
return clearPassword + "H@$I-I";
} Now realizing there is a better method, you just need to give a minor refactoring: var user = getUserByID(id);
var tryPassword = hashPassword(getInputPassword(), user.getHashingMethod());
if (user.getPasswordHash() == tryPassword)
{
// Authenticated!
}
function hashPassword(clearPassword, hashMethod)
{
// Note: Hash doesn't mean what we thought it did. Oops...
var hash;
if (hashMethod == 1)
{
hash = clearPassword + "H@$I-I";
}
else if (hashMethod == 2)
{
// Totally gonna get it right this time.
hash = SuperMethodTheNSASaidWasAwesome(clearPassword);
}
return hash;
} No secret agents or programmers were harmed in the production of this answer. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/216998",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/33514/"
]
} |
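One step the pseudo-code above leaves implicit is the transparent upgrade itself: the only moment you hold the clear-text password is during a successful login, so that is when the stored hash can be re-computed and the method bumped. A C# sketch under that assumption — VerifyOldHash, VerifyNewHash, HashWithNewMethod and SaveUser are hypothetical stand-ins for whatever the application really uses:

public bool Login(User user, string clearPassword)
{
    // HashMethod == 1 is the legacy scheme, 2 the new one (mirrors the answer's hash_method column).
    bool ok = user.HashMethod == 1
        ? VerifyOldHash(clearPassword, user.PasswordHash)      // hypothetical helper
        : VerifyNewHash(clearPassword, user.PasswordHash);     // hypothetical helper

    if (ok && user.HashMethod == 1)
    {
        user.PasswordHash = HashWithNewMethod(clearPassword);  // hypothetical helper
        user.HashMethod = 2;
        SaveUser(user);                                        // hypothetical helper
    }
    return ok;
}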
218,011 | Situation Earlier this evening I gave an answer to a question on StackOverflow. The question: Editing of an existing object should be done in repository layer or in service? For example if I have a User that has debt. I want to change his debt. Should I do it in UserRepository or in service for example BuyingService by getting an object, editing it and saving it ? My answer: You should leave the responsibility of mutating an object to that same object and use the repository to retrieve this object. Example situation: class User {
private int debt; // debt in cents
private string name;
// getters
public void makePayment(int cents){
debt -= cents;
}
}
class UserRepository {
public User GetUserByName(string name){
// Get appropriate user from database
}
} A comment I received: Business logic should really be in a service. Not in a model. What does the internet say? So, this got me searching since I've never really (consciously) used a service layer.
I started reading up on the Service Layer pattern and the Unit Of Work pattern but so far I can't say I'm convinced a service layer has to be used. Take for example this article by Martin Fowler on the anti-pattern of an Anemic Domain Model: There are objects, many named after the nouns in the domain space, and these objects are connected with the rich relationships and structure that true domain models have. The catch comes when you look at the behavior, and you realize that there is hardly any behavior on these objects, making them little more than bags of getters and setters. Indeed often these models come with design rules that say that you are not to put any domain logic in the the domain objects. Instead there are a set of service objects which capture all the domain logic. These services live on top of the domain model and use the domain model for data. (...) The logic that should be in a domain object is domain logic - validations, calculations, business rules - whatever you like to call it. To me, this seemed exactly what the situation was about: I advocated the manipulation of an object's data by introducing methods inside that class that do just that. However I realize that this should be a given either way, and it probably has more to do with how these methods are invoked (using a repository). I also had the feeling that in that article (see below), a Service Layer is more considered as a façade that delegates work to the underlying model, than an actual work-intensive layer. Application Layer [his name for Service Layer]: Defines the jobs the software is supposed to do and directs the expressive domain objects to work out problems. The tasks this layer is responsible for are meaningful to the business or necessary for interaction with the application layers of other systems. This layer is kept thin. It does not contain business rules or knowledge, but only coordinates tasks and delegates work to collaborations of domain objects in the next layer down. It does not have state reflecting the business situation, but it can have state that reflects the progress of a task for the user or the program. Which is reinforced here : Service interfaces. Services expose a service interface to which all inbound messages are sent. You can think of a service interface as a façade that exposes the business logic implemented in the application (typically, logic in the business layer) to potential consumers. And here : The service layer should be devoid of any application or business logic and should focus primarily on a few concerns. It should wrap Business Layer calls, translate your Domain in a common language that your clients can understand, and handle the communication medium between server and requesting client. This is a serious contrast to other resources that talk about the Service Layer: The service layer should consist of classes with methods that are units of work with actions that belong in the same transaction. Or the second answer to a question I've already linked: At some point, your application will want some business logic. Also, you might want to validate the input to make sure that there isn't something evil or nonperforming being requested. This logic belongs in your service layer. "Solution"? Following the guidelines in this answer , I came up with the following approach that uses a Service Layer: class UserController : Controller {
private UserService _userService;
public UserController(UserService userService){
_userService = userService;
}
public ActionResult MakeHimPay(string username, int amount) {
_userService.MakeHimPay(username, amount);
return RedirectToAction("ShowUserOverview");
}
public ActionResult ShowUserOverview() {
return View();
}
}
class UserService {
private IUserRepository _userRepository;
public UserService(IUserRepository userRepository) {
_userRepository = userRepository;
}
public void MakeHimPay(username, amount) {
_userRepository.GetUserByName(username).makePayment(amount);
}
}
class UserRepository {
public User GetUserByName(string name){
// Get appropriate user from database
}
}
class User {
private int debt; // debt in cents
private string name;
// getters
public void makePayment(int cents){
debt -= cents;
}
} Conclusion All together not much has changed here: code from the controller has moved to the service layer (which is a good thing, so there is an upside to this approach). However this doesn't look like it had anything to do with my original answer. I realize design patterns are guidelines, not rules set in stone to be implemented whenever possible. Yet I have not found a definitive explanation of the service layer and how it should be regarded. Is it a means to simply extract logic from the controller and put it inside a service instead? Is it supposed to form a contract between the controller and the domain? Should there be a layer between the domain and the service layer? And, last but not least: following the original comment Business logic should really be in a service. Not in a model. Is this correct? How would I introduce my business logic in a service instead of the model? | In order to define what a service's responsibilities are, you first need to define what a service is. Service is not a canonical or generic software term. In fact, the suffix Service on a class name is a lot like the much-maligned Manager : It tells you almost nothing about what the object actually does . In reality, what a service ought to do is highly architecture-specific: In a traditional layered architecture, service is literally synonymous with business logic layer . It's the layer between UI and Data. Therefore, all business rules go into services. The data layer should only understand basic CRUD operations, and the UI layer should deal only with the mapping of presentation DTOs to and from the business objects. In an RPC-style distributed architecture (SOAP, UDDI, BPEL, etc.), the service is the logical version of a physical endpoint . It is essentially a collection of operations that the maintainer wishes to provide as a public API. Various best practices guides explain that a service operation should in fact be a business-level operation and not CRUD, and I tend to agree. However, because routing everything through an actual remote service can seriously hurt performance, it's normally best not to have these services actually implement the business logic themselves; instead, they should wrap an "internal" set of business objects. A single service might involve one or several business objects. In an MVP/MVC/MVVM/MV* architecture, services don't exist at all. Or if they do, the term is used to refer to any generic object that can be injected into a controller or view model. The business logic is in your model . If you want to create "service objects" to orchestrate complicated operations, that's seen as an implementation detail. A lot of people, sadly, implement MVC like this, but it's considered an anti-pattern ( Anemic Domain Model ) because the model itself does nothing, it's just a bunch of properties for the UI. Some people mistakenly think that taking a 100-line controller method and shoving it all into a service somehow makes for a better architecture. It really doesn't; all it does is add another, probably unnecessary layer of indirection. Practically speaking, the controller is still doing the work, it's just doing so through a poorly named "helper" object. I highly recommend Jimmy Bogard's Wicked Domain Models presentation for a clear example of how to turn an anemic domain model into a useful one. It involves careful examination of the models you're exposing and which operations are actually valid in a business context. 
For example, if your database contains Orders, and you have a column for Total Amount, your application probably shouldn't be allowed to actually change that field to an arbitrary value, because (a) it's history and (b) it's supposed to be determined by what's in the order as well as perhaps some other time-sensitive data/rules. Creating a service to manage Orders does not necessarily solve this problem, because user code can still grab the actual Order object and change the amount on it. Instead, the order itself should be responsible for ensuring that it can only be altered in safe and consistent ways. In DDD, services are meant specifically for the situation when you have an operation that doesn't properly belong to any aggregate root . You have to be careful here, because often the need for a service can imply that you didn't use the correct roots. But assuming you did, a service is used to coordinate operations across multiple roots, or sometimes to handle concerns that don't involve the domain model at all (such as, perhaps, writing information to a BI/OLAP database). One notable aspect of the DDD service is that it is allowed to use transaction scripts . When working on large applications, you're very likely to eventually run into instances where it's just way easier to accomplish something with a T-SQL or PL/SQL procedure than it is to fuss with the domain model. This is OK, and it belongs in a Service. This is a radical departure from the layered-architecture definition of services. A service layer encapsulates domain objects; a DDD service encapsulates whatever isn't in the domain objects and doesn't make sense to be. In a Service-Oriented Architecture, a service is considered to be the technical authority for a business capability. That means that it is the exclusive owner of a certain subset of the business data and nothing else is allowed to touch that data - not even to just read it. By necessity, services are actually an end-to-end proposition in an SOA. Meaning, a service isn't so much a specific component as an entire stack , and your entire application (or your entire business) is a set of these services running side-by-side with no intersection except at the messaging and UI layers. Each service has its own data, its own business rules, and its own UI. They don't need to orchestrate with each other because they are supposed to be business-aligned - and, like the business itself, each service has its own set of responsibilities and operates more or less independently of the others. So, by the SOA definition, every piece of business logic anywhere is contained within the service, but then again, so is the entire system . Services in an SOA can have components , and they can have endpoints , but it's fairly dangerous to call any piece of code a service because it conflicts with what the original "S" is supposed to mean. Since SOA is generally pretty keen on messaging, the operations that you might have packaged in a service before are generally encapsulated in handlers , but the multiplicity is different. Each handler handles one message type, one operation. It's a strict interpretation of the Single Responsibility Principle , but makes for great maintainability because every possible operation is in its own class. So you don't really need centralized business logic, because commands represents business operations rather than technical ones. Ultimately, in any architecture you choose, there is going to be some component or layer that has most of the business logic. 
After all, if business logic is scattered all over the place then you just have spaghetti code. But whether or not you call that component a service , and how it's designed in terms of things like number or size of operations, depends on your architectural goals. There's no right or wrong answer, only what applies to your situation. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/218011",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/82513/"
]
} |
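The Orders example in the answer, sketched in C# to contrast an anemic model with an encapsulated one (the classes are invented for illustration):

using System.Collections.Generic;

// Anemic: any caller can set Total to an arbitrary, inconsistent value.
public class AnemicOrder
{
    public decimal Total { get; set; }
    public List<OrderLine> Lines { get; set; } = new List<OrderLine>();
}

// Encapsulated: the order itself guarantees that Total always matches its lines.
public class Order
{
    private readonly List<OrderLine> _lines = new List<OrderLine>();
    public decimal Total { get; private set; }

    public void AddLine(OrderLine line)
    {
        _lines.Add(line);
        Total += line.Price * line.Quantity;
    }
}

public class OrderLine
{
    public decimal Price { get; set; }
    public int Quantity { get; set; }
}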
218,080 | I'm currently implementing an HTTP API, my first ever. I've been spending a lot of time looking at the Wikipedia page for HTTP status codes, because I'm determined to implement the right codes for the right situations. Listed on that page is a code with number 420, which is a custom code that Twitter used to use for rate limiting. There is already a code for rate limiting, though. It's 429. This led me to wonder why they would set a custom one, when there is already a use case. Is that just being cute? And if so, then which circumstances would make it acceptable to return a different status code, and what, if any problems may clients have with it? I read somewhere that Mozilla doesn't implement the joke 418: I’m a teapot response, which makes me think that clients choose which status codes they implement. If that's true, then I can imagine Twitter's funny little enhance your calm code being problematic. Unless I'm mistaken, and we can appropriate any code number to mean whatever we like, and that only convention dictates that 404 means not found, and 429 means take it easy. | The whole of the Internet is built on conventions. We call them RFCs. While nobody will come and arrest you if you violate an RFC, you do run the risk that your service will not interoperate with the rest of the world. And if that happens, you run the risk of your startup not getting any customers, your business getting bad press, your stockholders revolting, your getting laid off permanently, etc. HTTP status codes have their own IANA registry , each one traceable back to the RFC (or in one case, I-D) that defined it. In the particular case of Twitter's strange 420 status code versus the standard 429 status code defined in RFC 6585 , the most likely explanation is that the latter was only recently defined; the RFC dates to April 2012. We see that Twitter only uses 420 in the previous deprecated version 1 of its API; the current API version 1.1 actually uses the 429 status code . So it's clear that Twitter needed a status code for this and defined their own; once a standard one was available they switched to it. Best practice, of course, is to stick as closely to the standards as possible. When you read RFCs, you will almost always find words like "MUST" and "SHOULD"; these have specific meanings when you are building your application, which you can find in RFC 2119 . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/218080",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/86693/"
]
} |
218,306 | In my education I have been told that it is a flawed idea to expose actual primary keys (not only DB keys, but all primary accessors) to the user. I always thought it to be a security problem (because an attacker could attempt to read stuff not their own). Now I have to check if the user is allowed to access anyway, so is there a different reason behind it? Also, as my users have to access the data anyway I will need to have a public key for the outside world somewhere in between. Now that public key has the same problems as the primary key, doesn't it? There has been a request for an example of why one would do this anyway, so here is one.
Keep in mind that the question is meant to be about the principle itself, not only whether it applies in this example. Answers addressing other situations are explicitly welcome. An application (Web, mobile) that handles activity, has multiple UIs and at least one automated API for intersystem communication (e.g. the accounting department wants to know how much to charge the customer based on what has been done).
The Application has multiple customers, so separation of their data (logically, the data is stored in the same DB) is a must-have of the system. Each request will be checked for validity no matter what. Activity is very fine-grained, so it is grouped together in some container object, let's call it "Task". Three use cases: User A wants to send User B to some Task, so he sends him a link (HTTP) to get some Activity done there. User B has to go outside the building, so he opens the Task on his mobile device. Accounting wants to charge the customer for the Task, but uses a third-party accounting system that automatically loads the Task / Activity by some code that refers to the REST API of the Application. Each of the use cases requires (or is made easier by) the agent having some addressable identifier for the Task and the Activity. | Also, as my users have to access the data anyway I will need to have a public key for the outside world somewhere in between. Exactly. Take stateless HTTP, which would otherwise not know what resource it should request: it exposes your question's ID 218306 in the URL. Perhaps you're actually wondering whether an exposed identifier may be predictable? The only places where I've heard a negative answer to that used the rationale: "But they can change the ID in the URL!". So they used GUIDs instead of implementing proper authorization. I can imagine one situation where you don't want your identifiers to be predictable: resource harvesting. If you have a site that publicly hosts certain resources others may be interested in, and you host them like /images/n.jpg or /videos/n.mp4 where n is just an incrementing number, anyone looking at traffic to and from your website can harvest all your resources. So, to directly answer your question: no, it is not bad to directly "expose" identifiers that only have meaning to your program; usually it is even required for your program to successfully operate. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/218306",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/36528/"
]
} |
218,331 | choosealicense.com claims that the MIT license is "A permissive license that is short and to the point. It lets people do anything with your code with proper attribution and without warranty" (emphasis mine). Reading the license, though, I don't see anything claiming that attribution to the original author has to be anywhere, so where are they getting that from? | The fourth paragraph says that the copyright notice in the second paragraph must be reproduced. Users of the licence replace the [fullname] placeholder with their actual name. That is what constitutes "proper attribution" in the eyes of MIT: every user of the software can find out who wrote it if they want to. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/218331",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/94821/"
]
} |
218,388 | I am a software engineer at a medium sized company. We have a fairly robust testing platform running on TeamCity. It does unit tests on every checkin, and a daily unit test/BVT run. The problem is - we have a great deal of broken unit tests. Quite often, I bring up the pointlessness of unit tests if they are constantly breaking and unmaintained. Being unable to see if a change has caused a regression removes most of the value of a unit testing platform. I would like to get a seed planted that will create a culture of good habits - fixing tests when they're broken, seeing them as valuable, prioritizing the fixing of tests along with other work. I've tried bribery (baked goods!), just plain asking, and speaking to team leads. Everyone says that it's a good idea, but I see to be the only one doing anything about it. What is the best way to get started on encouraging others to fix their tests, and prioritize test fixing within their sprints? If there is a less subjective way to ask this, I would be happy to accept any tips. | Make it so that's impossible to actually release anything without fixing the tests. Fail the build if any tests fail. Fail the build if any tests are ignored. Fail the build if test coverage goes below a certain level (so people can't just delete tests to work around it). Use the CI server to do your release builds, and only allow builds from the server's build drop to be promoted to UAT/staging/production/whatever. The fact of the matter is, if your build is broken for more than about 15 minutes at a time (and that includes failing tests), then you aren't doing continuous integration . The "nuclear option" is to have your source control server refuse commits/checkins from any user other than the one who broke the build. Obviously an admin needs to be able to override this temporarily if said person goes on holiday - but, if everybody knows that the whole team is screwed until they fix their tests, then they'll resolve it damn quick. A good policy (which is even better when it's automated) is to revert the source to the last known stable commit after 15 minutes of the build failing. In other words, if you can't fix it, or don't know what caused the build or test to break, then revert it and work locally until it's resolved - never ever make other developers twiddle their thumbs while you grind away at a problem they don't care about. P.S. If you already have a lot of tests failing, you can use a "trailing threshold" in CI. Set it up so that the build only fails if there are more test failures than last time. This, along with a coverage rule, will force developers to eventually improve the test situation if they want to be able to keep working. P.P.S. I realize this might seem draconian to some, but it's all down your culture. If you get to a point where people just don't leave the build broken or tests failing (my team almost never does, although I occasionally have to remind them), then you don't need to continue with the strictest set of rules. Although IMO you should always fail the build on a broken unit test. Integration/browser tests can fail sometimes. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/218388",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/97843/"
]
} |
218,458 | I've always liked the idea of having multiple inheritance supported in a language. Most often though it's intentionally forgone, and the supposed "replacement" is interfaces. Interfaces simply do not cover all the same ground multiple inheritance does, and this restriction can occasionally lead to more boilerplate code. The only basic reason I have ever heard for this is the diamond problem with base classes. I just can't accept that. To me, it comes off an awful lot like, "Well, it's possible to screw it up, so it's automatically a bad idea." You can screw up anything in a programming language though, and I mean anything. I just cannot take this seriously, at least not without a more thorough explanation. Just being aware of this problem is 90% of the battle. Furthermore I think I heard something years ago about a general-purpose work-around involving an "envelope" algorithm or something like that (does this ring a bell, anyone?). Concerning the diamond problem, the only potentially genuine problem I can think of is if you're trying to use a third-party library and can't see that two seemingly unrelated classes in that library have a common base class, but in addition to documentation, a simple language feature could, let's say, require that you specifically declare your intent to create a diamond before it'll actually compile one for you. With such a feature, any creation of a diamond is either intentional, reckless, or because one is unaware of this pitfall. So that all being said...Is there any real reason most people hate multiple inheritance, or is it all just a bunch of hysteria that causes more harm than good? Is there something that I am not seeing here? Thank you. Example Car extends WheeledVehicle, KIASpectra extends Car and Electronic, KIASpectra contains Radio. Why doesn't KIASpectra contain Electronic? Because it is an Electronic. Inheritance vs. composition should always be an is-a relationship vs. a has-a relationship. Because it is an Electronic. There are wires, circuit boards, switches, etc. all up and down that thing. Because it is an Electronic. If your battery goes dead in the winter, you're in just as much trouble as if all your wheels suddenly went missing. Why not use interfaces? Take #3, for instance. I don't want to write this over and over again, and I really don't want to create some bizarre proxy helper class to do this either: private void runOrDont()
{
if (this.battery)
{
if (this.battery.working && this.switchedOn)
{
this.run();
return;
}
}
this.dontRun();
} (We're not getting into whether that implementation is good or bad.) You can imagine how there may be several of these functions associated with Electronic that are not related to anything in WheeledVehicle, and vice-versa. I wasn't sure whether to settle down on that example or not, since there is room for interpretation there. You could also think in terms of Plane extending Vehicle and FlyingObject and Bird extending Animal and FlyingObject, or in terms of a much purer example. | In many cases, people use inheritance to provide a trait to a class. For example think of a Pegasus. With multiple inheritance you might be tempted to say the Pegasus extends Horse and Bird because you've classified the Bird as an animal with wings. However, Birds have other traits that Pegasi don't. For example, birds lay eggs, Pegasi have live birth. If inheritance is your only means of passing sharing traits then there's no way to exclude the egg laying trait from the Pegasus. Some languages have opted to make traits an explicit construct within the language. Other's gently guide you in that direction by removing MI from the language. Either way, I can't think of a single case where I thought "Man I really need MI to do this properly". Also let's discuss what inheritance REALLY is. When you inherit from a class, you take a dependency on that class, but also you have to support the contracts that class supports, both implicit and explicit. Take the classic example of a square inheriting from a rectangle. The rectangle exposes a length and width property and also a getPerimeter and getArea method. The square would override length and width so that when one is set the other is set to match getPerimeter and getArea would work the same (2*length+2*width for perimeter and length*width for area). There is a single test case that breaks if you substitute this implementation of a square for a rectangle. var rectangle = new Square();
rectangle.length= 5;
rectangle.width= 6;
Assert.AreEqual(30, rectangle.GetArea());
//Square returns 36 because setting the width clobbers the length It's tough enough to get things right with a single inheritance chain. It gets even worse when you add another to the mix. The pitfalls I mentioned with the Pegasus in MI and the Rectangle/Square relationships are both the results of a inexperienced design for classes. Basically avoiding multiple inheritance is a way to help beginning developers avoid shooting themselves in the foot. Like all design principles, having discipline and training based on them allows you to in time discover when it's okay to break from them. See the Dreyfus Model of Skill Acquisition , at the Expert level, your intrinsic knowledge transcends reliance on maxims/principles. You can "feel" when a rule doesn't apply. And I do agree that I somewhat cheated with a "real world" example of why MI is frowned upon. Let's look at a UI framework. Specifically let's look at a few widgets that might at first brush look like they are simply a combination of two others. Like a ComboBox. A ComboBox is a TextBox that has a supporting DropDownList. I.e. I can type in a value, or I can select from a pre-ordained list of values. A naive approach would be to inherit the ComboBox from TextBox and DropDownList. But your Textbox derives its value from what the user has typed. While the DDL gets its value from what the user selects. Who takes precedent? The DDL might have been designed to verify and reject any input that wasn't in its original list of values. Do we override that logic? That means we have to expose the internal logic for inheritors to override. Or worse, add logic to the base class that is only there in order to support a subclass (violating the Dependency Inversion Principle ). Avoiding MI helps you sidestep this pitfall altogether. And might lead to you extracting common, reusable traits of your UI widgets so that they can be applied as needed. An excellent example of this is the WPF Attached Property which allows a framework element in WPF to provide a property that another framework element can use without inheriting from the parent framework element. For example a Grid is a layout panel in WPF and it has Column and Row attached properties that specify where a child element should be placed in the grid's arrangement. Without attached properties, if I want to arrange a Button within a Grid, the Button would have to derive from Grid so it could have access to the Column and Row properties. Developers took this concept further and used attached properties as a way of componentizing behavior (for example here is my post on making a sortable GridView using attached properties written before WPF included a DataGrid). The approach has been recognized as a XAML Design Pattern called Attached Behaviors . Hopefully this provided a little more insight on why Multiple Inheritance is typically frowned upon. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/218458",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/100669/"
]
} |
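For the KIASpectra example in the question, a common single-inheritance shape is one inheritance chain plus an interface whose behaviour comes from composition — a C# sketch, not the only possible design:

public interface IElectronic
{
    bool SwitchedOn { get; }
    bool BatteryWorking { get; }
}

public class WheeledVehicle { /* wheels, steering, ... */ }
public class Car : WheeledVehicle { }

public class Battery
{
    public bool Working { get; set; } = true;
}

public class KiaSpectra : Car, IElectronic
{
    private readonly Battery _battery = new Battery();   // composed, not inherited
    public bool SwitchedOn { get; set; }
    public bool BatteryWorking => _battery.Working;
}

public static class ElectronicExtensions
{
    // Shared behaviour such as the question's runOrDont can be written once
    // against the interface instead of being duplicated per class.
    public static bool CanRun(this IElectronic e) => e.BatteryWorking && e.SwitchedOn;
}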
218,489 | I'm an extremely novice web programmer working at a 2 person shop building websites, and we're in the process of writing a simple website template system in PHP from scratch. My manager is entirely self taught. He has been insisting that we put our HTML directly quoted in variables, mixing our back end code with the front end stuff. $foo = "<html>
<body>
<div class=\"bar\">" . $bar . "</div>
</body>
</html>";
echo $foo; He claims that it is easier to work with this way. I insisted that we should at least use PHP tags in our HTML, rather than the HTML in the PHP. <html>
<body>
<div class="bar"><?php echo $bar; ?></div>
</body>
</html> I feel like this is a real pain in the ass to read and it just looks like a mess. I think it would be a lot easier to separate the logic from the presentation, but he is certain that his way is best. How can I explain the importance of this so that he will understand? I can't put it into words, exactly. EDIT: Think you guys could leave a comment when you're down voting me? I have no idea how to fix my question if you just down vote it and leave without suggesting improvements. | | {
"source": [
"https://softwareengineering.stackexchange.com/questions/218489",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/58713/"
]
} |
218,801 | One of the advantages of using a DVCS is the edit-commit-merge workflow (over edit-merge-commit often enforced by a CVCS). Allowing each unique change to be recorded in the repository independent of merges ensures the DAG accurately reflects the true pedigree of the project. Why do so many websites talk about wanting to "avoid merge commits"? Doesn't merging pre-commit or rebasing post-merge make it more difficult to isolate regressions, revert past changes, etc.? Point of clarification: The default behavior for a DVCS is to create merge commits. Why do so many places talk about a desire to see a linear development history that hides these merge commits? | People want to avoid merge commits because it makes the log prettier. Seriously. It looks like the centralized logs they grew up with, and locally they can do all their development in a single branch. There are no benefits aside from those aesthetics, and several drawbacks in addition to those you mentioned, like making it conflict-prone to pull directly from a colleague without going through the "central" server. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/218801",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/96319/"
]
} |
219,028 | How do I make sure my REST API only responds to requests generated by trusted clients, in my case my own mobile applications? I want to prevent unwanted requests coming from other sources. I don't want users to fill in a serial key or whatever, it should happen behind the scenes, upon installation, and without any user interaction required. As far as I know, HTTPS is only to validate the server you are communicating with is who it says it is. I'm ofcourse going to be using HTTPS to encrypt the data. Is there a way to accomplish this? Update: The user can perform read-only actions, which do not require the user to be logged in, but they can also perform write actions, which do require the user to be logged in (Authentication by Access Token). In both cases I want the API to respond to requests coming only from trusted mobile applications. The API will also be used for registering a new account through the mobile application. Update 2: It seems like there are multiple answers to this, but I honestly don't know which one to flag as the answer. Some say it can be done, some say it can't. | You Can't. You can never verify an entity, any entity , be it a person, hardware client or software client. You can only verify that what they are telling you is correct, then assume honesty . For example, how does Google know it is I'm logging into my Gmail account? They simply ask me for a user name and password, verify that , then assume honesty because who else would have that info? At some point Google decided that this was not enough and added behavioral verification (looking for odd behavior) but that is still relying on the person to do the behavior , then validating the behavior . This is exactly the same thing with validating the Client. You can only validate the behavior of the Client, but not the Client itself. So with SSL, you can verify the Client has a valid cert or not, So one can simply install your App, get the Cert, then run all new code. So the question is: Why is this so critical? If this is a real concern, I would question your choice of a fat client. Perhaps you should go with a web App (so you don't have to expose your API). Also see: Defeating SSL Certificate Validation for Android Applications and : How safe are client SSL certificates in a mobile app? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/219028",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/106175/"
]
} |
219,069 | I have been tasked with teaching other teams a new codebase, but I keep running into an issue. Whenever I go to actually walk through the code with people, we don't get very far before the entire exercise devolves into a bikeshedding (members of an organisation giving disproportionate weight to trivial issues) exercise. Since they don't know the codebase, but think they need to help improve it, they focus on the things they can understand: Why is that named that? (2 minutes to explain why it's named that, 10+ minutes debating a new name) Why is that an abstract base class rather than an interface? (2 minutes to explain, 10+ minutes debating the relative merits of this decision) ...and so on. Now, don't get me wrong - good names and good, consistent design are important, but we never get to discussing what the code actually does or how the system is designed in any meaningful way. I've done some meeting refereeing to get people out of these tangents, but they're gone - distracted by what the code will/should be when their pet triviality is fixed, and they miss the bigger picture. So we try again later (or with a different part of the codebase) and since people didn't get enough knowledge to overcome the bikeshedding effect, it repeats. I've tried smaller groups, bigger groups, code, whiteboarding, visio diagrams, giant walls of text, letting them just argue it to death, cutting arguments short immediately... some help more than others, but nothing works . Hell, I even tried having other people from my team explain it because I thought it might be that I'm just bad at explaining things. So how do you educate other programmers enough that they stop fixating on trivialities and can meaningfully contribute to the design? | I think the problem is the task: "I have been tasked with teaching other teams a new codebase". You have been given the wrong job, or maybe misinterpreted the job you've been given. By presenting at the code level, you invite code level thinking. Start at the system level and present the design and the design choices that were made. Don't allow extended discussion: you are not reviewing it. Do allow questions: you do want them to understand the system. If people "would have done it differently", fine. Maybe agree. Or not. But move on. It's the way it is right now. When you get to the code level, you will have already got them primed with the system terminology. The names (I assume) will make sense. Same as above: no extended discussion, questions for understanding. Move on. Now set some class problems to work through. How can we make enhancement X? Choose something non-trivial that "goes with the flow" of the system design, and work through what you would change. They should be getting the rationale of the system now. Choose another enhancement that could break the system if done wrong, and show how it can be done right. That should be an Ah Ha moment for them. Some might even beat you to it! It's a tough gig, especially after the false start you've had. Sounds like you've invested a lot of time and effort already, and maybe there is a bit of a me versus them feeling. 'Fess up, and start again. We assume that they are smart people. Give them the challenge of thinking at the higher level. And break up the groups that already exist by selecting different cross sections of teams for the new sessions. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/219069",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/51654/"
]
} |
219,191 | I am a novice JavaScripter and have no real knowledge of what goes on inside the V8 engine. Having said that, I am really enjoying my early forays into the node.js environment but I find that I am constantly using events.EventEmitter() as a means to emit global events so that I can structure my programs to fit a notifier-observer pattern similar to what I would write in say an Objective-C or Python program. I find myself always doing things like this: var events = require('events');
var eventCenter = new events.EventEmitter();
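// Each named event acts as one step in the pipeline: emitting an event hands control to the matching handler.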
eventCenter.on('init', function() {
var greeting = 'Hello World!';
console.log("We're in the init function!");
eventCenter.emit('secondFunction', greeting);
});
eventCenter.on('secondFunction', function(greeting) {
console.log("We're in the second function!");
console.log(greeting);
eventCenter.emit('nextFunction');
});
eventCenter.on('nextFunction', function() {
/* do stuff */
});
eventCenter.emit('init'); So in effect I'm just structuring 'async' node.js code into code that does things in the order I expect, instead I'm kind of "coding backwards" if that makes sense. Would there be any difference in doing this in a callback-heavy manner, either performance-wise or philosophy-wise? Is it better to do the same thing using callbacks instead of events? | The nice thing about callbacks is there's no global state there, and passing parameters to them is trivial. If you have a function download(URL, callback: (FileData)->void) then you can know that's a self-contained higher-order function which essentially lets you construct a "grab this and do that" function. You can be sure your code flow is exactly as you expect, because nobody else even has a handle on that callback, and that callback doesn't know about anything but the parent function's given parameters. That makes it modular and easy to test. If you now want to download 5 files in parallel and do different things, you need only fire five of these functions off with the appropriate callback functions. In a language with good anonymous function syntax this can be incredibly powerful. Events, on the other hand, are more designed for notifying 1..* users of some state change. If you fire a "download complete" event at the end of download(URL) , which launches processDownload() which knows where to find the data, you're tying your implementations of things to a larger amount of state. How do you parallelise downloads now? How do you handle the different downloads differently? Is download(URL, eventId) elegant? Is downloadAndDoThing(URL) elegant? Hardly. Of course, as with all things, you are making tradeoffs. Nested callbacks can make the order of execution of code more confusing, and the lack of easy global accessability makes it a poor choice for anything where you do have a 1..* relationship between producer and consumer. You have a harder time passing data around, but if you can get away with not having additional state then that's usually a benefit anyway. Regardless. You're coding in node.js where callbacks are idiomatic, and the language and libraries are designed around their use. Whether you see advantages in one design or another, I think it's almost always true that writing idiomatic code in any language is going to have far greater support and make your life much easier than trying to circumvent it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/219191",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/109474/"
]
} |
219,298 | I would like to understand how a spreadsheet (a group of named or otherwise identified cells containing values or formulas referencing other cells) is solved. I have tried looking at existing projects, but there was so much going on with the GUI, serialization, events, etc. that I couldn't find the spreadsheet. At its simplest how does it work? | At its core, a spreadsheet is a functional language with dynamic typing and each function or value being able to be referenced as a cell in the matrix. Instead of things like (defn some-name ...) the some-name part is placed in a cell itself. If you go to a dynamically updating functional language ide (such as lighttable for clojure), you will see much of the same functionality as a spreadsheet. Bind a value to a name, write a function that uses that value, change the value and the output of the function changes immediately. This is the same as doing something like writing =A1 + B2 in the location of C3 in excel. Thus, functional programmers often like to write spreadsheets as toy programs... and the subject of research papers too. (Yes, I'm sorry, they are all behind an ACM.org paywall) Spreadsheet functional programming The functional programming community has shown some interest in spreadsheets, but surprisingly no one seems to have considered making a standard spreadsheet, such as Excel, work with a standard functional programming language, such as Haskell. In this paper, we show one way that this can be done. Our hope is that by doing so, we might get spreadsheet programmers to give functional programming a try. Forms/3: A first-order visual language to explore the boundaries of the spreadsheet paradigm Although detractors of functional programming sometimes claim that functional programming is too difficult or counter-intuitive for most programmers to understand and use, evidence to the contrary can be found by looking at the popularity of spreadsheets. The spreadsheet paradigm, a first-order subset of the functional programming paradigm, has found wide acceptance among both programmers and end users. Still, there are many limitations with most spreadsheet systems. In this paper, we discuss language features that eliminate several of these limitations without deviating from the first-order, declarative evaluation model. Implementing function spreadsheets A large amount of end-user development is done with spreadsheets. The spreadsheet metaphor is attractive because it is visual and accommodates interactive experimentation, but as observed by Peyton Jones, Blackwell and Burnett, the spreadsheet metaphor does not admit even the most basic abstraction: that of turning an expression into a named function. Hence they proposed a way to define a function in terms of a worksheet with designated input and output cells; we shall call it a function sheet. The start of Spreadsheet at Wikipedia gives some hints as to how to implement one: A spreadsheet is an interactive computer application program for organization and analysis of data in tabular form. Spreadsheets developed as computerized simulations of paper accounting worksheets. The program operates on data represented as cells of an array, organized in rows and columns. Each cell of the array is a model–view–controller element that can contain either numeric or text data, or the results of formulas that automatically calculate and display a value based on the contents of other cells. Building on this from Outline of Model-View-Controller paradigm as expressed in the Java libraries . 
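Before getting to that applet, here is a minimal sketch of the core idea (invented for illustration, not taken from the linked code or the papers above; the Sheet class and cell names are made up): each cell is either a constant or a function of other cells, and reading a cell re-evaluates its formula, so a change to an input shows up the next time a dependent cell is read. No cycle detection or caching is attempted here.
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// A toy spreadsheet: every cell is a function from the sheet to a value.
class Sheet {
    private final Map<String, Function<Sheet, Double>> cells = new HashMap<>();

    // A constant cell, e.g. A1 = 2
    void set(String name, double constant) {
        cells.put(name, sheet -> constant);
    }

    // A formula cell, e.g. C3 = A1 + B2
    void set(String name, Function<Sheet, Double> formula) {
        cells.put(name, formula);
    }

    // Reading a cell evaluates its formula against the current sheet,
    // which is what makes updates propagate "automatically".
    double value(String name) {
        return cells.get(name).apply(this);
    }
}

class SheetDemo {
    public static void main(String[] args) {
        Sheet s = new Sheet();
        s.set("A1", 2);
        s.set("B2", 3);
        s.set("C3", sheet -> sheet.value("A1") + sheet.value("B2")); // like =A1+B2
        System.out.println(s.value("C3")); // 5.0
        s.set("A1", 10);                   // change an input cell...
        System.out.println(s.value("C3")); // ...and the formula now reads 13.0
    }
}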
The author of that MVC outline goes on to mention applets (a bit dated, it was written in '93-'96) and points to his web page at http://csis.pace.edu/~bergin/Java/applets.htm (yes, applets) for the corresponding spreadsheet code, http://csis.pace.edu/~bergin/Java/Spreadsheet.java . I will point out that the entirety of the spreadsheet is not that big in this applet: 570 lines including documentation. That said, depending on the language, you could probably do it all with just function pointers in a sparse array. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/219298",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/109567/"
]
} |
219,351 | Reading code and discussions pertaining to code, I often see the words "state" and "status" used interchangeably, but the following tendencies seem to exist: When a variable holds a value intended to indicate that something is in a certain state, the name of that variable more often than not contains the word "state", or an abbreviation thereof. However, when the return value of a function serves to indicate some such state, we tend to call that value a "status code"; and when that value is stored in a variable, this variable is commonly named "status" or something similar. In isolation that's all fine I guess, but when the aforementioned variables are actually one and the same, a choice needs to be made involving the perverted intricacies of English language (or human language in general). What is the prevailing coding-standard or convention when it comes to disambiguating between the two? Or should one of those two always be avoided? This english.stackexchange question is also relevant, I suppose. | I like this question. The following is from my head but I think it fits quite well. status is used to describe an outcome of an operation (e.g. success/fail). state is used to describe a stage in a process (e.g. pending/dispatched). I also like this definition: status is a final (resulting) state. It is quite clear when applied to programming. Much less clear when you apply it to natural language. Let's take the examples from the english thread and see if it holds with the most upvoted answer. "What is the current status of this project?" The answer should be "In testing." Well, this might seem to contradict my definitions at a first look but we must realize the context. Probably some supervisor is asking his team about the project and how far they have reached . The point is that the supervisor is interested in the outcome up to now. The fact that there will be something after is just put away because it is not the point of the question. "What is the current state of this project?" The answer should be "On hold for financial analysis." So I think that this very nicely demonstrates an essence of a state. "On hold for financial analysis." clearly focuses on the fact that the current situation is a part of some encompassing process and even suggests the next state. I would say it holds pretty well. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/219351",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/109617/"
]
} |
219,482 | Not sure how to go about this method to reduce Cyclomatic Complexity. Sonar reports 13 whereas 10 is expected. I am sure nothing harm in leaving this method as it is, however, just challenging me how to go about obeying Sonar's rule. Any thoughts would be greatly appreciated. public static long parseTimeValue(String sValue) {
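// Dispatches on the unit suffix ("S", "ms", "s", "m", "H"/"h", "d", "w") to the matching ExtractXXX helper; a string with no recognised suffix is parsed as a plain long.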
if (sValue == null) {
return 0;
}
try {
long millis;
if (sValue.endsWith("S")) {
millis = new ExtractSecond(sValue).invoke();
} else if (sValue.endsWith("ms")) {
millis = new ExtractMillisecond(sValue).invoke();
} else if (sValue.endsWith("s")) {
millis = new ExtractInSecond(sValue).invoke();
} else if (sValue.endsWith("m")) {
millis = new ExtractInMinute(sValue).invoke();
} else if (sValue.endsWith("H") || sValue.endsWith("h")) {
millis = new ExtractHour(sValue).invoke();
} else if (sValue.endsWith("d")) {
millis = new ExtractDay(sValue).invoke();
} else if (sValue.endsWith("w")) {
millis = new ExtractWeek(sValue).invoke();
} else {
millis = Long.parseLong(sValue);
}
return millis;
} catch (NumberFormatException e) {
LOGGER.warn("Number format exception", e);
}
return 0;
} All ExtractXXX methods are defined as static inner classes. For example, like one below - private static class ExtractHour {
private String sValue;
public ExtractHour(String sValue) {
this.sValue = sValue;
}
public long invoke() {
long millis;
millis = (long) (Double.parseDouble(sValue.substring(0, sValue.length() - 1)) * 60 * 60 * 1000);
return millis;
}
} UPDATE 1 I am going to settle on a mix of the suggestions here to satisfy the Sonar guy. There is definitely room for improvement and simplification. Guava's Function is just unwanted ceremony here. I wanted to update the question with the current status. Nothing is final here. Pour in your thoughts, please. public class DurationParse {
private static final Logger LOGGER = LoggerFactory.getLogger(DurationParse.class);
private static final Map<String, Function<String, Long>> MULTIPLIERS;
private static final Pattern STRING_REGEX = Pattern.compile("^(\\d+)\\s*(\\w+)");
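// Lookup table replacing the if/else chain from the original version: each supported suffix maps to its conversion function.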
static {
MULTIPLIERS = new HashMap<>(7);
MULTIPLIERS.put("S", new Function<String, Long>() {
@Nullable
@Override
public Long apply(@Nullable String input) {
return new ExtractSecond(input).invoke();
}
});
MULTIPLIERS.put("s", new Function<String, Long>() {
@Nullable
@Override
public Long apply(@Nullable String input) {
return new ExtractInSecond(input).invoke();
}
});
MULTIPLIERS.put("ms", new Function<String, Long>() {
@Nullable
@Override
public Long apply(@Nullable String input) {
return new ExtractMillisecond(input).invoke();
}
});
MULTIPLIERS.put("m", new Function<String, Long>() {
@Nullable
@Override
public Long apply(@Nullable String input) {
return new ExtractInMinute(input).invoke();
}
});
MULTIPLIERS.put("H", new Function<String, Long>() {
@Nullable
@Override
public Long apply(@Nullable String input) {
return new ExtractHour(input).invoke();
}
});
MULTIPLIERS.put("d", new Function<String, Long>() {
@Nullable
@Override
public Long apply(@Nullable String input) {
return new ExtractDay(input).invoke();
}
});
MULTIPLIERS.put("w", new Function<String, Long>() {
@Nullable
@Override
public Long apply(@Nullable String input) {
return new ExtractWeek(input).invoke();
}
});
}
public static long parseTimeValue(String sValue) {
if (isNullOrEmpty(sValue)) {
return 0;
}
Matcher matcher = STRING_REGEX.matcher(sValue.trim());
if (!matcher.matches()) {
LOGGER.warn(String.format("%s is invalid duration, assuming 0ms", sValue));
return 0;
}
if (MULTIPLIERS.get(matcher.group(2)) == null) {
LOGGER.warn(String.format("%s is invalid configuration, assuming 0ms", sValue));
return 0;
}
return MULTIPLIERS.get(matcher.group(2)).apply(matcher.group(1));
}
private static class ExtractSecond {
private String sValue;
public ExtractSecond(String sValue) {
this.sValue = sValue;
}
public long invoke() {
long millis;
millis = Long.parseLong(sValue);
return millis;
}
}
private static class ExtractMillisecond {
private String sValue;
public ExtractMillisecond(String sValue) {
this.sValue = sValue;
}
public long invoke() {
long millis;
millis = (long) (Double.parseDouble(sValue));
return millis;
}
}
private static class ExtractInSecond {
private String sValue;
public ExtractInSecond(String sValue) {
this.sValue = sValue;
}
public long invoke() {
long millis;
millis = (long) (Double.parseDouble(sValue) * 1000);
return millis;
}
}
private static class ExtractInMinute {
private String sValue;
public ExtractInMinute(String sValue) {
this.sValue = sValue;
}
public long invoke() {
long millis;
millis = (long) (Double.parseDouble(sValue) * 60 * 1000);
return millis;
}
}
private static class ExtractHour {
private String sValue;
public ExtractHour(String sValue) {
this.sValue = sValue;
}
public long invoke() {
long millis;
millis = (long) (Double.parseDouble(sValue) * 60 * 60 * 1000);
return millis;
}
}
private static class ExtractDay {
private String sValue;
public ExtractDay(String sValue) {
this.sValue = sValue;
}
public long invoke() {
long millis;
millis = (long) (Double.parseDouble(sValue) * 24 * 60 * 60 * 1000);
return millis;
}
}
private static class ExtractWeek {
private String sValue;
public ExtractWeek(String sValue) {
this.sValue = sValue;
}
public long invoke() {
long millis;
millis = (long) (Double.parseDouble(sValue) * 7 * 24 * 60 * 60 * 1000);
return millis;
}
} } UPDATE 2 Though I added my update, it is only worth so much time. I am going to move on since Sonar no longer complains. Don't worry much; I am accepting the mattnz answer as it is the way to go, and I don't want to set a bad example for those who bump onto this question. Bottom line -- don't over-engineer just because Sonar (or a half-baked project manager) complains about CC. Just do what's worth a penny for the project. Thanks to all. | Software Engineering Answer: This is just one of the many cases where simply counting beans that are simple to count will make you do the wrong thing. It's not a complex function, don't change it. Cyclomatic Complexity is merely a guide to complexity, and you are using it poorly if you change this function based on it. It's simple, it's readable, it's maintainable (for now); if it gets bigger in the future the CC will skyrocket exponentially and it will get the attention it needs when it needs it, not before. Minion working for a Large Multinational Corporation Answer: Organizations are full of overpaid, unproductive teams of bean counters. Keeping the bean counters happy is easier, and certainly wiser, than doing the right thing. You need to change the routine to get the CC down to 10, but be honest about why you are doing it - to keep the bean counters off your back. As suggested in comments - "monadic parsers" might help | {
"source": [
"https://softwareengineering.stackexchange.com/questions/219482",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/19341/"
]
} |
219,490 | I keep running into Senior JS positions where they want CoffeeScript. The reason I don't use CoffeeScript is that my first impression of it was that it puts limitations on JS OOP features that I find valuable. I've seen claims that the two can inter-mix freely but that sounds dubious to me. Should I even bother applying to these positions if CoffeeScript-only is a dealbreaker for me? | | {
"source": [
"https://softwareengineering.stackexchange.com/questions/219490",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/27161/"
]
} |
219,541 | In A Critique of Common Lisp written by Rodney A. Brooks and Richard P. Gabriel from Stanford in 1984, some design decisions retained by the normalizing committee of Common Lisp are discussed. While most of the discussion remains valid, there are two statements that refer to the technology available at the time and may be false today. These two statements are: Too many costs of the language were dismissed with the admonition that ‘any good
compiler’ can take care of them. No one has yet written—nor is likely to without
tremendous effort—a compiler that does a fraction of the tricks expected of it. As I am a Common Lisp novice, or even an apprentice, I am not able to be more specific than the authors are. They seem to state that a great generality and flexibility has been built into several aspects of the language, which makes writing a good compiler quite difficult. In COMMON LISP a little too much control was
placed on floating-point arithmetic. And certainly, although the correct behavior of a
floating-point-intensive program can be attained, the performance may vary wildly. As far as I understand, it seems that writing efficient numerical code in Common Lisp is possible but more challenging than it has to be. This was thirty years ago. How should I regard these statements today if I am willing to write Common Lisp programs for one of the common free software implementations (CLISP, SBCL et al.)? | The paper is interesting in many ways. The most interesting part is this: the authors themselves falsified the 1984 paper just two years later, in 1986. Brooks and Gabriel developed a highly optimizing Lisp compiler and sold it commercially, very successfully, for several years: Lucid Common Lisp (PDF). Maintenance for this Lisp compiler is still available from LispWorks: it is now called Liquid Common Lisp. The compiler optimizations of Liquid CL are documented in Chapter 3 of the Advanced User's Guide: Optimizing Lisp Programs. Several commercial applications have been written and deployed in Lucid CL. For example, in my home town the first public transport information system for the HVV (Hamburger Verkehrsverbund) was deployed using Lucid CL on a SUN SPARCstation. It was available to the public in train stations via a large touch screen, and in the call center. Lucid CL was successful because its production mode compiler created fast Common Lisp applications, mainly for Unix / RISC platforms. Brooks and Gabriel are writing about Lucid Common Lisp in 1986: The dynamically retargetable compiler has been shown to be a means by
which ease of compilation for a variety of Lisp implementations can be
performed; a means of porting Lisp systems to a variety of computers;
and a tool for producing high-quality, high-performance code for a
variety of computers from a common source. Thus they had just implemented what A Critique of Common Lisp claimed to be difficult or impossible. Nowadays the more advanced implementations do a lot of optimizations, but the hardware is also 1000+ times faster than what we had in 1984. A VAX 11/780 then had one MIPS (million instructions per second) and a Lisp Machine was also in that range. A Motorola 68000 had a clock rate of 8 MHz. The criticism about floating point performance and generally varying performance is still valid, but this reflects implementors' choices. Some implementations are not developed as high-performance compilers. Their main focus could be portability, compactness or something else, and thus they have different implementation goals. As a user/developer one is not forced to write portable code and use all of the ten+ currently supported Common Lisp systems. Use the implementation which is best suited to the application problem. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/219541",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/108641/"
]
} |
219,543 | Should a class know about its subclasses? Should a class do something that is specific for a given subclass for instance? My instincts tells me that is a bad design, it seems like an anti-pattern of some sort. | The answer implied by the concept of classes is "no". Either whatever action, data or relation you're handling is part of all subclasses - then it should be handled in the superclass without checking the actual type. Or it applies only to some subclasses - then you'd have to perform run-time type checks to do the right thing, the superclass would have to be changed whenever someone else inherits from it (or it might silently do the wrong thing), changes in the derived classes can break the unchanged superclass, etc. In short, you get a number of bad consequences which are usually bad enough to reject such solutions out of hand. If several of your subclasses do the same thing and you want to avoid code duplication (practically always a good thing), a better solution is to introduce a mid-level class from which all those subclasses can inherit the code. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/219543",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7020/"
]
} |
219,615 | I was asked about immutable strings in Java. I was tasked with writing a function that concatenated a number of "a"s to a string. What I wrote: public String foo(int n) {
String s = "";
for (int i = 0; i < n; i++) {
s = s + "a";
}
return s;
} I was then asked how many strings this program would generate, assuming garbage collection does not happen.
My thoughts for n=3 were "" "a" "a" "aa" "a" "aaa" "a" Essentially 2 strings are created in each iteration of the loop. However, the answer was n². What strings will be created in memory by this function, and why is it that way? | I was then asked how many strings this program would generate, assuming garbage collection does not happen. My thoughts for n=3 were (7) Strings 1 ( "" ) and 2 ( "a" ) are the constants in the program; these are not created as part of the loop but are 'interned' because they are constants the compiler knows about. Read more about this at String interning on Wikipedia. This also removes strings 5 and 7 from the count as they are the same "a" as String #2. This leaves strings #3, #4, and #6. The answer is "3 strings are created for n = 3" using your code. The count of n² is obviously wrong because at n=3 this would be 9, and even by your worst case answer, that was only 7. If your count of non-interned strings were correct, the answer should have been 2n + 1. So, the question of how should you do this? Since the String is immutable, you want a mutable thing - something you can change without creating new objects. That is the StringBuilder. The first thing to look at is the constructors. In this case we know how long the string will be, and there is a constructor StringBuilder(int capacity) which means we allocate exactly as much as we need. Next, "a" doesn't need to be a String, but rather it can be a character 'a'. This gives a minor performance boost when calling append(char) instead of append(String) - with append(String), the method needs to find out how long the String is and do some work on that. On the other hand, char is always exactly one character long. The code differences can be seen at StringBuilder.append(String) vs StringBuilder.append(char). It's not something to be too concerned with, but if you're trying to impress the employer it is best to use the best possible practices. So, how does this look when you put it together? public String foo(int n) {
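// Pre-sizing the builder: the result is known to be exactly n characters, so no internal re-allocation is needed.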
StringBuilder sb = new StringBuilder(n);
for (int i = 0; i < n; i++) {
sb.append('a');
}
return sb.toString();
} One StringBuilder and one String have been created. No extra strings needed to be interned. Write some other simple programs in Eclipse. Install pmd and run it on the code you write. Note what it complains about and fix those things. It would have found the modification of a String with + in a loop, and if you changed that to StringBuilder, it would have maybe found the initial capacity, but it would certainly catch the difference between .append("a") and .append('a') | {
"source": [
"https://softwareengineering.stackexchange.com/questions/219615",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/109864/"
]
} |
219,759 | I currently manage a library which has a lot of public usage, and I had a question about semantic versioning . I want to refactor one fairly important part of the library which is implemented incorrectly - and has always been implemented incorrectly. But doing this would mean changes to the public API, which is a major decision. The change I want to make revolves around how iterators are used. Currently, users have to do this: while ($element = $iterator->next()) {
// ...
} Which is incorrect, at least in PHP's native Iterator interface . I want to replace with this: while ($iterator->valid()) {
$element = $iterator->current();
// ...
$iterator->next();
} which is analogous to: foreach ($iterator as $element) {
// ...
} If you look at Tom's guide to semantic versioning, he clearly states that any changes to the public API (i.e. those which are not backward compatible) should justify a major release. So the library would jump from 1.7.3 to 2.0.0 which, for me, is a step too far. We're only talking about one feature being fixed. I do have plans to eventually release 2.0.0, but I thought this was when you completely rewrote the library and implemented numerous public API changes. Does the introduction of this refactoring warrant a major version release? I really can't see how it does - I feel more comfortable releasing it as 1.8.0 or 1.7.4. Does anybody have some advice? | You hesitate because you don't want to do semantic versioning, you want to do "advertisement-supporting versioning". You expect a version number "2.0" to tell the world that you have a bunch of new cool features in your library now, not that you changed the API. That's ok (many software companies and/or developers do that). IMHO you have the following options: (1) stick to semantic versioning and live with the fact that you have to change the version number to 2.0.0; (2) change your versioning scheme to 4 numbers: "1.1.7.3" is your version now, "1.2.0.0" the next one after changing the API, and "2.0.0.0" the first one of the "completely new 2.x product family"; (3) make your fix backwards compatible (so don't change the functionality of next, just add the valid and current functions), and then you can use "1.8.0" as the next version number. If you think changing the behaviour of next is really important, do it in 2.0.0. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/219759",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/34752/"
]
} |
219,767 | What kind of problems may I face, if I won't use Software Design Patterns? Can you tell me about the problems of approaching the design using standard object-oriented techniques? | You're missing the point. Design Patterns inherently exist when doing software design, just like structural patterns exist in the world. Even if you don't know the name of things, you'll eventually find that certain physical structures are well suited for certain problems. You'll find that a triangle shape of wood/metal bars/etc is a very stable structure, but only on a plane. You'll find square bricks have certain advantages over round ones... Likewise, certain software structures are in some way unique or optimal. You'll eventually find them and use them regardless if you know the names for them. That's the core of what design patterns are - they're names for these structures that experienced programmers know and use anyways. It gives programmers the ability to communicate much more uniformly and succinctly. It also lets programmers think in the concept of patterns more consciously. So the two key points I'm trying to make: You can't not use design patterns. By not knowing the names for the things you're using, you'll have a difficult time working in a team. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/219767",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/110013/"
]
} |
219,887 | I've recently started looking at Android development. This has brought me back into the world of Java software development. The last time I worked with Java, I'll admit, I didn't understand OOP nearly as much as (I think) I do now. Having mainly used C# in my career, I'm noticing a startling difference in how inheritance is used Java and C#. In C# it seemed like inheritance could be avoided in most situations. The task at hand could usually be accomplished by using concrete classes of the .NET framework. In Java, from what I'm gathering from code samples, it seems like the Java framework supplies many interfaces or abstract classes that are then meant to be implemented/extended by the developer. This seems to be too big a difference to just boil down to style. What is the reasoning behind this? I feel like I won't be writing clean Java code until I understand this. Also, is this limited to just the Android SDK or is this a Java-wide approach to OOP? Or put in another way, What is it about the design of these two languages that (seems to encourage) more or less inheritance use than the other? If the languages treat inheritance identically, and assuming my observation is valid, then it means this is related to the design of the frameworks/libraries and not the languages. What would the motivation be for this kind of design? | This seems to be too big a difference to just boil down to style. What is the reasoning behind this? My understanding is that it largely is simply a stylistic decision. Well, perhaps not style, but the idioms of the language/environment. Java standard library developers followed one set of design guidelines and the .NET developers another (though they had the ability to see how Java's approach worked). There is very little in the actual languages to encourage or dissuade inheritence. Only two things strike me as relevant: .NET introduced generics earlier in their lifetime, before too much non-generic code was implemented. The alternative is a lot of inheritence to type specialize things. A larger change was that .NET supported delegates. In Java you're stuck with (anonymous) inheritence to supply the most basic of variable functionality. This leads to a relatively large difference in how code is designed to either take advantage of delegates or to avoid the awkward inheritance structures needed to do it in Java. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/219887",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/81495/"
]
} |
219,953 | localStorage and indexedDB are used for offline storage of data in HTML5. What are their key differences and which one is preferable in what situations? | On the surface the two technologies may seem directly comparable, however if you spend some time with them you'll soon realize they are not. They were designed to achieve a similar goal, client side storage, but they approach the task at hand from significantly different perspectives and work best with different amounts of data. localStorage, or more accurately Web Storage , was designed for smaller amounts of data. It's essentially a strings only key - value storage , with a simplistic synchronous API. That last part is key. Although there's nothing in the specification that prohibits an asynchronous Web Storage, currently all implementations are synchronous (i.e. blocking requests). Even if you didn't mind using a naive key - value storage for larger amounts of data, your clients will mind waiting forever for your application to load. indexedDB , on the other hand, was designed to work with significantly larger amounts of data. First, in theory, it provides both a synchronous and an asynchronous API. In practice, however, all current implementations are asynchronous, and requests will not block the user interface from loading. Additionally, indexedDB, as the name reveals, provides indexes . You can run rudimentary queries on your database and fetch records by looking up theirs keys in specific key ranges . indexedDB also supports transactions , and provides simple types (e.g. Date). At this point, indexedDB might seem the superior solution for every situation ever. However, there's a penalty for all its features: Compared to Web Storage, its API is quite complicated. indexedDB assumes a general familiarity with database concepts, whereas with Web Storage you can jump right in. If you have ever worked with cookies, you won't have an issue working with Web Storage. Also, in general you'll need to write more code in indexedDB to achieve exactly the same result as in Web Storage (and more code = more bugs). Furthermore, emulating Web Storage for browsers that don't support it is relatively straightforward. With indexedDB, the task wouldn't be worth its time. Lastly, before you dive into indexedDB, you should first take a look at the Quota API . At the end of the day, it's completely up to you if you use Web Storage or indexedDB, or both, in your application. A good use case for Web Storage would be to store simple session data, for example a user's name, and save you some requests to your actual database. indexedDB's additional features, on the other hand, could help you store all the data you need for your application to work offline. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/219953",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
219,976 | Systems / programs / distributed algorithms / ... are often described with the predicate robust or fault-tolerant . What is the difference? Details: When I google for +robust +"fault-tolerant", I only get two hits, both unhelpful. When I googlescholar for the terms, I find a lot of papers that have both terms in their title. Unfortunately, they do not precisely define the terms :( But since they use both terms, it seems that neither implies the other. | Both describe the consistency of an application's behavior, but "robustness" describes an application's response to its input , while "fault-tolerance" describes an application's response to its environment . An app is robust when it can work consistently with inconsistent data. For example: a maps application is robust when it can parse addresses in various formats with various misspellings and return a useful location. A music player is robust when it can continue decoding an MP3 after encountering a malformed frame. An image editor is robust when it can modify an image with embedded EXIF metadata it might not recognize -- especially if it can make changes to the image without wrecking the EXIF data. An app is fault-tolerant when it can work consistently in an inconsistent environment. A database application is fault-tolerant when it can access an alternate shard when the primary is unavailable. A web application is fault-tolerant when it can continue handling requests from cache even when an API host is unreachable. A storage subsystem is fault-tolerant when it can return results calculated from parity when a disk member is offline. In both cases, the application is expected to remain stable, behave uniformly, preserve data integrity, and deliver useful results even when an error is encountered. But when evaluating robustness, you may find criteria involving data, while when evaluating fault-tolerance, you'll find criteria involving uptime. One doesn't necessarily lead to the other. A mobile voice-recognition app can be very robust, providing an uncanny ability to recognize speech consistently in a variety of regional accents with huge amounts of background noise. But if it's useless without a fast cellular data connection, it's not very fault-tolerant. Similarly, a web publishing application can be immensely fault-tolerant, with multiple redundancies at every level, capable of losing whole data centers without failing, but if it drops a user table and crashes the first time someone registers with an apostrophe in their last name, it's not robust at all. If you're looking for scholarly literature to help describe the distinction, you might look in specific domains that make use of software, rather than broadly software in general. Distributed applications research might be fertile ground for fault-tolerance criteria, and Google has published some of their research that might be relevant. Data modeling research likely addresses questions of robustness, as scientists are particularly interested in the properties of robustness that yield reproducible results. You can probably find papers describing statistical applications that might be helpful, as in climate modeling, RF propagation modeling, or genome sequencing. You'll also find engineers discussing "robust design" in things like control systems. 
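As a rough code illustration of the distinction (a sketch written for this answer, not taken from any of the cited papers; the ProfileService, Store and cache names are invented): the first method tolerates messy input, which is robustness, while the second tolerates a failing environment by falling back to a cached copy, which is fault tolerance.
import java.util.Locale;
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

class ProfileService {
    // Stand-in for a primary data store that may be unreachable.
    interface Store { String load(String userId) throws Exception; }

    private final Store primary;
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    ProfileService(Store primary) { this.primary = primary; }

    // Robustness: cope with inconsistent *input* (case, stray whitespace) and still return something useful.
    String normalizeUserId(String raw) {
        return raw == null ? "" : raw.trim().toLowerCase(Locale.ROOT);
    }

    // Fault tolerance: cope with an inconsistent *environment* by serving the last known copy
    // when the primary store fails, instead of failing the whole request.
    Optional<String> profileFor(String rawUserId) {
        String id = normalizeUserId(rawUserId);
        try {
            String fresh = primary.load(id);
            cache.put(id, fresh);
            return Optional.of(fresh);
        } catch (Exception primaryUnavailable) {
            return Optional.ofNullable(cache.get(id));
        }
    }
}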
The Google File System whitepaper describes their approach to fault-tolerance problems, which generally involves the assumptions that component failures are routine and so the application must adapt to them: http://static.googleusercontent.com/media/research.google.com/en/us/archive/gfs-sosp2003.pdf ("Our system provides fault tolerance by constant monitoring, replicating crucial data, and fast and automatic recovery.") This project for a class at Rutgers supports a "component-failure" oriented definition of "fault tolerance": http://www.ece.rutgers.edu/~parashar/Classes/03-04/ece572/papers/cristian93understanding.pdf ("Some systems are designed to be fault-tolerant: they either exhibit a well-defined failure behavior when components fail or mask component failures to users…") There are loads of papers on "robust modeling XYZ", depending on the field you investigate. Most will describe their criteria for "robust" in the abstract, and you'll find it all has to do with how the model deals with input. This brief from a NASA climate scientist describes robustness as a criteria for evaluating climate models: http://www.giss.nasa.gov/research/briefs/schmidt_04/ ("…robust in that it does not depend significantly on the specifics of parameterization and spatial representation.") This paper from an MIT researcher examines wireless protocol applications, a domain in which fault-tolerance and robustness overlap, but the authors use "robust" to describe applications, protocols, and algorithms, while they use "fault-tolerance" in reference to topology and components: http://people.csail.mit.edu/grishac/papers/allerton.pdf ("In short, these uses require robust applications that always operate correctly, even under unpredictable conditions.") | {
"source": [
"https://softwareengineering.stackexchange.com/questions/219976",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/29043/"
]
} |
220,048 | Many web frameworks, such as Flask or Django use SQLite as their default database. SQLite is compelling because it's included in python, and administrative overhead is pretty low. However, most high traffic public production sites wind up using a heavier database: mySQL, Oracle, or postgresql. The questions : Assume: Site traffic is moderate, and concurrent read/write access to the database will happen We will use SQLAlchemy with SQLite write locks (although this comment makes me a little nervous) The database will contain perhaps 60,000 records Data structures do not require advanced features found in heavier databases Is there ever a compelling case against SQLite concurrency for websites that serve as moderate-traffic internal corporate tools? If so, what conditions will cause SQLite to have concurrency problems? I'm looking for known specific root causes, instead of general fear / unsubstantiated finger pointing. | I recommend reading the official answer to your question, Appropriate Uses For SQLite . Specifically, the "Situations Where Another RDBMS May Work Better" warns that SQLite does not support concurrent writing: SQLite supports an unlimited number of simultaneous readers, but it
will only allow one writer at any instant in time. For many
situations, this is not a problem. Each application does its database
work quickly and moves on, and no lock lasts for more than a few dozen
milliseconds. But there are some applications that require more
concurrency, and those applications may need to seek a different
solution. From an appropriateness perspective, I tend to view SQLite as a very sophisticated file format that supports SQL queries. I would tend to avoid SQLite if I wanted to separate my database from my web application, as it is not optimized for this case. In short, SQLite is insufficiently scalable for use in some scenarios, so people running websites who hope to someday become popular may be better off starting with something scalable, rather than going with SQLite and later being forced to switch. All that being said, SQLite is probably fine for most internal websites; typically internal websites don't require the same level of concurrency and scalability. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/220048",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/23144/"
]
} |
220,049 | I was wondering how much data should be logged. I know this deeply depends on multiple factors. But it can still be hard to find the golden middle way. Lets say I have an application where people can create and administrate a user. Furthermore they are able to create/read/update/delete other objects within the application. If the application has many users it can be an issue if too much data is logged. If all DTO classes and their contents is logged everytime a user or any other object is edited/delete/etc. the logs could get extremely large. But if you only log a little line saying "User 'blabla' was created/edited/etc." you might have trouble debugging or recreating bugs as you dont have the exact state of the objects when the error happened. Where I currently work we rarely log data, but instead log error messages and stack traces, but I've also previously worked at places where they logged every single bit of information that can be useful in a future debugging scenario. I was wondering what other people do and think about this? How much data do you log for future debugging information? - is there such a thing as a "correct" amount of data logged? | | {
"source": [
"https://softwareengineering.stackexchange.com/questions/220049",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/110306/"
]
} |
220,091 | I am working on a simple API that I want to use for my own client, and to open to the public in the future.
I have "Item" objects which can have different "types". The type is a C "typedef enum", for the moment I have : typedef enum {
ItemTypeBool,
ItemTypeNumber,
ItemTypeDate,
} ItemType; (I may add some in the future) I am wondering if I should rather transfer it as integers or as defined "strings". The JSON would be : For integers : {
"name": "The name",
"type": 0,
...
} For strings : {
"name": "The name"
"type": "boolean"
...
} I'm wondering if there's a best practice for this. Keeping the integer would slightly simplify the code and reduce the bandwidth, but strings would be easier for developers to remember. I remember working on a project where I had to remember 1=image, 2=audio, 3=html,... which doesn't make any real sense. So I'm asking you if you know any other aspect I should consider. | Provide the strings. Numbers are meaningless. You don't use them in your own code, right (you wrap them in enum values, which are basically names) - so why punish the user with having to use these numbers? The only pro of exposing the numbers is that they are easier for you to parse. But hey, who cares about you. Take care of the API clients. If you provide the strings, it's easier for the clients; you won't ever have to say things like "4 has been deprecated in favor of 17"; parsing is slightly harder on your side, but that's fine. Do not provide both: as a user, I'm left to wonder which one do I use? Both? [on to reading docs] Why are there two ways to say the same thing? Are they subtly different? [on to reading docs] What if I specify both and there's a mismatch? Will it complain? Will one take precedence? Which one? [on to reading docs] As you can see, you're having me read a lot of docs for no reason. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/220091",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/88917/"
]
} |
220,230 | I keep wondering if it is legitimate to use verbs that are based on nouns in OOP. I came across this brilliant article , though I still disagree with the point it makes. To explain the problem a bit more, the article states that there shouldn't be, for instance, a FileWriter class, but since writing is an action it should be a method of the class File . You'll get to realize that it's often language dependent since a Ruby programmer would likely be against the use of a FileWriter class (Ruby uses method File.open to access a file), whereas a Java programmer wouldn't. My personal (and yes, very humble) point of view is that doing so would break the Single Responsibility principle. When I programmed in PHP (because PHP is obviously the best language for OOP, right?), I would often use this kind of framework: <?php
// This is just an example that I just made on the fly, may contain errors
class User extends Record {
protected $name;
public function __construct($name) {
$this->name = $name;
}
}
class UserDataHandler extends DataHandler /* knows the pdo object */ {
public function find($id) {
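// Fetch a single row from the users table and hydrate it into a User instance via PDO::FETCH_CLASS.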
$query = $this->db->prepare('SELECT ' . $this->getFields() . ' FROM users WHERE id = :id');
$query->bindParam(':id', $id, PDO::PARAM_INT);
$query->setFetchMode( PDO::FETCH_CLASS, 'user');
$query->execute();
return $query->fetch( PDO::FETCH_CLASS );
}
}
?> It is my understanding that the suffix DataHandler doesn't add anything relevant ; but the point is that the single responsibility principle dictates us that an object used as a model containing data (may it be called a Record) shouldn't also have the responsibility of doing SQL queries and DataBase access. This somehow invalidates the ActionRecord pattern used for instance by Ruby on Rails. I came across this C# code (yay, fourth object language used in this post) just the other day: byte[] bytes = Encoding.Default.GetBytes(myString);
myString = Encoding.UTF8.GetString(bytes); And I gotta say that it doesn't make much sense to me that an Encoding or Charset class actually encodes strings. It should merely be a representation of what an encoding really is. Thus, I would tend to think that: It is not a File class responsibility to open, read or save files. It is not a Xml class responsibility to serialize itself. It is not a User class responsibility to query a database. etc. However, if we extrapolate these ideas, why would Object have a toString class? It's not a Car's or a Dog's responsibility to convert itself to a string, now is it? I understand that from a pragmatic point of view, getting rid of the toString method for the beauty of following a strict SOLID form, that makes code more maintainable by making it useless, is not an acceptable option. I also understand that there may not be an exact answer (which would more be an essay than a serious answer) to this, or that it may be opinion-based. Nevertheless I would still like to know if my approach actually follows what the single-responsibility principle really is. What's a class's responsibility? | Given some divergences between languages, this can be a tricky topic. Thus, I'm formulating the following commentaries in a way that tries to be as comprehensive as I can inside the realm of OO. First of all, the so called "Single Responsibility Principle" is a reflex -- explicitly declared -- of the concept cohesion . Reading the literature of the time (around '70), people were (and still are) struggling to define what a module is, and how to construct them in a way that would preserve nice properties. So, they would say "here is a bunch of structures and procedures, I'll make a module out of them", but with no criteria as to why this set of arbitrary things are packaged together, the organization might end up making little sense -- little "cohesion". Hence, discussions on criteria emerged. So, the first thing to note here is that, so far, the debate is around organization and related effects on maintenance and understandability (for little matter to a computer if a module "makes sense"). Then, someone else (mr. Martin) came in and applied the same thinking to the unit of a class as a criteria to use when thinking about what should or should not belong to it, promoting this criteria to a principle, the one being discussed here. The point he made was that "A class should have only one reason to change" . Well, we know from experience that many objects (and many classes) that appear to do "many things" have a very good reason for doing so. The undesirable case would be the classes that are bloated with functionality to the point of being impenetrable to maintenance, etc. And to understand the latter is to see where mr. Martin was aiming at when he elaborated on the subject. Of course, after reading what mr. Martin wrote, it should be clear these are criteria for direction and design to avoid problematic scenarios, not in any way to pursue any kind of compliance, let alone strong compliance, specially when "responsibility" is ill defined (and questions like "does this violates the principle?" are perfect examples of the widespread confusion). Thus, I find it unfortunate it is called a principle , misleading people into try to take it to the last consequences, where it would do no good. Mr. Martin himself discussed designs that "do more than one thing" that should probably be kept that way, since separating would yield worse results. 
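To connect the "one reason to change" reading back to the question's own example (just a sketch, not a prescription; the names are illustrative): the type that changes when the business notion of a user changes is kept apart from the thing that changes when the storage details change.
// Changes only when the business notion of a user changes.
class User {
    private final String name;
    User(String name) { this.name = name; }
    String name() { return name; }
}

// Changes only when persistence details (SQL, driver, schema) change.
interface UserRepository {
    User findById(long id);
}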
Also, there are many known challenges regarding modularity (and this subject is a case in point); we are not at a point of having good answers even for some simple questions about it. However, if we extrapolate these ideas, why would Object have a toString method? It's not a Car's or a Dog's responsibility to convert itself to a string, now is it? Now, let me pause to say something here about toString: there is a fundamental thing commonly neglected when one makes that transition of thought from modules to classes and reflects on what methods should belong to a class. And that thing is dynamic dispatch (aka late binding, "polymorphism"). In a world with no "overriding methods", choosing between "obj.toString()" or "toString(obj)" is a matter of syntax preference alone. However, in a world where programmers can change the behavior of a program by adding a subclass with a distinct implementation of an existing/overridden method, this choice is no longer a matter of taste: making a procedure a method can also make it a candidate for overriding, and the same might not be true for "free procedures" (languages that support multi-methods have a way out of this dichotomy). Consequently, it is no longer a discussion about organization only, but about semantics as well. Moreover, the class to which the method is bound also becomes an impactful decision (and in many cases, so far, we have little more than guidelines to help us decide where things belong, as non-obvious trade-offs emerge from different choices). Finally, we are faced with languages that carry terrible design decisions, for instance, forcing one to create a class for every little thing. Thus, what once was the canonical reason and main criterion for having objects (and, in class-land, therefore, classes) at all -- which is to have these "objects" that are a kind of "behavior that also behaves like data", but protect their concrete representation (if any) from direct manipulation at all costs (and that's the main hint for what the interface of an object should be, from the point of view of its clients) -- gets blurred and confused. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/220230",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/110080/"
]
} |
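As a purely illustrative sketch of the separation argued for in the single-responsibility question above (all class and method names here are hypothetical, not taken from the post), the data-carrying object can stay free of persistence concerns by moving the SQL work into a separate collaborator:
// Plain data holder: knows nothing about SQL, files or serialization.
public final class UserRecord {
    private final long id;
    private final String name;

    public UserRecord(long id, String name) {
        this.id = id;
        this.name = name;
    }

    public long getId() { return id; }
    public String getName() { return name; }
}

// Persistence lives elsewhere; only this class changes if the storage changes.
public class UserRepository {
    private final javax.sql.DataSource dataSource;

    public UserRepository(javax.sql.DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public UserRecord findById(long id) throws java.sql.SQLException {
        try (java.sql.Connection connection = dataSource.getConnection();
             java.sql.PreparedStatement statement =
                     connection.prepareStatement("SELECT id, name FROM users WHERE id = ?")) {
            statement.setLong(1, id);
            try (java.sql.ResultSet resultSet = statement.executeQuery()) {
                return resultSet.next()
                        ? new UserRecord(resultSet.getLong("id"), resultSet.getString("name"))
                        : null;
            }
        }
    }
}
Formatting a value for display (toString) arguably stays with the value itself; crossing a process boundary (database, file, network) is the kind of responsibility worth extracting.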
220,254 | I am making a hybrid Android app. At first I decided to use localStorage, after spending 2 days, I realized that it is very strange and so dropped it. Then, I picked up indexedDB, after spending today's whole day and actually getting the output in Google Chrome, it is not running inside a WebView of the android app. And I never used Web SQL database at all because it was deprecated. Anyhow, it has come to my notice that PhoneGap still uses Web SQL and android's browsers support it. Why was Web SQL deprecated in the first place? And will it be a good idea for me to go with Web SQL now? | Short version: Web SQL was deprecated because standards are really important and turning Web SQL into a proper standard would have been prohibitively difficult. Since existing implementations of Web SQL are basically wrappers around SQLite, any attempt to define a standard of it was basically "do what SQLite does." This isn't good enough; a true standard needs to be self-contained, to define the interface and corner cases and exceptions itself instead of pointing to an existing implementation (especially a third-party implementation like SQLite). Otherwise, you run the risk of taking one particular implementation's quirks and enshrining them as the standard. From what I've read, the W3C prefers multiple independent implementations of proposed standards to help ensure that this happens; since Web SQL was so tied to SQLite, that just wasn't going to happen. Mozilla's blog gives more details on their reasoning in particular for not supporting Web SQL; apparently they were one of the major voices in getting Web SQL deprecated. Should you go with Web SQL now? I don't expect the vendors that currently support it (like Google and Apple) to drop it any time soon, but IE and Firefox won't be adding it, and since it's deprecated, why invest in it? (For example, Ido Green , with Google Developer Relations, doesn't recommend using it.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/220254",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
220,296 | I'm writing a Java implementation of a card game, so I created a special type of Collection I'm calling a Zone. All modification methods of Java's Collection are unsupported, but there's a method in the Zone API, move(Zone, Card) , which moves a Card from the given Zone to itself (accomplished by package-private techniques). This way, I can ensure that no cards are taken out of a zone and simply vanish; they can only be moved to another zone. My question is, how necessary is this kind of defensive coding? It's "correct," and it feels like the right practice, but it's not like the Zone API is ever going to be part of some public library. It's just for me, so it's kind of like I'm protecting my code from myself when I could probably be more efficient by just using standard Collections. How far should I take this Zone idea? Can anyone give me some advice on how much I should think about preserving the contracts in classes I write, especially for ones that aren't really going to be publicly available? | I'm not going to address the design problem - just the question of whether to do things "correctly" in a non-public API. it's just for me, so it's kind of like I'm protecting my own code from myself That's exactly the point. Maybe there's coders out there who remember the nuances of every class and method they ever wrote and never mistakenly call into them with the wrong contract. I'm not one of them. I often forget how code I wrote is supposed to work within hours of writing it. After you think you've gotten it right once, your mind will tend to switch gears to the problem you're working on now . You have tools to combat that. These tools include (in no particular order) conventions, unit tests and other automated tests, precondition checking, and documentation. I myself have found unit tests to be invaluable because they both force you to think about how your contract will be used and provide documentation later on how the interface was designed. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/220296",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/110560/"
]
} |
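The Zone API in the card-game question above is only described, never shown, so the following is a hypothetical sketch of what such a contract might look like, with the single mutation path kept in one place so that a unit test can document it (all names are assumptions):
public final class Card {
    private final String name;
    public Card(String name) { this.name = name; }
    @Override public String toString() { return name; }
}

public class Zone {
    private final java.util.List<Card> cards = new java.util.ArrayList<>();

    // Package-private: only the game engine in this package may introduce new cards.
    void add(Card card) { cards.add(card); }

    public java.util.List<Card> cards() {
        return java.util.Collections.unmodifiableList(cards);
    }

    // Cards change zones; they never simply vanish.
    public void move(Zone from, Card card) {
        if (!from.cards.remove(card)) {
            throw new IllegalArgumentException("Card not present in source zone: " + card);
        }
        cards.add(card);
    }
}

public class ZoneTest {
    @org.junit.Test
    public void movedCardLeavesSourceAndReachesDestination() {
        Zone hand = new Zone();
        Zone discard = new Zone();
        Card ace = new Card("Ace of Spades");
        hand.add(ace);

        discard.move(hand, ace);

        org.junit.Assert.assertFalse(hand.cards().contains(ace));
        org.junit.Assert.assertTrue(discard.cards().contains(ace));
    }
}
The test doubles as documentation of the contract, which is the cheap insurance the answer recommends even for an API only its author will ever call.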
220,374 | I am a junior developer and have only been in the industry for 5 years. At my current company there is a senior developer; let's call him Infestus. Occasionally I am given the opportunity to shine and do something completely brand new from scratch. One of the most recent examples was that I had to make a singleton in a multithreaded application. I decided to use this method. As soon as Infestus saw it, he quickly proceeded to call me stupid and told me to use this approach. When I asked him why, he just brushed it off, saying that it is better and that such-and-such book about Java says it is better. And it is a common pattern: whenever I get a chance to do something new, I quickly get shot down by Infestus, and the only reasoning for why his method is better is that those books were written by famous programmers. He is always trying to give me books to read so that I may "learn" which way to program. I have only been programming for money for 5 years, but is it always a good idea to just blindly follow the book on the best ways of solving a problem, or should I try experimenting every now and then? The constant barrage of complaints from Infestus is starting to cause me to never try anything new and to just follow the examples in books. EDIT: I am utterly lost. Yes, I know that following anything blindly is a bad idea. But this godlike programmer Infestus, who seems to know a lot, tells me that the only way to program properly is by reading books and following everything down to a T. All the rules he imposes are the ones written in books, so I am just wondering if books are the only correct way. EDIT2: Infestus is not my boss. He is just one of the senior developers in charge of reviewing the code. And most of his comments after reviews consist of book names where such and such method is wrong. | You are going to run into programmers like this your entire career. There is nothing wrong with experimentation and learning on your own. Sure, books are great. Many times the examples work in a clean environment, but if you are a developer at a company there is no such thing as a clean (free of interference from others) environment. It's always nice to know how to do things the "right" way, but opinions change year to year. So learn what you can. Take what you can from the senior developer and blend it with the knowledge you gain on your own. Eventually, you will be a senior developer and will be drawing on these experiences and teaching junior devs. Just don't be a jerk about it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/220374",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/41918/"
]
} |
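The two singleton variants linked in the question above are not visible here, so purely as an illustration of the kind of trade-off such a review usually argues about, this is the initialization-on-demand holder idiom, one widely cited way to get a lazy, thread-safe singleton in Java without explicit locking (the class name is made up):
public final class ConnectionManager {
    private ConnectionManager() { }

    // The JVM initializes the holder class lazily and exactly once,
    // so no synchronization is needed on the read path.
    private static final class Holder {
        static final ConnectionManager INSTANCE = new ConnectionManager();
    }

    public static ConnectionManager getInstance() {
        return Holder.INSTANCE;
    }
}
Whichever variant a team settles on, the useful part of a review comment is the reasoning behind it (visibility guarantees, lazy versus eager construction, testability), not the title of the book it came from.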
220,574 | Should user permission checks take place in the model or the controller? And who should handle the permission checks, the User object or some UserManagement helper? Where should it happen? Checking in the Controller: class MyController {
void performSomeAction() {
if (user.hasRightPermissions()) {
model.someAction();
}
}
... Having the checks in the Controller helps keep the Models as simple actions, so we can keep all the logic in the Controllers. Checking in the Model: class MyModel {
void someAction() {
if (user.hasRightPermissions()) {
...
}
}
... By putting the checks in the Model, we complicate the Model, but we also make sure we don't accidentally allow users to do stuff they aren't supposed to in the Controller. And by whom? Once we've settled on the place, who should do the checks? The user? class User {
bool hasPermissions(int permissionMask) {
...
}
... But it's not really the user's responsibility to know what he or she can access, so perhaps some helper class? class UserManagement {
bool hasPermissions(User user, int permissionMask) {
...
}
... I know it's common to ask just a single question in, well, a question, but I think these can be answered nicely together. | As usual, "it depends" permission checks will functionally work anywhere it's convenient to put them, but if you're asking a technical question then the answer may be 'put the checks in the object that owns the data required to perform the check' (which is probably the controller). but if you're asking a philosophical question, I suggest an alternate answer: don't show users actions that they are not permitted to perform . So in the latter case you might have the permission check in the controller implemented as a boolean property, and bind that property to the Visible property of the button or panel in the user-interface that controls the action as a user, it's frustrating to see buttons for actions that I cannot perform; feels like I'm being left out of the fun ;) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/220574",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7719/"
]
} |
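A minimal sketch of the answer above, staying close to the question's pseudo-Java style; Permissions.SOME_ACTION and the User API are placeholders for whatever permission scheme is actually in place:
class MyController {
    private final User user;
    private final MyModel model;

    MyController(User user, MyModel model) {
        this.user = user;
        this.model = model;
    }

    // Exposed to the view layer: bind this to the button's visible/enabled state
    // so users never see actions they cannot perform.
    boolean canPerformSomeAction() {
        return user.hasPermissions(Permissions.SOME_ACTION);
    }

    void performSomeAction() {
        // Check again on execution, so a stale or tampered view cannot bypass the rule.
        if (!canPerformSomeAction()) {
            throw new SecurityException("User may not perform this action");
        }
        model.someAction();
    }
}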
220,822 | A lot of languages like Java and C# have garbage collectors that free memory when that memory no longer has any reference. Yet they don't immediately free it after the reference counter hits zero but instead every once in a while they check on all the memory to see if that memory has any reference and delete it if it doesn't. What is the benefit of doing that way? The downside to doing it that way is that you lose the destructor as you can't guarantee when it will be called. I would imagine that it is done that way because of performance, but has there been any study that shows that a garbage collector that works like that has a better performance then std::shared_ptr found in C++? | Because in order to free memory as soon as the reference counter hits zero, you have to keep a reference counter. And that doesn't come for free. Typically, it limits your throughput. There are generally two major strategies for implementing garbage collectors: tracing collectors and reference counting collectors. (There are others, but those are the ones in use by most mainstream automatic memory management systems.) Typically, reference counting GCs tend to have worse throughput but better (and more predictable) latency than tracing collectors whereas tracing collectors have better throughput but higher and less predictable latency. Another big problem with (at least with simple implementations of) reference counting garbage collectors is that they can't garbage collect cycles. You typically need to run a tracing collector in conjunction with a reference counting collector anyway (that's what CPython does, for example). Practically speaking, all modern industrial-strength high-performance automatic memory management systems (all of the collectors in Oracle JDK, Oracle JRockit and most other JVMs, Microsoft CLR, Mono, most ECMAScript implementations, all Ruby implementations, almost all Python implementations, all Smalltalk implementations, all Lisp implementations etc.) are tracing collectors, so there is a bit of a self-reinforcing feedback loop here: more money gets put into research on tracing GCs because they are popular, and they become more popular because they get better because of the money spent on their research … and so on. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/220822",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/111162/"
]
} |
220,888 | if(condition1)
{
Statement1A;
Statement1B;
}
else if(condition2)
{
Statement2;
}
else if(condition3)
{
Statement3;
}
else
{
Statement1A;
Statement1B;
}
return; I would like to refactor that code so that I do not duplicate Statements. I always need to check condition1 before any other condition. (So I cannot just change the order of the conditions). I also do not want to write &&!condition1 after every other condition. I solved it like this if(condition1)
{
}
else if(condition2)
{
Statement2;
return;
}
else if(condition3)
{
Statement3;
return;
}
Statement1A;
Statement1B;
return; However, I do not think an empty if condition will be easily understandable by others (even by me after a while). What is a better approach? | notCondition2And3 = !condition2 && !condition3;
// in place of notCondition2And3 should be some meaningful name
// representing what it actually MEANS that neither condition2 nor condition3 were true And now: if (condition1 || notCondition2And3)
{
Statement1A;
Statement1B;
return;
}
if (condition2)
{
Statement2;
return;
}
if (condition3)
{
Statement3;
return;
} As I wrote in my comment to Kieveli's answer , I see nothing wrong about multiple return s in a method, if there is no memory management considerations (as it might be the case in C or C++ where you have to release all resources manually before you leave). Or another approach still. Here's the decision matrix so that we don't mess it up: F F F - 1
---------
F F T - 3
---------
F T F - 2
F T T - 2
---------
T F F - 1
T F T - 1
T T F - 1
T T T - 1 T s and F s represent the values of condition1 , condition2 and condition3 (respectively). The numbers represent the outcomes. It makes it clear that it's also possible to write the code as: if (!condition1 && condition2) // outcome 2 only possible for FTF or FTT, condition3 irrelevant
{
Statement2;
return;
}
if (!condition1 && !condition2 && condition3) // outcome 3 only when FFT
{
Statement3;
return;
}
// and for the remaining 5 combinations...
Statement1A;
Statement1B; Now if we extracted !condition1 (which is present in both ifs ), we would get: if (!condition1)
{
if (condition2) // outcome 2 only possible for FTF or FTT, condition3 irrelevant
{
Statement2;
return;
}
if (condition3) // outcome 3 only when FFT
{
Statement3;
return;
}
}
// and for the remaining 5 combinations...
Statement1A;
Statement1B; Which is almost exactly what Kieveli suggested, only his disdain for early return s caused his implementation to be buggy (as he noted himself), because it wouldn't do a thing if all 3 conditions were false. Or, we could revert it like so (this probably wouldn't work in every language - it works in C#, for one, since C# allows for equality comparison between multiple variables), now we're virtually back to the first one: // note that "condition1" on the right side of || is actually redundant and can be removed,
// because if the expression on the right side of || gets evaluated at all,
// it means that condition1 must have been false anyway:
if (condition1 || (condition1 == condition2 == condition3 == false)) // takes care of all 4 x T** plus FFF (the one special case).
{
Statement1A;
Statement1B;
return;
}
// and now it's nice and clean
if (condition2)
{
Statement2;
return; // or "else" if you prefer
}
if (condition3)
{
Statement3;
return; // if necessary
} | {
"source": [
"https://softwareengineering.stackexchange.com/questions/220888",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/94827/"
]
} |
220,909 | I have been working with SpringMVC, Hibernate, and some databases in a java web application example. There are a few different ones that do this, but this Spring 3 and hibernate integration tutorial with example has a model class, view (in jsp), and a service and dao classes for the controller. My question is, don't both the service and DAO classes do the same thing? Why would you need them both? This was the tutorial I was actually using: http://fruzenshtein.com/spring-mvc-security-mysql-hibernate/ | Generally the DAO is as light as possible and exists solely to provide a connection to the DB, sometimes abstracted so different DB backends can be used. The service layer is there to provide logic to operate on the data sent to and from the DAO and the client. Very often these 2 pieces will be bundled together into the same module, and occasionally into the same code, but you'll still see them as distinct logical entities. Another reason is security - If you provide a service layer that has no relation to the DB, then is it more difficult to gain access to the DB from the client except through the service. If the DB cannot be accessed directly from the client (and there is no trivial DAO module acting as the service) then all an attacker who has taken over the client can do is attempt to hack the service layer as well before he gets all but the most sanitised access to your data. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/220909",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/67306/"
]
} |
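As an illustrative sketch only (the tutorial's actual classes are not reproduced here, and User is assumed to be the mapped entity), the split usually looks something like this: the DAO does nothing but persistence, while the service holds the business rules and simply uses the DAO:
// DAO: persistence only, no business rules.
public interface UserDao {
    User findByEmail(String email);
    void save(User user);
}

// Service: validation, business logic, transaction boundaries; it uses the DAO.
public class UserService {
    private final UserDao userDao;

    public UserService(UserDao userDao) {
        this.userDao = userDao;
    }

    public User register(String email, String rawPassword) {
        if (userDao.findByEmail(email) != null) {
            throw new IllegalStateException("Email already registered: " + email);
        }
        User user = new User(email, hash(rawPassword));
        userDao.save(user);
        return user;
    }

    private String hash(String rawPassword) {
        // Stand-in for a real password-hashing call.
        return Integer.toHexString(rawPassword.hashCode());
    }
}
Swapping Hibernate for another store means touching only the UserDao implementation; the registration rule above does not move.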
220,929 | is there a good pattern for how to send multiple calls to a web service but without taxing it and ensuring the data is sent back? I don't know enough to correctly describe the problem to even start googling it properly - current google results compare streaming vs. non-streaming wcf answers? Scenario: I am working on an app (I'm a jr. dev.) that needs to gather information from several sources about a 'customer' and the domains they own Technical:
for one of the sources, I need to send a string array of domains to a web service and this web service returns an entry for each domain name, but this list of domain names will be thousands long - I would like to attempt to divide this list into bite-size chunks (1k domain names each) and then... queue them up to send to that web service, but ensure the web service doesn't skip one PseudoRequirements:
Consumer of web page does not care how long it takes, but would like a list of results up front that does not require pagination to navigate. Current Theory:
Should I take my massive 30k list, break it into 1k chunks, stuff each 1k-sized chunk into a 'request' object, assemble those 'request chunks' into a 'request chunk list' and iterate over that list (sequentially / blocking, so I don't strangle the WS) and for each 'request chunk' get back a 'response chunk' assemble those into a list, and then pass that list back into the front end for viewing? is this a viable method? is there a better way to queue items? Does anyone know off-hand any useful articles for this sort of queuing? are there any 'gotchas' or additional items to consider before I attempt my first pass? Additional Edits:
-I do not have full control to the receiving service, I can not view it's code and the developers that manage it are... less than responsive to email. I do not currently know the stress testing limits of the web service. I emailed the owners of that component but have yet to receive a response - I was going to work up my design while I waited on them. | | {
"source": [
"https://softwareengineering.stackexchange.com/questions/220929",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/111276/"
]
} |
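A sketch of the chunk-and-iterate approach described in the question above; the client interface and result type are placeholders, since the real service contract is not shown:
// Hypothetical client for the external lookup service.
interface DomainLookupClient {
    java.util.List<DomainInfo> lookup(java.util.List<String> domains);
}

class DomainInfo { /* fields omitted; stand-in for the service's response entry */ }

class ChunkedLookup {
    private static final int CHUNK_SIZE = 1000;

    static java.util.List<DomainInfo> lookupAll(DomainLookupClient client,
                                                java.util.List<String> domains) {
        java.util.List<DomainInfo> results = new java.util.ArrayList<>(domains.size());
        for (int start = 0; start < domains.size(); start += CHUNK_SIZE) {
            int end = Math.min(start + CHUNK_SIZE, domains.size());
            // Sequential, blocking calls: the service only ever sees one chunk at a time.
            results.addAll(client.lookup(domains.subList(start, end)));
        }
        return results;
    }
}
To be sure no chunk is silently skipped, wrap the per-chunk call with a retry (or at least record the failed range), and compare the final result count against the input count before handing the list to the front end.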
220,950 | Say I want some parts of my software to be encrypted. For example, the credentials for a database, etc. I need to store those values somewhere, but doing so in cleartext would make it easy for an attacker to gain unauthorised access. However, if I encrypt some cleartext, then where do I store the key? Anything that software has access to, a determined attacker would have access to, no matter what level of obfuscation: Say the key is protected by the filesystem's security model; but what about (malicious) superusers, or platforms that don't provide such fidelity? Or the key is hardcoded into software binaries, but it could always be decompiled and what about open source software or interpreted code? If the key is generated, such an algorithm would need to be deterministic (presumably) and then the same problem applies to the seed. etc. Cryptography is only as strong as the weakest link in its chain and this seems like a pretty loose one! Presuming it's the right tool for the job (humour me), then how can one secure such information robustly? Regarding the right tool for the job: Probably, in -- for example -- the case of service access (DBs, authentication servers, etc.), you would restrict access at this tier with a service account, maybe with some service-level auditing, etc. and so having the credentials in cleartext isn't such a worry. To me, however, that still seems inadequate: I don't want anyone poking around where they shouldn't be! | First of all, I would not refer to myself as a security expert, but I have been in the position of having to answer this question. What I found out surprised me a bit: There is no such thing as a completely secure system . Well, I guess a completely secure system would be one where the servers are all turned off :) Someone working with me at the time described designing a secure system in terms of raising the bar to intruders. So, each layer of securing decreases the opportunity for an attack. For example, even if you could perfectly secure the private key, the system is not completely secure. But, correctly using the security algorithms and being up to date with patches raises the bar. But, yes, a super computer powerful enough and given enough time can break encryption. I'm sure all of this is understood, so I'll get back the question. The question is clear so I'll first try to address each of your points: Say the key is protected by the filesystem's security model; but what
about (malicious) superusers, or platforms that don't provide such
fidelity? Yes, if you use something like Windows Key Store or a password encrypted TLS private key you are exposed to the users that have the password (or access) to the private keys. But, I think you will agree that raises the bar. The file system ACLs (if implemented properly) provide a pretty good level of protection. And you are in the position to personally vet and know your super users. Or the key is hardcoded into software binaries, but it could always be
decompiled and what about open source software or interpreted code? Yes, I've seen hardcoded keys in binaries. Again, this does raise the bar a bit. Someone attacking this system (if it is Java) has to understand that Java produces byte code (etc) and must understand how to decompile it are read it. If you are using a language that writes directly to machine code, you can see that this raises the bar a bit higher. It is not an ideal security solution, but could provide some level of protection. If the key is generated, such an algorithm would need to be
deterministic (presumably) and then the same problem applies to the
seed. Yes, essentially then the algorithm becomes the private key information for creating the private key. So, it would need to now be protected. So, I think you have identified a core issue with any security policy, key management . Having a key management policy in place is central to providing a secure system. And, it is a pretty broad topic . So, the question is, how secure does your system (and, therefore the private key) need to be? How high, in your system, does the bar need to be raised? Now, if you willing to pay, there are some people out there that produce solutions to this. We ended up using an HSM (Hardware Security Module) . It is basically a tamper-proof server that contains a key in hardware. This key can then be used to create other keys used for encryption. The idea here is that (if configured correctly), the key never leaves the HSM. HSMs cost a lot . But in some businesses (protecting credit card data lets say), the cost of a breach is much higher. So, there is a balance. Many HSMs use key cards from maintenance and admin of the features. A quorum of key cards (5 of 9 lets say) have to be physically put into the server in order to change a key. So, this raises the bar pretty high by only allowing a breach if a quorum of super users collude. There may be software solutions out there that provide similar features to an HSM but I'm not aware of what they are. I know this only goes some way to answering the question, but I hope this helps. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/220950",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/34493/"
]
} |
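None of this removes the fundamental problem discussed above, but as one small example of "raising the bar", the standard Java KeyStore API at least keeps key material out of the source code and behind a password-protected, ACL-protected file; the path, alias and password handling below are placeholders:
import java.io.FileInputStream;
import java.security.Key;
import java.security.KeyStore;

public class KeyLoader {
    public static Key loadKey(String keystorePath, char[] storePassword,
                              String alias, char[] keyPassword) throws Exception {
        KeyStore keyStore = KeyStore.getInstance("PKCS12");
        try (FileInputStream in = new FileInputStream(keystorePath)) {
            keyStore.load(in, storePassword);
        }
        // The passwords still have to come from somewhere (prompt, environment,
        // OS key store, HSM); this only moves the problem up one level.
        return keyStore.getKey(alias, keyPassword);
    }
}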
220,980 | I'm working on my own application and I'm stuck. I have to implement a feature but I can't find a good approach to implement this feature. I was thinking about it for a couple of days, and no good thoughts came. Searching the Internet didn't give me any inspiration. I need to move on, but I want to know, what's the best: Think more, wait more, and keep on searching for the best approach Stop wasting time and start with poor design, covering everything with tests What do you think? As I said before, I'm working on my own application. I don't have any deadlines, but I also want to finish coding the app asap. | Apart from talking to people about it (question suggests you don't have colleagues on the project), I often find it a good approach to focus on the things I can do. Usually there is some part of the code that I know I must write anyhow. The stuff I don't know how to write yet, is then replaced by stubs that either return dummy results, or use an approximation that is good enough to test the rest. This keeps you productive. And by the time you need to implement the missing piece, you have the interface. And you have written a lot of code surrounding the problem, in the same problem domain, which usually helps me to generate ideas: you know more exactly what you are required to output, and what other inputs are available if it helps to solve the problem. Also, often the conclusion is that the missing piece does not need to be as all-encompassing as initially thought. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/220980",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/21974/"
]
} |
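A concrete, entirely hypothetical illustration of the stub approach described above: define the interface you wish you had, ship a dumb implementation, and keep building the rest of the application against it:
// The part I don't yet know how to build, reduced to the interface I need from it.
interface RouteOptimizer {
    java.util.List<String> orderStops(java.util.List<String> stops);
}

// Good-enough stand-in: returns the stops unchanged. Everything around it can be
// written and tested now; the real algorithm slots in behind the same interface later.
class PassThroughRouteOptimizer implements RouteOptimizer {
    @Override
    public java.util.List<String> orderStops(java.util.List<String> stops) {
        return new java.util.ArrayList<>(stops);
    }
}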
221,034 | This is somewhat controversial topic, and I guess there is as many opinions as there are programmers. But for the sake of it, I want to know what are the common practices in business (or in your work places). In my work place we have a strict coding guidelines. One section of that is dedicated to magic strings/numbers. It states (for C#): Do not use literal values, either numeric or strings, in your code other than to define symbolic constants. Use
the following pattern to define constants: public class Whatever
{
public static readonly Color PapayaWhip = new Color(0xFFEFD5);
public const int MaxNumberOfWheels = 18;
} There are exceptions: the values 0, 1 and null can nearly always be used safely. Very often the values 2 and
-1 are OK as well. Strings intended for logging or tracing are exempt from this rule. Literals are allowed
when their meaning is clear from the context, and not subject to future changes. mean = (a + b) / 2; // okay
WaitMilliseconds(waitTimeInSeconds * 1000); // clear enough An ideal situation would be some official research paper showing the effects on readability/maintainability of the code when: Magic numbers/strings are all over the place Magic strings/numbers are replaced by constant declarations reasonably (or in different degrees of coverage) - and please don't shout at me for using "reasonably", I know everyone has a different idea of what "reasonably" is Magic strings/numbers are replaced in excess and in places where they wouldn't have to be (see my example below) I would like to do this to have some scientifically-based arguments when arguing with one of my colleagues, who is going to the point of declaring constants like: private const char SemiColon = ';';
private const char Space = ' ';
private const int NumberTen = 10; Another example would be (and this one is in JavaScript): var someNumericDisplay = new NumericDisplay("#Div_ID_Here"); Do you stick DOM IDs at the top of your JavaScript file if that ID is used in only 1 place? I have read the following topics: StackExchange StackOverflow Bytes IT Community There are many more articles, and after reading these some patterns emerge. So my question is: should we be using magic strings and numbers in our code? I am specifically looking for expert answers that are backed by references if possible. | ... when arguing with one of my colleagues, who is going to the point of
declaring constants like: private const char SemiColon = ';';
private const char Space = ' ';
private const int NumberTen = 10; The argument you need to be making with your colleague isn't about naming a literal space as Space but his poor choice of name for his constants. Let's say your code's job is to parse a stream of records which contain fields separated by semicolons ( a;b;c ) and are themselves separated by spaces ( a;b;c d;e;f ). If whoever wrote your spec calls you up a month from now and says, "we were mistaken, the fields in the records are separated by pipe symbols ( a|b|c d|e|f )," what do you do? Under the value-as-name scheme your colleague prefers, you'd have to change the value of the literal ( SemiColon = '|' ) and live with code that continues to use SemiColon for something that isn't really a semicolon anymore. That will lead to negative comments in code reviews . To abate that, you could change the name of the literal to PipeSymbol and go through and change every occurrence of SemiColon to PipeSymbol . At that rate you might as well have just used a literal semicolon ( ';' ) in the first place, because you'll have to evaluate each use of it individually and you'll be making the same number of changes. Identifiers for constants need to be descriptive of what the value does , not what the value is , and that's where your colleague has made a left turn into the weeds. In the field-splitting application described above, the semicolon's purpose is a field separator, and the constants should be named accordingly: private const char FieldSeparator = ';'; // Will become '|' a month from now
private const char RecordSeparator = ' ';
private const int MaxFieldsPerRecord = 10; This way, when the field separator changes, you change exactly one line of code, the declaration of the constant. Someone looking at the change will see just that one line and will immediately understand that the field separator changed from a semicolon to a pipe symbol. The remainder of the code, which didn't need to change because it was using a constant, remains the same, and the reader doesn't have to dig through it to see what else was done to it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/221034",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/35676/"
]
} |
221,075 | I am a new software developer and I wish to sell my software. I recently realized that from C++ code we can not stop the user seeing parts of the code that are related to scripts or system commands. Would you make some comments on how software written in C++/JAVA (distributed via CD-ROMs or available via download) is protected from reverse engineering, scanners for when the code is in memory and direct copy of parts (as system commands). What a small software company which just starts producing software should do to protect its product from the technological point of view (it should not be able to pay legal fees …)? | I've been writing software for many years and 2 decades ago I used to think along the lines that you're describing and try and work out ways to protect my software. To answer your question: Software protection is done through encryption and obfuscation just like data protection. The shortfalls, as you described, is that much code is very difficult to protect and usually relatively easy to reverse engineer. One way to protect your code (from decompiling) is to never release a compiled version of it and always run it from a server and your consumers use a client such as a browser. That does not however protect you from reverse engineering. The defense, in my opinion, is to not waste cycles (time/money) trying to protect your code but rather be continually innovating and adding more value to your product over time. If a competitor got hold of your code and took (say) 1 month to decompile it and released a competing product then you should be a step ahead of them with your next feature or next product by that time. Spending time and effort to block out your competition by keeping everything secret would be better spent by encouraging your team to be innovative and stay ahead of the competition. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/221075",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/111434/"
]
} |
221,268 | int *itPins = pins;
for(int i = 0; i < count; i++)
{
ConfigureGpifPinAsGpioOutput(itPins);
itPins++;
} I have the impression that doing two things in one line is bad practice. This is why I am advancing the iterator outside of the only statement, using it. However, I feel blindly following a rule. Is this code less error-prone than if the ++ is inside the function invocation? Why? Does this rule apply to such ultra-simple cases or is it meaningful only for more complex program structures? | It is not always bad practice to do multiple things on one line, but it does have a higher risk of making the program appear more complicated that it needs to be. If I were pedantic, I could argue that in the line for(int i = 0; i < count; i++) , you are also doing multiple things, but very few people would mark that line as a violation of the rule. On the other hand, if you had a line like ConfigureGpifPinAsGpioOutput(itPins++); , then the increment in the argument does increase the complexity as I would have to look twice to be sure that the code is correct and I would probably flag it as needlessly complicated. Another way to write the loop is like this: for(int i = 0; i < count; i++)
{
ConfigureGpifPinAsGpioOutput(&pins[i]);
} where you rid yourself completely of the issue. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/221268",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/54268/"
]
} |
221,365 | I would like to release a software library written in a class-based, object-oriented programming language (Java) on a web-based source code hosting service , that allows forks of the project to be merged into the main project (GitHub via pull requests). I have researched on the web and given a lot of thought on how to license the software.
Am I correct in the following assumptions (from an IANAL perspective)? Both LGPL and MPL promote sharing of modifications to the LGPL/MPL licensed software being used inside other software projects. Instead of requiring the users of the modified library to host a seperate fork of the library, I can promote contributing to the original library (e.g. via pull requests). The major difference is how MPL / LGPL licensed code must be linked into the project. MPL source code files can be directly copied into a (possibly) proprietary software project ( static linking ), while LGPL licensed code must be dynamically linked (loosely linked to the possibly proprietary software project, so that end-users can switch out the licensed software library for another version of the licensed software library). Dynamic linking and thus LGPL imposes extra obstacles for packaging the proprietary software product, without promoting more contributions to the open source software library than by having static linkage (and thus MPL). There is a modified LGPL which allows static linking. There are no other relevant differences (from an IANAL perspective). The older license versions don't suit my needs as good as the newest ones. As you can see my main requirement is that modifications of the software library which could prove useful to the general public stay open-source, without imposing restrictions on using the software library in a proprietary product. There is no license that also requires extensions of the software library that are relevant to the original work to be released as open-source, as the scope of the term relevant can be arbitrarily small / huge, thus ending up as GPL that can not be used in a proprietary product (without releasing the whole source). I am tempted to use the modified LPGL , but on the other hand discouraged by the unpopularity. Based on the above points I prefer MPL. Question: Are my above statements correct? Which license should I pick considering my requirements? Solution: With the help of the discussion in the accepted answer, I choose to stick to the MPL because of the popularity , freedom in linking and because it is an official, unmodified license . | I believe you've stated the differences between the Mozilla Public License and the GNU Lesser General Public License accurately, and either may suit your needs just fine, but you are skipping over the most important difference between the two licenses: Who can make new versions? Both the MPL (section 10) and the LGPL (section 14) include in their license grants the right to substitute the current version with a latter version, and there are no actual limitations as to what can go into those licenses. While it's highly unlikely that either the Mozilla Foundation or the Free Software Foundation will do something as crazy as, say, institute a clause that says "all contributions to this software become our property", it's not beyond the realm of possibility that one of the organizations will release a new license version that you don't like. Which brings up another point about using a "Modified LGPL". A modified license is not the same license! While you have fairly amazing ability to specify your own licensing terms, and could in essence say "you can distribute this as per the GPL, but you need to put my name in your credits and pay me 1% of any revenue you generate" , any time you do so you are creating a new license based on someone else's work. 
This means that you're NOT using the MPL or the LGPL, you're using a new "mucaho license". What that means is that you probably won't get any help from your original license's author if you need to defend your interpretation of the license inside of a courtroom, and it's entirely possible that they might file suit to say that THEIR version should apply and not yours. Of course, both of these are minor points. Even "license popularity" doesn't matter unless you expect your code to be directly incorporated into larger projects. Personally, I think the MPL is a better choice if you like proprietary compatibility, or if the choice is between the actual MPL and a different license you have to manually edit based on the LGPL. Unless you have a reason not to use the MPL, go with something backed by a foundation instead of one that might leave you in a courtroom without any aid whatsoever. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/221365",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/111715/"
]
} |
221,442 | Over the last few months, I stumbled a few times over the following technique / pattern. However, I can't seem to find a specific name, nor am I a 100% sure about all its advantages and disadvantages. The pattern goes as follows: Within a Java interface, a set of common methods is defined as usual. However, using an inner class, a default instance is leaked through the interface. public interface Vehicle {
public void accelerate();
public void decelerate();
public static class Default {
public static Vehicle getInstance() {
return new Car(); // or use Spring to retrieve an instance
}
}
} For me, it seems that the biggest advantage lies in the fact that a developer only needs to know about the interface and not its implementations, e.g. in case he quickly wants to create an instance. Vehicle someVehicle = Vehicle.Default.getInstance();
someVehicle.accelerate(); Furthermore, I have seen this technique being used together with Spring in order to dynamically provide instances depending on the configuration. In this regard, it also looks like this can help with modularization. Nevertheless, I can't shake the feeling that this is a misuse of the interface since it couples the interface with one of its implementations. (Dependency inversion principle etc..) Could anybody please explain to me how this technique is called, as well as its advantages & disadvantages? Update: After some time for consideration, I rechecked and noticed that the following singleton version of the pattern was used far more often. In this version, a public static instance is exposed through the interface which is initialized only once (due to the field being final). In addition, the instance is almost always retrieved using Spring or a generic factory which decouples the interface from the implementation. public interface Vehicle {
public void accelerate();
public void decelerate();
public static class Default {
public static final Vehicle INSTANCE = getInstance();
private static Vehicle getInstance() {
return new Car(); // or use Spring/factory here
}
}
}
// Which allows to retrieve a singleton instance using...
Vehicle someVehicle = Vehicle.Default.INSTANCE; In a nutshell: it seems that this is a custom singleton/factory pattern, which basically allows to expose an instance or a singleton through its interface. With respect to the disadvantages, a few have been named in the answers & comments below. So far, the advantage seems to lie in its convenience. | Default.getInstance is IMHO just a very specific form of a factory method , mixed up with a naming convention taken from the singleton pattern (but without being an implementation of the latter). In the current form this is a violation of the " single responsibility principle ", because the interface (which already serves the purpose of declaring a class API) takes the additional responsibility of providing a default instance, which would be much better placed in a separate VehicleFactory class. The main problem caused by this construct is that it induces a cyclic dependency between Vehicle and Car . For example, in the current form it won't be possible to place Vehicle in one library, and Car in another. Using a separate factory class and placing the Default.getInstance method there would solve that problem. It may be also a good idea to give that method a different name, to prevent any confusion related to singletons. So the result could be something like VehicleFactory.createDefaultInstance() | {
"source": [
"https://softwareengineering.stackexchange.com/questions/221442",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/111810/"
]
} |
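A sketch of the answer's suggestion above: the default-instance concern moves out of the interface and into its own factory, so Vehicle no longer depends on Car (the Car binding is carried over from the question and is illustrative):
public interface Vehicle {
    void accelerate();
    void decelerate();
}

// Creation has a single, separate home; swapping the default touches only this class.
public final class VehicleFactory {
    private VehicleFactory() { }

    public static Vehicle createDefaultInstance() {
        return new Car(); // or delegate to Spring, a ServiceLoader, or configuration
    }
}

// Call site, for comparison with Vehicle.Default.INSTANCE:
// Vehicle someVehicle = VehicleFactory.createDefaultInstance();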
221,564 | To make this question answerable, let's assume that the cost of ambiguity in the mind of a programmer is much more expensive then a few extra keystrokes. Given that, why would I allow my teammates to get away with not annotating their function parameters? Take the following code as an example of what could be a far more complex piece of code: let foo x y = x + y Now, a quick examination of the tooltip will show you that F# has determined you meant for x and y to be ints. If that's what you intended, then all is well. But I don't know if that's what you intended. What if you had created this code to concatenate two strings together? Or what if I think you probably meant to add doubles? Or what if I just don't want to have to hover the mouse over every single function parameter to determine its type? Now take this as an example: let foo x y = "result: " + x + y F# now assumes you've probably intended to concatenate strings, so x and y are defined as strings. However, as the poor schmuck who's maintaining your code, I might look at this and wonder if perhaps you had intended to add x and y (ints) together and then append the result to a string for UI purposes. Certainly for such simple examples one could let it go, but why not enforce a policy of explicit type annotation? let foo (x:string) (y:string) = "result: " + x + y What harm is there in being unambiguous? Sure, a programmer could choose the wrong types for what they are trying to do, but at least I know they intended it, that it wasn't just an oversight. This is a serious question... I am still very new to F# and am blazing the trail for my company. The standards I adopt will likely be the basis for all future F# coding, embedded in the endless copy-pasting that I am sure will permeate the culture for years to come. So... is there something special about F#'s type inference that makes it a valuable feature to hold onto, annotating only when necessary? Or do expert F#-ers make a habit of annotating their parameters for non-trivial applications? | I don’t use F#, but in Haskell it is considered good form to annotate (at least) top-level definitions, and sometimes local definitions, even though the language has pervasive type inference. This is for a few reasons: Reading When you want to know how to use a function, it’s incredibly useful to have the type signature available. You can simply read it, rather than trying to infer it yourself or relying on tools to do it for you. Refactoring When you want to alter a function, having an explicit signature gives you some assurance that your transformations preserve the intent of the original code. In a type-inferred language, you may find that highly polymorphic code will typecheck but not do what you intended. The type signature is a “barrier” that concretises type information at an interface. Performance In Haskell, the inferred type of a function may be overloaded (by way of typeclasses), which may imply a runtime dispatch. For numeric types, the default type is an arbitrary-precision integer. If you don’t need the full generality of these features, then you can improve performance by specialising the function to the specific type you need. For local definitions, let -bound variables, and formal parameters to lambdas, I find that type signatures usually cost more in code than the value they would add. So in code review, I would suggest you insist on signatures for top-level definitions and merely ask for judicious annotations elsewhere. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/221564",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/57863/"
]
} |
221,615 | Large codebases are more difficult to maintain when they are written in dynamic languages. At least that's what Yevgeniy Brikman, lead developer bringing the Play Framework to LinkedIn says in a video presentation recorded at JaxConf 2013 (minute 44). Why does he say this? What are the reasons? | dynamic languages make for harder to maintain large codebases Caveat: I have not watched the presentation. I have been on the design committees for JavaScript (a very dynamic language), C# (a mostly static language) and Visual Basic (which is both static and dynamic), so I have a number of thoughts on this subject; too many to easily fit into an answer here. Let me begin by saying that it is hard to maintain a large codebase, period . Big code is hard to write no matter what tools you have at your disposal. Your question does not imply that maintaining a large codebase in a statically-typed language is "easy"; rather the question presupposes merely that it is an even harder problem to maintain a large codebase in a dynamic language than in a static language. That said, there are reasons why the effort expended in maintaining a large codebase in a dynamic language is somewhat larger than the effort expended for statically typed languages. I'll explore a few of those in this post. But we are getting ahead of ourselves. We should clearly define what we mean by a "dynamic" language; by "dynamic" language I mean the opposite of a "static" language. A "statically-typed" language is a language designed to facilitate automatic correctness checking by a tool that has access to only the source code, not the running state of the program. The facts that are deduced by the tool are called "types". The language designers produce a set of rules about what makes a program "type safe", and the tool seeks to prove that the program follows those rules; if it does not then it produces a type error. A "dynamically-typed" language by contrast is one not designed to facilitate this kind of checking. The meaning of the data stored in any particular location can only be easily determined by inspection while the program is running. (We could also make a distinction between dynamically scoped and lexically scoped languages, but let's not go there for the purposes of this discussion. A dynamically typed language need not be dynamically scoped and a statically typed language need not be lexically scoped, but there is often a correlation between the two.) So now that we have our terms straight let's talk about large codebases. Large codebases tend to have some common characteristics: They are too large for any one person to understand every detail. They are often worked on by large teams whose personnel changes over time. They are often worked on for a long time, with multiple versions. All these characteristics present impediments to understanding the code, and therefore present impediments to correctly changing the code. In short: time is money; making correct changes to a large codebase is expensive due to the nature of these impediments to understanding. Since budgets are finite and we want to do as much as we can with the resources we have, the maintainers of large codebases seek to lower the cost of making correct changes by mitigating these impediments. Some of the ways that large teams mitigate these impediments are: Modularization : Code is factored into "modules" of some sort where each module has a clear responsibility. 
The action of the code can be documented and understood without a user having to understand its implementation details. Encapsulation : Modules make a distinction between their "public" surface area and their "private" implementation details so that the latter can be improved without affecting the correctness of the program as a whole. Re-use : When a problem is solved correctly once, it is solved for all time; the solution can be re-used in the creation of new solutions. Techniques such as making a library of utility functions, or making functionality in a base class that can be extended by a derived class, or architectures that encourage composition, are all techniques for code re-use. Again, the point is to lower costs. Annotation : Code is annotated to describe the valid values that might go into a variable, for instance. Automatic detection of errors : A team working on a large program is wise to build a device which determines early when a programming error has been made and tells you about it so that it can be fixed quickly, before the error is compounded with more errors. Techniques such as writing a test suite, or running a static analyzer fall into this category. A statically typed language is an example of the latter; you get in the compiler itself a device which looks for type errors and informs you of them before you check the broken code change into the repository. A manifestly typed language requires that storage locations be annotated with facts about what can go into them. So for that reason alone, dynamically typed languages make it harder to maintain a large codebase, because the work that is done by the compiler "for free" is now work that you must do in the form of writing test suites. If you want to annotate the meaning of your variables, you must come up with a system for doing so, and if a new team member accidentally violates it, that must be caught in code review, not by the compiler. Now here is the key point I have been building up to: there is a strong correlation between a language being dynamically typed and a language also lacking all the other facilities that make lowering the cost of maintaining a large codebase easier , and that is the key reason why it is more difficult to maintain a large codebase in a dynamic language. And similarly there is a correlation between a language being statically typed and having facilities that make programming in the larger easier. Let's take JavaScript for example. (I worked on the original versions of JScript at Microsoft from 1996 through 2001.) The by-design purpose of JavaScript was to make the monkey dance when you moused over it. Scripts were often a single line. We considered ten line scripts to be pretty normal, hundred line scripts to be huge, and thousand line scripts were unheard of. The language was absolutely not designed for programming in the large, and our implementation decisions, performance targets, and so on, were based on that assumption. Since JavaScript was specifically designed for programs where one person could see the whole thing on a single page, JavaScript is not only dynamically typed, but it also lacks a great many other facilities that are commonly used when programming in the large: There is no modularization system; there are no classes, interfaces, or even namespaces. These elements are in other languages to help organize large codebases. The inheritance system -- prototype inheritance -- is both weak and poorly understood. 
It is by no means obvious how to correctly build prototypes for deep hierarchies (a captain is a kind of pirate, a pirate is a kind of person, a person is a kind of thing...) in out-of-the-box JavaScript. There is no encapsulation whatsoever; every property of every object is yielded up to the for-in construct, and is modifiable at will by any part of the program. There is no way to annotate any restriction on storage; any variable may hold any value. But it's not just the lack of facilities that make programming in the large easier. There are also features that make it harder. JavaScript's error management system is designed with the assumption that the script is running on a web page, that failure is likely, that the cost of failure is low, and that the user who sees the failure is the person least able to fix it: the browser user, not the code's author. Therefore as many errors as possible fail silently and the program keeps trying to muddle on through. This is a reasonable characteristic given the goals of the language, but it surely makes programming in the larger harder because it increases the difficulty of writing test cases. If nothing ever fails it is harder to write tests that detect failure! Code can modify itself based on user input via facilities such as eval or adding new script blocks to the browser DOM dynamically. Any static analysis tool might not even know what code makes up the program! And so on. Clearly it is possible to overcome these impediments and build a large program in JavaScript; many multiple-million-line JavaScript programs now exist. But the large teams who build those programs use tools and have discipline to overcome the impediments that JavaScript throws in your way: They write test cases for every identifier ever used in the program. In a world where misspellings are silently ignored, this is necessary. This is a cost. They write code in type-checked languages and compile that to JavaScript, such as TypeScript. They use frameworks that encourage programming in a style more amenable to analysis, more amenable to modularization, and less likely to produce common errors. They have good discipline about naming conventions, about division of responsibilities, about what the public surface of a given object is, and so on. Again, this is a cost; those tasks would be performed by a compiler in a typical statically-typed language. In conclusion, it is not merely the dynamic nature of typing that increases the cost of maintaining a large codebase. That alone does increase costs, but that is far from the whole story. I could design you a language that was dynamically typed but also had namespaces, modules, inheritance, libraries, private members, and so on -- in fact, C# 4 is such a language -- and such a language would be both dynamic and highly suited for programming in the large. Rather it is also everything else that is frequently missing from a dynamic language that increases costs in a large codebase. Dynamic languages which also include facilities for good testing, for modularization, reuse, encapsulation, and so on, can indeed decrease costs when programming in the large, but many frequently-used dynamic languages do not have these facilities built in. Someone has to build them, and that adds cost. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/221615",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/19481/"
]
} |
221,632 | Is it better to have either a Deterministic test suite, that results in the same tests succeeding Non-deterministic test suite, which potentially possibly covers more cases ? Example: You write a test suite to test controller functionality in an MVC application. The controller requires application data from a database as input during the test. There are two options to do this: You hardcode which row(s) from the test database are selected as input (e.g. the 10th and 412th row) You use a random number generator to pseudorandomly pick the data from the database (two rows selected by a random number generator) The first is deterministic: every run of the test for the same revision of code should yield the same result. The second is non-deterministic: every run of the test suite has the possibility to yield a different result. The randomly picked data might however be a better representation of data edge cases. It might simulate a user feeding our controllers with unpredictable data better? What are reasons to choose one over the other? | When every run of the test suite gives you the possibility to yield a different result, the test is almost completely worthless - when the suite shows you a bug, you have a high chance that you cannot reproduce it, and when you try to fix the bug, you cannot verify whether your fix works (or not). So when you think you need to use some kind of random number generator for generating of your test data, either make sure you always initialize the generator with the same seed, or persist your random test data in a file before feeding it into your test, so you can re-run the test again with exact the same data from the run before. This way, you can transform any non-deterministic test into a deterministic one. EDIT: Using a random number generator to pick some test data is IMHO sometimes a sign for being too lazy about picking good test data. Instead of throwing 100,000 randomly choosen test values and hope that this will be enough to discover all serious bugs by chance, better use your brain, pick 10 to 20 "interesting" cases, and use them for the test suite. This will not only result in a better quality of your tests, but also in a much higher performance of the suite. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/221632",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/60469/"
]
} |
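A small JUnit-style sketch of the fixed-seed advice above; the row-loading and controller calls are placeholders, and the point is only that the "random" choice is repeatable:
import java.util.Random;
import org.junit.Assert;
import org.junit.Test;

public class ControllerDataTest {
    // Fixed seed: the rows picked are the same on every run, so any failure
    // can be reproduced and a fix can actually be verified.
    private static final long SEED = 20131115L;

    @Test
    public void controllerHandlesArbitraryRows() {
        Random random = new Random(SEED);
        int firstRow = random.nextInt(1000);
        int secondRow = random.nextInt(1000);

        String result = runController(loadRow(firstRow), loadRow(secondRow));

        Assert.assertNotNull(result);
    }

    // Stand-ins for the real fixture, shown only to keep the sketch self-contained.
    private String loadRow(int index) { return "row-" + index; }
    private String runController(String a, String b) { return a + "|" + b; }
}
Alternatively, log or persist the generated rows on failure, so the exact data set can be replayed as its own deterministic test.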