Dataset columns: source_id (int64, values 1 to 4.64M), question (string, lengths 0 to 28.4k), response (string, lengths 0 to 28.8k), metadata (dict).
164,056
Traditionally we performed code review before commit. I had an argument with my colleague today, who preferred code review after commit. First, here's some background: we have some experienced developers and we also have new hires with almost zero programming experience; we'd like to perform fast and short iterations to release our product; all team members are located at the same site. The advantages of code review before commit that I've learned: mentoring new hires; trying to prevent errors, failures, and bad designs early in the development cycle; learning from others; knowledge backup if someone quits. But I've also had some bad experiences: low efficiency, since some changes may be under review for days; it's hard to balance speed and quality, especially for newbies; one team member felt distrust. As to post-commit review, I know little about it, but the thing I'm most worried about is the risk of losing control due to lack of review. Any opinions? UPDATE: We're using Perforce for VCS. We code and commit in the same branches (trunk or bug-fixing branches). To improve efficiency, we've tried to split code into small changes. We've also tried some live dialog review, but not everyone followed the rule; that is another problem, though.
Like Simon Whitehead mentions in his comment, it depends on your branching strategy. If the developers have their own private branch for development (which I'd recommend in most situations anyway), I'd perform the code review prior to merging with the trunk or main repository. This allows developers the freedom to check in as frequently as they want during their development/testing cycle, but any time code goes into the branch that contains the delivered code, it gets reviewed. Generally, your bad experiences with code reviews sound more like problems with the review process that have solutions. By reviewing code in smaller, individual chunks, you can make sure it doesn't take too long. A good rule of thumb is that about 150 lines of code can be reviewed in an hour, but the rate will be slower for people unfamiliar with the programming language, the system under development, or the criticality of the system (a safety-critical system requires more time) - this information might be useful for improving efficiency and deciding who participates in code reviews.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/164056", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/42320/" ] }
164,128
I'm working with a SQL Server database with 1000+ tables, another few hundred views, and several thousand stored procedures. We are looking to start using Entity Framework for our newer projects, and we are working on our strategy for doing so. The thing I'm hung up on is how best to split the tables into different models (EDMX or DbContext if we go code first). I can think of a few strategies right off the bat: Split by schema We have our tables split across probably a dozen schemas. We could do one model per schema. This isn't perfect, though, because dbo still ends up being very large, with 500+ tables / views. Another problem is that certain units of work will end up having to do transactions that span multiple models, which adds to complexity, although I assume EF makes this fairly straightforward. Split by intent Instead of worrying about schemas, split the models by intent. So we'll have different models for each application, or project, or module, or screen, depending on how granular we want to get. The problem I see with this is that there are certain tables that inevitably have to be used in every case, such as User or AuditHistory. Do we add those to every model (violates DRY I think), or are those in a separate model that is used by every project? Don't split at all - one giant model This is obviously simple from a development perspective but from my research and my intuition this seems like it could perform terribly, both at design time, compile time, and possibly run time. What is the best practice for using EF against such a large database? Specifically what strategies do people use in designing models against this volume of DB objects? Are there options that I'm not thinking of that work better than what I have above? Also, is this a problem in other ORMs such as NHibernate? If so have they come up with any better solutions than EF?
Personally, I've tried making one huge schema for all my entities on a fairly complex but small project (~300 tables). We had an extremely normalized database (fifth normal form, and I say that loosely) with many "many to many" relationships and extreme referential integrity enforcement. We also used a "single instance per request" strategy, which I'm not convinced helped either. When doing simple, reasonably flat, "explicitly defined" listings, lookups, and saves, the performance was generally acceptable. But when we started digging into deep relationships, the performance seemed to take drastic dips. Compared to a stored proc in this instance, there was no comparison (of course). I'm sure we could've tweaked the code base here and there to get the performance improved; however, in this case we just needed a performance boost without analysis due to time constraints, so we fell back to the stored proc (still mapped through EF, because EF provided strongly typed results); we only needed that as a fallback in a few areas. When we had to traverse all over the database to create a collection (using .Include() unsparingly), the performance was noticeably degrading, but maybe we were asking too much. So based on my experience, I would recommend creating a separate .edmx per intent. Only generate what you'll be using based on the scope of that need. You may have some smaller-scoped .edmx files for purposed tasks, and then some large ones where you need to traverse complex relationships to build objects. I'm not sure where that magic spot is, but I'm sure there is one... lol... Honestly though, aside from a few pitfalls which we kind of saw coming (complex traversing), the huge .edmx worked fine from a "working" perspective. But you'll have to watch out for the "fixup" magic that the context does behind the scenes if you don't explicitly disable it, as well as keeping the .edmx in sync when changes to the database are made. It was sometimes easier to wipe the entire surface and re-create the entities, which took about 3 minutes, so it wasn't a big deal. This was all with Entity Framework 4.1. I'd be really interested in hearing about your end choice and experience as well. And regarding your question on NHibernate, that's a can-of-worms question in my opinion; you'll get barking on both sides of the fence... I hear a lot of people bashing EF for the sake of bashing without working through the challenges and understanding the nuances unique to EF itself, and although I've never used NHibernate in production, generally, if you have to manually and explicitly create things like mappings, you're going to get more fine-grained control; however, if you can drag and drop, generate, and start CRUD'ing and querying using LINQ, I couldn't care less about granularity. I hope this helps.
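To make the "separate context per intent" idea concrete, here is a minimal code-first sketch; the entity and context names are hypothetical, not the poster's actual schema, and it assumes the EntityFramework package that provides System.Data.Entity:

    using System.Data.Entity;

    // Minimal placeholder entities, just so the sketch is self-contained.
    public class Invoice  { public int Id { get; set; } public decimal Total { get; set; } }
    public class Customer { public int Id { get; set; } public string Name { get; set; } }
    public class User     { public int Id { get; set; } public string Login { get; set; } }

    // Context scoped to one intent (billing): it maps only the entities that workflow touches.
    public class BillingContext : DbContext
    {
        public DbSet<Invoice> Invoices { get; set; }
        public DbSet<Customer> Customers { get; set; }
        public DbSet<User> Users { get; set; } // a shared table can be mapped again here, deliberately
    }

    // Context scoped to another intent (administration): small and independent of billing.
    public class AdminContext : DbContext
    {
        public DbSet<User> Users { get; set; }
    }

Each context stays small enough to load and maintain, at the cost of mapping a handful of shared tables (like User) in more than one place.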
{ "source": [ "https://softwareengineering.stackexchange.com/questions/164128", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/3124/" ] }
164,256
I typically agree with most code analysis warnings, and I try to adhere to them. However, I'm having a harder time with this one: CA1031: Do not catch general exception types. I understand the rationale for this rule. But, in practice, if I want to take the same action regardless of the exception thrown, why would I handle each one specifically? Furthermore, if I handle specific exceptions, what if the code I'm calling changes to throw a new exception in the future? Now I have to change my code to handle that new exception, whereas if I simply caught Exception my code doesn't have to change. For example, if Foo calls Bar, and Foo needs to stop processing regardless of the type of exception thrown by Bar, is there any advantage in being specific about the type of exception I'm catching? Maybe a better example: public void Foo() { // Some logic here. LogUtility.Log("some message"); } public static void Log(string message) { try { // Actual logging here. } catch (Exception ex) { // Eat it. Logging failures shouldn't stop us from processing. } } If you don't catch a general exception here, then you have to catch every type of exception possible. Patrick has a good point that OutOfMemoryException shouldn't be dealt with this way. So what if I want to ignore every exception but OutOfMemoryException?
These rules are generally a good idea and thus should be followed. But remember that these are generic rules. They don't cover all situations; they cover the most common situations. If you have a specific situation and you can make the argument that your technique is better (and you should be able to write a comment in the code to articulate your argument for doing so), then do so (and then get it peer reviewed). On the counter side of the argument, I don't see your example above as a good situation for doing so. If the logging system is failing (presumably while logging some other exception), then I probably do not want the application to continue. Exit and print the exception to the output so the user can see what happened.
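As for the last part of the question (swallow everything except the truly fatal exceptions), one way to express that directly is an exception filter. This is only a sketch: it requires C# 6 or later, and which types you treat as "fatal" is up to you.

    using System;

    public static class LogUtility
    {
        public static void Log(string message)
        {
            try
            {
                // Actual logging here.
            }
            catch (Exception ex) when (!(ex is OutOfMemoryException))
            {
                // Swallowed: logging failures shouldn't stop processing.
                // OutOfMemoryException is not caught, because the filter returns false for it.
            }
        }
    }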
{ "source": [ "https://softwareengineering.stackexchange.com/questions/164256", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/29526/" ] }
164,273
http://www.dartlang.org/ I've checked out the site very briefly, and got curious. Are there any advantages to using Dart? Is it just a replacement for JavaScript? It looks like simpler Java. Writing quite a lot of C# at work, the language feels very much like what I'm used to, so the syntax looks like a breeze to learn. Does anybody have any opinions on or experiences with the language? (Compared to CoffeeScript - I'm not into Ruby syntax - Dart's syntax looks more familiar to me.)
Thanks for your question! Full disclaimer: I work on the Dart team. Probably the best advantage Dart has today is that it's familiar to C#, Java, C++, and most JavaScript developers. Many developers have a set of expectations around their language (class-based OO, lexical scope, familiar syntax) and their tools (code completion, refactoring, code navigation, debugging) that Dart aims to meet and exceed. Here are some things that I like about the language: Optional static types. When I'm prototyping or simply writing small scripts, I don't use a ton of static types. I just don't need 'em, and I don't want to get bogged down with the ceremony. However, some of those scripts evolve into bigger programs. As the scripts scale, I tend to want classes and static type annotations. Innocent until proven guilty. Dart tries hard to minimize the situations that result in a compile-time error. Many conditions in Dart are warnings, which don't stop your program from running. Why? In keeping with web development fashion, it's imperative to allow developers to try a bit of code, hit reload, and see what happens. The developer shouldn't have to first prove the entire program is correct before just testing a corner of the code. Lexical scope. This is awesome, if you're not used to it. Simply put, the visibility of variables, and even this, is defined by the program structure. This eliminates a class of puzzlers in traditional web programming. No need to re-bind functions to keep this to what you think or expect. Real classes baked into the language. It's clear most developers want to work in classes, as most web development frameworks offer a solution. However, a "class" from framework A isn't compatible with framework B, in traditional web development. Dart uses classes naturally. Top-level functions. One painful part of Java is that everything has to be put into a class. This is a bit artificial, especially when you want to define a few utility functions. In Dart, you can define functions at the top level, outside of any class. This makes library composition feel more natural. Classes have implicit interfaces. The elimination of explicit interfaces simplifies the language. No more need to define IDuck everywhere; all you need now is a class Duck. Because every class has an implicit interface, you can create a MockDuck that implements Duck. Named constructors. You can give constructors names, which really helps with readability. For example: var duck = new Duck.fromJson(someJsonString). Factory constructors. The factory pattern is quite common, and it's nice to see this baked into the language. A factory constructor can return a singleton, an object from a cache, or an object of a sub-type. Isolates. Gone are the days of sharing mutable state between threads (an error-prone technique). A Dart isolate is an isolated memory heap, able to run in a separate process or thread. Isolates communicate by sending messages over ports. Isolates work in the Dart VM and can compile to web workers in HTML5 apps. Dart compiles to JavaScript. This is critically important, as JavaScript is the lingua franca of the web. Dart apps should run across the modern web. Strong tooling. The Dart project also ships an editor. You'll find code completion, refactoring, quick fixes, code navigation, debugging, and more. Also, IntelliJ has a Dart plugin. Libraries. You can organize Dart code into libraries, for easier namespacing and reusability. Your code can import a library, and libraries can re-export. String interpolation.
This is just a nice feature, making it easy to compose a string: var msg = "Hello $friend!"; noSuchMethod. Dart is a dynamic language, and you can handle arbitrary method calls with noSuchMethod(). Generics. Being able to say "this is a list of apples" gives your tools much more info to help you and catch potential errors early. Luckily, though, Dart's generics are simpler than what you're probably used to. Operator overloading. Dart classes can define behavior for operators like + or -. For example, you could write code like new Point(1,1) + new Point(2,2). Having said all that, there are many more JavaScript libraries out there. Personally, I believe there's room on the web for many languages. If the app is awesome, and it runs in the majority of modern browsers, I don't care as much what language it is written in. As long as you, the developer, are happy, productive, and launching on the web, that's what matters! :)
{ "source": [ "https://softwareengineering.stackexchange.com/questions/164273", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/37175/" ] }
164,332
I get confused when people try to make a distinction between compiled languages and managed languages. From experience, I understand that most consider compiled languages to be C and C++, while managed languages are Java and C# (there are obviously more, but these are just a few examples). But what exactly is the core difference between the two types of languages? My understanding is that any program, regardless of what language you use, is essentially "compiled" into low-level machine code which is then interpreted, so does that kinda make managed languages a subset of compiled languages (that is, all managed languages are compiled languages but not the other way around)?
The difference is not in "compiled" vs. "managed"; these are two orthogonal axes. By "managed" they normally mean the presence of garbage-collected memory management and/or the presence of a virtual machine infrastructure. Both have absolutely nothing to do with compilation and whatever people deem to be its opposite. All these "differences" are quite blurred, artificial, and irrelevant, since it is always possible to mix managed and unmanaged memory in a single runtime, and the difference between compilation and interpretation is very vague too.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/164332", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/62049/" ] }
164,429
It's like this mythical thing that a dominating portion of developers say is just the best option for back-end development, a part of development about which I know virtually nothing beyond the absolute basics. So I've looked up PHP tutorials a bunch of times, trying to figure out why it's so powerful and common, but it's annoying as hell-- all the tutorials treat you like a new programmer. You know, this is how you make an If Else statement, here's a for loop, etc. The "Advanced Topics" show you how to make POST and GET statements and whatnot. But there must be more to it! I don't get it! That's practically no different from JavaScript. What am I missing about this language? What else can it do? Where's the power and versatility? I've heard it called a function soup; where are all the functions? Please chide me. I'm clearly missing something.
PHP is terrible. For more information you can check "PHP: a fractal of bad design". I have been working with it for close to 13 years now, the last 5 not as a hobby; I have not even dared to jump deep into its OO capabilities, and I am still finding bugs, gotchas, ahas, and plain insane behavior on a daily basis. So if you want to find something unique in PHP that elevates it beyond the other languages - there isn't anything. So why did it become the alpha dog then? You have no choice - unless you start a project from scratch, there is usually something already written in PHP that works (inertia). It was extremely easy to prototype and iterate - this is more of a historical reason; there are other languages that are good at that now, but c. 1999 there weren't many of them. It played nice with Apache and MySQL, and it was free - maybe the most important thing in this list. Hosting was easy to find. Easy database access - there were dark times when there were such things as DAO with wild recordsets out to get you. For an MFC developer, the idea that your whole DB layer was mysql_connect and mysql_query with simple SQL was a godsend. A simple tool for a simple task - PHP is really good at getting data from a DB and putting it into a punched hole in your HTML. And at the dawn of time, that was website development.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/164429", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/56275/" ] }
164,447
Can anyone suggest a common name for Edit, Save, Delete, Select? I want to create an interface in C# which will support all these methods. My context is: I am developing an invoice application in which I need a Product to be created, deleted, updated, and selected.
CRUD is usually the technical term that is used to describe create/read/update/delete functionality. ... create, read, update and delete (CRUD) (sometimes called SCRUD with an "S" for Search) are the four basic functions of persistent storage. Sometimes CRUD is expanded with the words retrieve instead of read, modify instead of update, or destroy instead of delete. It is also sometimes used to describe user interface conventions that facilitate viewing, searching, and changing information; often using computer-based forms and reports... The acronym may be extended to CRUDL to cover listing of large data sets which bring additional complexity such as pagination when the data sets are too large to hold easily in memory. Another variation of CRUD is BREAD, an acronym for "Browse, Read, Edit, Add, Delete"...
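For the concrete C# part of the question, a minimal sketch of such an interface could look like this (the names are only suggestions; "Select" from the question maps to Read here):

    using System.Collections.Generic;

    // Generic CRUD contract for an entity type T with key type TKey.
    public interface ICrudRepository<T, TKey>
    {
        void Create(T item);        // Create / Save
        T Read(TKey id);            // Read / Select
        IEnumerable<T> ReadAll();   // List (the "L" in CRUDL)
        void Update(T item);        // Update / Edit
        void Delete(TKey id);       // Delete
    }

    // The invoice application's product type would then implement or consume it, e.g.:
    // public class ProductRepository : ICrudRepository<Product, int> { ... }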
{ "source": [ "https://softwareengineering.stackexchange.com/questions/164447", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/41730/" ] }
164,472
To be interchangeable and testable, services with logic normally need to have an interface, e.g. public class FooService: IFooService { ... } Design-wise, I agree with this, but one of the things that bothers me with this approach is that for one service you will need to declare two things (the class and the interface), and in our team, normally two files (one for the class and one for the interface). Another discomfort is the difficulty in navigation, because using "Go to definition" in the IDE (VS2010) will point to the interface (since other classes refer to the interface), not the actual class. I was thinking that writing IFooService in the same file as FooService would reduce the above weirdness. After all, IFooService and FooService are very related. Is this a good practice? Is there a good reason that IFooService must be located in its own file?
It doesn't have to be in its own file, but your team should decide on a standard and stick to it. Also, you're right that "Go to definition" takes you to the interface, but if you have Resharper installed, it's only one click to open a list of derived classes/interfaces from that interface, so it's not a big deal. That's why I keep the interface in a separate file.
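If a team does settle on co-locating them, it amounts to nothing more than this (a trivial sketch of one file holding both declarations):

    // FooService.cs - interface and implementation kept together by team convention.
    public interface IFooService
    {
        void DoWork();
    }

    public class FooService : IFooService
    {
        public void DoWork()
        {
            // Actual service logic here.
        }
    }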
{ "source": [ "https://softwareengineering.stackexchange.com/questions/164472", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/8486/" ] }
164,618
For the past 6 months or more, I've been seeing many projects hosted at sourceforge.net as well as other hosting sites "move to GitHub". A mere Google search with the phrase "Moved to GitHub" returns several results containing the text "moved to GitHub". This is very confusing for me, and I'm wondering: why exactly are people moving? Does it mean that GitHub is better, or is there some special advantage I'm not seeing?
This is a symptom of a wider migration towards distributed version control systems. Some websites which traditionally hosted non-distributed VCS (e.g. CodePlex and SourceForge) were a little slow in adding support for DVCS (e.g. Git or Mercurial). So, people who wanted to use DVCS for their project were forced to migrate their projects over to the providers which supported them (e.g. GitHub or Bitbucket). GitHub was one of the first to offer DVCS support, and so naturally a lot of people migrated their code there in order to take advantage of it. Those other websites are only now starting to catch up to DVCS (CodePlex, for example, now supports Mercurial and Git), but they are still a way behind in terms of features such as forking and submitting pull requests. To really take advantage of DVCS, GitHub and Bitbucket are still the best options.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/164618", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/19276/" ] }
164,629
I am developing an ASP.NET website. When I used the CSS property "word-wrap", Visual Studio 2010 showed a warning: Validation (CSS 2.1): 'word-wrap' is not a known CSS property name. When I tested the website, it worked fine. However, can there be any issue in using this property and ignoring the warning?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/164629", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/58676/" ] }
164,843
For some systems, the time value 9999-12-31 is used as the "end of time", i.e. the latest time that the computer can calculate. But what if it changes? Wouldn't it be better to define this time as a built-in variable? In C and other programming languages there is usually a variable such as MAX_INT or similar to get the largest value an integer could have. Why is there not something similar for MAX_TIME, i.e. a variable set to the "end of time", which for many systems is usually 9999-12-31? To avoid the problem of hardcoding a wrong year (9999), could these systems introduce a variable for the "end of time"? Real example: End of validity date: 31/12/9999 (official documents are listed like this). The blogger wants to write a page that is always on top, the welcome page, so it's given a date as far in the future as possible: 3000? Yes, the welcome page which you're facing is posted on 1 January 3000, so this page will be kept on the top of the blog forever =) It was actually posted on 31 August 2007.
Ask yourself why you need such a variable in the first place. Most likely, you are lying about your data: whenever you need an "end of time" variable, you are not referring to the actual end of time; rather you are expressing things like "there is no upper bound for this date", "this event continues indefinitely", or similar. The correct solution, then, is to express these intents directly instead of relying on a magic value: use nullable date types (where null indicates "no end date set"), add an "indefinite" boolean field, use a polymorphic wrapper (which can be either a real date or a special "indefinite" value), or whatever your programming language has to offer. Of course, the correct solution is not always feasible, so you might end up using a magic value after all, but when you do, you have to decide on a suitable value on a per-case basis, because which dates do and do not make sense depends on the domain you're modelling - if you're storing log timestamps, 01/01/2999 is a reasonable "end of time"; the chances of your application still being used almost 1000 years from now are, I would reckon, practically zero. Similar considerations go for calendar applications. But what if your software is to handle scientific data, say, long-term predictions about the Earth's climate? Those might actually want to look a thousand years into the future. Or take it one step further: astronomy, a field where it is perfectly normal to reason in very large timespans on the order of billions of years, both into the past and the future. For those, 01/01/2999 is a perfectly ridiculous arbitrary maximum. OTOH, a calendar system that is able to handle timespans ten trillion years into the future is hardly practical for a dentist appointment tracking system, if only because of storage capacity. In other words, there is no single best choice for a value that is wrong and arbitrary by definition to begin with. This is why it is really uncommon to see one defined in any programming language; those that do usually don't name it "end of time", but rather something like DATE_MAX (or Date.MAX), and take it to mean "the largest value that can be stored in the date datatype", not "the end of time" or "indefinitely".
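A small sketch of "express the intent directly" with a nullable date type (C# shown here; the type and property names are illustrative):

    using System;

    public class Subscription
    {
        public DateTime Start { get; set; }

        // null means "no end date set" - no 9999-12-31 sentinel value needed.
        public DateTime? End { get; set; }

        public bool IsActiveOn(DateTime day)
        {
            return day >= Start && (End == null || day <= End.Value);
        }
    }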
{ "source": [ "https://softwareengineering.stackexchange.com/questions/164843", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/12893/" ] }
164,844
We develop small to large visualization projects for different tasks and industries, and sometimes, while rewriting them a couple of times in the process, we hit walls because we discover that we need to add a lot of code to support new requirements. Now we have established a design process that seems to work well (at least we reduced the development time for each new project quite a bit), but we're still left scratching our heads over this question: what exactly should we test when testing visualizations? Whether everything that we want to explore is on the screen (bounded visualizations)? Whether the data is OK - whether the data is valid (that's one of the nice things about visualizations: you can spot errors in your datasets)? Usability? User interaction? Code quality? I can tell you for sure that a simple check of the code quality is certainly not enough! Is there a classic paper/book about how to test visualizations? Also, do you happen to know about classic design patterns for visualizations (except the obvious ones like Pub-Sub)?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/164844", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/57883/" ] }
164,886
I have been programming for a couple of years and have often found myself facing a dilemma. There are two solutions: one is the simple approach, easier to understand and maintain; it involves some redundancy and some extra work (extra IO, extra processing) and therefore is not the most optimal solution. The other uses a complex approach, difficult to implement, often involving interaction between lots of modules, and is a performance-efficient solution. Which solution should I strive for when I do not have a hard performance SLA to meet and even the simple solution can meet the performance SLA? I have felt disdain among my fellow developers for the simple solution. Is it good practice to come up with the most optimal complex solution if your performance SLA can be met by a simple solution?
Which solution should I strive for when I do not have hard performance SLA to meet and even the simple solution can meet the performance SLA? The simple one. It meets spec, it's easier to understand, it's easier to maintain, and it's probably a whole lot less buggy. What you are doing in advocating the performance efficient solution is introducing speculative generality and premature optimization into your code. Don't do it! Performance goes against the grain of just about every other software engineering 'ility' there is (reliability, maintainability, readability, testability, understandability, ...). Chase performance when testing indicates that there truly is a need to chase after performance. Do not chase performance when performance doesn't matter. Even if it does matter, you should only chase performance in those areas where the testing indicates that a performance bottleneck exists. Do not let performance problems be an excuse to replace simple_but_slow_method_to_do_X() with a faster version if that simple version doesn't show up as a bottleneck. Enhanced performance is almost inevitably encumbered with a host of code smell problems. You've mentioned several in the question: A complex approach, difficult to implement, higher coupling. Are those really worth dragging in?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/164886", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/36922/" ] }
164,988
Note that I (try to) mark up as semantically as possible because I like the way it looks and feels, but not because I'm aware of any other stunning advantages. The point of my question is to be able to educate others. Well, I've seen a lot of articles and tutorials which often state "Let's mark this up in the most semantic way possible". But a strange thought came to me: why? Why would one need (or want) to bother with the specific elements which convey the correct semantic meaning? Specifically, I'm referring to the new HTML5 elements, such as <time>, <output>, or <address>. Especially if the page "works" (it renders nicely in all browsers), why would I want to use elements like <time> or <address>, where nothing at all (or at worst, a generic <span>) works just as nicely? I'm asking this because I'm seeing a multitude of (very popular) websites (this one included) which do not follow these so-called best practices.
Free functionality: Properly using <label>s means you can click the label to enter the text field. Many browsers will add logical default functionality to many tags per the official specification, meaning you can use fewer JavaScript plugins and write less code than a site made entirely out of <div>s and <span>s. Accessibility: Related to free functionality, semantics mean a lot to screen reader software. Text in front of an input field won't be read in quite the same way as a <label> will. Screen readers will ignore most of your CSS, so it's mostly up to the structure of your HTML. Logical CSS: Why use a div #header when you can use a <header> and style that directly? Semantic tags make it easier to mark things up and make your style much more portable; if you have a certain style for strikeout and always use <del> elements, that style carries over everywhere. <del> means the same thing to everyone, but everyone will name their .deletedText class differently. It also helps keep everyone on the same page in large projects; no one enjoys learning other people's esoteric class naming conventions. SEO: Search engines like Google have made increased use of semantic HTML and metadata. Google's Rich Snippets also use special metadata meant to convey semantic content. Why it's not all that common: It takes work, and people are used to judging a website by how it looks and works. Often there's no accounting for semantics, because the people who write the business case for apps don't understand it or why it's important. It's very hard for non-technical people to understand or evaluate HTML semantics. If a website looks good and appears to work, why fret? Many people may not even know there is anything more to it. Similar to accessibility, this tends to get ignored until someone on your team really understands it. If you want semantic HTML to be a priority on your project, you need to present the case for it. Showing your team/boss how your website works in a screen reader is also a helpful tool.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/164988", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/38223/" ] }
165,002
It has been a year or so since I last took an assembly class. In that class, we were using MASM with the Irvine libraries to make it easier to program in. After we'd gone through most of the instructions, the professor said that the NOP instruction essentially did nothing and not to worry about using it. Anyway, around midterm he had some example code that wouldn't run properly, so he told us to add a NOP instruction, and it worked fine. I asked him after class why and what it actually did, and he said he didn't know. Anybody know?
Oftentimes NOP is used to align instruction addresses. This is usually encountered, for example, when writing shellcode to exploit a buffer overflow or format string vulnerability. Say you have a relative jump 100 bytes forward, and you make some modifications to the code. The chances are that your modifications mess up the jump target's address, and as such you'd also have to change the aforementioned relative jump. Here, you can add NOPs to push the target address forward. If you have multiple NOPs between the target address and the jump instruction, you can remove NOPs to pull the target address backward. This would not be a problem if you were working with an assembler which supports labels. You can simply do JXX someLabel (where JXX is some conditional jump) and the assembler will replace someLabel with the address of that label. However, if you simply modify the assembled machine code (the actual opcodes) by hand (as sometimes happens when writing shellcode), you also have to change the jump instruction manually. Either you modify it, or you move the target code address by using NOPs. Another use case for the NOP instruction would be something called a NOP sled. In essence, the idea is to create a large enough array of instructions which cause no side effects (such as NOP, or incrementing and then decrementing a register) but advance the instruction pointer. This is useful, for example, when one wants to jump to a certain piece of code whose address isn't known. The trick is to place the said NOP sled in front of the target code and then jump somewhere into the said sled. What happens is that execution hopefully continues from the array which has no side effects and traverses forward instruction by instruction until it hits the desired piece of code. This technique is commonly used in the aforementioned buffer overflow exploits, and especially to counter security measures such as ASLR. Yet another particular use for the NOP instruction is when one is modifying the code of some program. For example, you can replace parts of conditional jumps with NOPs and as such circumvent the condition. This is an often-used method when "cracking" copy protection of software. At its simplest, it's just about taking the assembly code generated for an if(genuineCopy) ... line of code and replacing its instructions with NOPs and... Voilà! No checks are made and the non-genuine copy works! Note that in essence both examples, shellcode and cracking, do the same thing: modify existing code without updating the relative addresses of operations which rely on relative addressing.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/165002", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/63693/" ] }
165,185
I've read many times on the web that if your language doesn't support some concept (for example, object orientation, or maybe function calls), but that concept is considered good practice in another context, you should do it anyway. The only problem I can see right now is that other programmers may find your code too different from the usual, making it hard for them to work with. What other problems do you think may arise from this?
One of the problems is that you may find yourself writing lots of code to express something the way you would in another language, while there is a more straightforward way in the language you use. For example, in an answer on Stack Overflow, I explained how code contracts, a concept used in the .NET Framework, can be partially emulated in PHP, which doesn't support them. I ended up writing lots of code for nothing, since the same thing was doable with simple arrays. More generally, each language has its own culture, its own best practices, its own style. If I started writing C# code like it was C, it would be ugly. If I approach Haskell as a Java developer who was forced to use Haskell, but don't want to understand its strengths and just want to clone the concepts of Java, the code I write will suffer; etc. There is nothing wrong with trying to enhance the language (for example, enhancing C# by introducing units of measure like in F#), but if you are doing it too much, you should maybe choose a different language which actually fits your needs.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/165185", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/51350/" ] }
165,217
This may be subjective and likely to be closed, but I still wanted to know if it's really helpful to read Structure and Interpretation of Computer Programs. The book does not use Java. Not that I wanted to learn Java. I am just curious to know if it will be a useful read for becoming a better programmer, and what the things are that I can gain from the book, or are there any other alternatives to this book more suited to Java programmers?
Well, I don't know if this book will help you, but when I worked my way through that book about 20 years ago, it definitely improved my programming skills (independently of any programming language). And I guess a Java programmer especially will get some new insights he/she won't get by sticking only to Java. Joel Spolsky wrote a nice article in 2005 about Java and SICP which may be of interest to you: http://www.joelonsoftware.com/articles/ThePerilsofJavaSchools.html
{ "source": [ "https://softwareengineering.stackexchange.com/questions/165217", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/20163/" ] }
165,356
I've found the SOLID principles quite useful when thinking about object-oriented design. Is there a similar / equivalent set of language-agnostic principles tailored for functional programming?
SOLID turns out to be a good idea for the functional/imperative realms too. SRP - 'Only do one thing' was taken from imperative programming in the first place. Having small, focused functions is good. OCP - Allowing you to change behaviors without modifying code is good. Functional programming uses higher-order functions more than inheritance, but the principle holds. LSP - Abiding by some interface contract is just as good in functional programming as in object-oriented programming. If a sort function takes a comparator, then you would expect the usual comparator behavior: zero means equal, less-than produces negative results, and greater-than produces positive results. ISP - Most functional languages still have structs. Specifying the smallest set of data required by a function is still good practice. Requiring the least specific interface to the data (why use Lists of ints when Enumerations of T work just as well?) is still good practice. DIP - Specifying parameters to a function (or a higher-order function to retrieve them) rather than hard-coding the function to go get some value is just as good in functional programming as in object-oriented programming. And even when doing object-oriented programming, many of these principles apply to the design of methods in the objects too.
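A tiny sketch of OCP/DIP in a functional style, with behavior varied by passing a function instead of subclassing (C# here; the pricing example itself is invented for illustration):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public static class Pricing
    {
        // The discount policy is injected as a function, so new policies need
        // no change to this code and no inheritance hierarchy.
        public static decimal Total(IEnumerable<decimal> prices, Func<decimal, decimal> discount)
        {
            return prices.Sum(p => discount(p));
        }
    }

    public static class Demo
    {
        public static void Main()
        {
            var prices = new[] { 10m, 20m, 30m };
            Func<decimal, decimal> noDiscount = p => p;
            Func<decimal, decimal> tenPercentOff = p => p * 0.9m;

            Console.WriteLine(Pricing.Total(prices, noDiscount));    // 60
            Console.WriteLine(Pricing.Total(prices, tenPercentOff)); // 54.0
        }
    }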
{ "source": [ "https://softwareengineering.stackexchange.com/questions/165356", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/6909/" ] }
165,380
I know that to learn a language you can simply buy a book, follow the examples, and whenever possible try the exercises. But what I'm really looking for is how to master the language once you've learned it. Now I know that experience is one major factor, but what about learning the internals of the language, what the underlying structure is, etc.? There are articles out there saying read this book, read that book, make this game and that game. But to me this doesn't amount to mastering a language. I want to be able to read other people's code and understand it, no matter how hard that is. To understand when to use one function and when to use another, etc. The list could go on and on, but I believe I've made the point. :) And finally, take whatever language as an example if needed, though it would be best if C were taken as the example.
I have to answer, "All of the above." People argue about whether coding is an art, a craft, an engineering discipline, or a branch of mathematics, and I think it's fairest to say it's some of each. As such, the more techniques you bring to mastery of the language, the better. Here is a partial list: Use the language all day, every day. Usually this means being employed full-time in the language. Read all you can about the language, especially "best practices" and idioms. Join a users group to talk with others about the language and what they do with it. Work with other people's code! There is no faster way to learn what not to do in a language than to have to clean up after someone who did something awful. Support the code you write - every bug becomes a tour of your worst decisions! Study computer science and languages in general. Learn a very different language. A great complement to C would be a functional language like Lisp. This will turn the way you think about your procedural language inside out. Learn to use the frameworks and APIs available for that language. Take the time to do your own experiments with the language. SICP is not applicable to C, but the attitude of learning a language by testing its limits is a very productive one. Read the history of the language to learn why it was made the way it is. Attend conferences to hear the language authors speak, or to hear what industry leaders are doing with the language. Take a class in the language. Teach the language to others (thanks to Bryan Oakley). In summary, do everything you can think of. There is no way to know everything about most languages. Every learning technique you use brings an additional perspective to your understanding.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/165380", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/52568/" ] }
165,408
My current job is mostly writing GUI test code for various applications that we work on. However, I find that I tend to copy and paste a lot of code within tests. The reason for this is that the areas I'm testing tend to be similar enough to need repetition but not quite similar enough to encapsulate code into methods or objects. I find that when I try to use classes or methods more extensively, tests become more cumbersome to maintain and sometimes outright difficult to write in the first place. Instead, I usually copy a big chunk of test code from one section and paste it into another, and make any minor changes I need. I don't use more structured ways of coding, such as using more OO principles or functions. Do other coders feel this way when writing test code? Obviously I want to follow the DRY and YAGNI principles, but I find that test code (automated test code for GUI testing anyway) can make these principles tough to follow. Or do I just need more coding practice and a better overall system of doing things? EDIT: The tool I'm using is SilkTest, which uses a proprietary language called 4Test. These tests are mostly for Windows desktop applications, but I have also tested web apps using this setup.
Copy-pasted and then edited test cases are often fine. Tests should have as few external dependencies as possible, and be as straightforward as possible. Test cases tend to change with time, and previously almost identical test cases may suddenly diverge. Updating one test case without having to worry about breaking other cases is a Good Thing. Of course, boilerplate code which is identical in many test cases and has to change in concert can and should be factored out.
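What "factor out only the identical boilerplate" can look like in practice, as a sketch (NUnit-style C# rather than 4Test, and the app-driver object is entirely hypothetical):

    using NUnit.Framework;

    [TestFixture]
    public class InvoiceScreenTests
    {
        private FakeApp app; // hypothetical driver standing in for the GUI tool

        [SetUp]
        public void LaunchAndLogIn()
        {
            // Boilerplate that is identical in every case and must change in concert:
            // factored out once, here.
            app = new FakeApp();
            app.Login("tester", "secret");
        }

        [Test]
        public void SavingAnInvoiceShowsConfirmation()
        {
            // Case-specific steps stay inline, even if they resemble other tests,
            // so this test can diverge later without breaking its neighbours.
            app.Open("Invoices");
            app.Click("New");
            app.Type("Amount", "100");
            app.Click("Save");
            Assert.IsTrue(app.IsVisible("Saved"));
        }
    }

    // Minimal stand-in so the sketch is self-contained; a real suite would drive the actual GUI.
    public class FakeApp
    {
        public void Login(string user, string password) { }
        public void Open(string screen) { }
        public void Click(string control) { }
        public void Type(string control, string text) { }
        public bool IsVisible(string text) { return true; }
    }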
{ "source": [ "https://softwareengineering.stackexchange.com/questions/165408", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/36853/" ] }
165,418
I've found a software library licensed under the new BSD license. I want to use it in my closed-source commercial project. Can I do it?
Did you read the license? Because it's pretty short and I think easy to understand. Unless your lawyer tells you otherwise, I'd say that yes, you can use the code, but you have to put their notice & disclaimer in your documentation (about box, whatever).
{ "source": [ "https://softwareengineering.stackexchange.com/questions/165418", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/10071/" ] }
165,444
I have created a simple MVC Java application that adds records to a database through data forms. My app collects data, validates it, and stores it. This is because the data is being sourced online from different users. The data is mostly numeric in nature. Now, on the numeric data being stored in the database (SQL Server), I want my app to perform computations and display the results. The user is not interested in how the computations are done, so they must be encapsulated. The user must only be able to view the simple computed data (for example, column A data minus column B data, divided by column C data). I know how to write stored procedures for this, but I want a three-tier app. I want the data that I put into the database as a record to be worked upon by performing calculations on it. The original data should remain unaffected, while the new data, post-calculations, must be stored as a new entity record in the database. Where should I write the code for this background calculation? As it is rules and business logic, should I put it in new JavaBeans files?
The business logic should be placed in the model, and we should be aiming for fat models and skinny controllers. As a starting point, we should start from the controller logic. For example: on update, your controller should direct your code to the method/service that delivers your changes to the model. In the model, we may easily create helper/service classes where the application business rules or calculations can be validated. A conceptual summary: The controller is for application logic - the logic which is specific to how your application wants to interact with the "domain of knowledge" it belongs to. The model is for logic that is independent of the application. This logic should be valid in all possible applications of the "domain of knowledge" it belongs to. Thus, it is logical to place all business rules in the model.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/165444", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/63850/" ] }
165,543
Advanced compilers like gcc compile code into machine-readable files according to the language in which the code has been written (e.g. C, C++, etc.). In fact, they interpret the meaning of the code according to the libraries and functions of the corresponding languages. Correct me if I'm wrong. I wish to better understand compilers by writing a very basic compiler (probably in C) to compile a static file (e.g. Hello World in a text file). I tried some tutorials and books, but all of them are for practical cases. They deal with compiling dynamic code with meanings connected to the corresponding language. How can I write a basic compiler to convert a static text into a machine-readable file? The next step will be introducing variables into the compiler; imagine that we want to write a compiler which compiles only some functions of a language. Suggestions of practical tutorials and resources are highly appreciated :-)
Intro A typical compiler does the following steps: Parsing: the source text is converted to an abstract syntax tree (AST). Resolution of references to other modules (C postpones this step till linking). Semantic validation: weeding out syntactically correct statements that make no sense, e.g. unreachable code or duplicate declarations. Equivalent transformations and high-level optimization: the AST is transformed to represent a more efficient computation with the same semantics. This includes e.g. early calculation of common subexpressions and constant expressions, eliminating excessive local assignments (see also SSA), etc. Code generation: the AST is transformed into linear low-level code, with jumps, register allocation and the like. Some function calls can be inlined at this stage, some loops unrolled, etc. Peephole optimization: the low-level code is scanned for simple local inefficiencies which are eliminated. Most modern compilers (for instance, gcc and clang) repeat the last two steps once more. They use an intermediate low-level but platform-independent language for initial code generation. Then that language is converted into platform-specific code (x86, ARM, etc.) doing roughly the same thing in a platform-optimized way. This includes e.g. the use of vector instructions when possible, instruction reordering to increase branch prediction efficiency, and so on. After that, object code is ready for linking. Most native-code compilers know how to call a linker to produce an executable, but it's not a compilation step per se. In languages like Java and C#, linking may be totally dynamic, done by the VM at load time. Remember the basics: make it work, make it beautiful, make it efficient. This classic sequence applies to all software development, but bears repetition. Concentrate on the first step of the sequence. Create the simplest thing that could possibly work. Read the books! Read the Dragon Book by Aho and Ullman. This is a classic and is still quite applicable today. Modern Compiler Design is also praised. If this stuff is too hard for you right now, read some intros on parsing first; usually parsing libraries include intros and examples. Make sure you're comfortable working with graphs, especially trees. These things are the stuff programs are made of on the logical level. Define your language well Use whatever notation you want, but make sure you have a complete and consistent description of your language. This includes both syntax and semantics. It's high time to write snippets of code in your new language as test cases for the future compiler. Use your favorite language It's totally OK to write a compiler in Python or Ruby or whatever language is easy for you. Use simple algorithms you understand well. The first version does not have to be fast, or efficient, or feature-complete. It only needs to be correct enough and easy to modify. It's also OK to write different stages of a compiler in different languages, if needed. Prepare to write a lot of tests Your entire language should be covered by test cases; effectively it will be defined by them. Get well-acquainted with your preferred testing framework. Write tests from day one. Concentrate on 'positive' tests that accept correct code, as opposed to detection of incorrect code. Run all the tests regularly. Fix broken tests before proceeding. It would be a shame to end up with an ill-defined language that cannot accept valid code. Create a good parser Parser generators are many. Pick whatever you want.
You may also write your own parser from scratch, but it is only worth it if the syntax of your language is dead simple. The parser should detect and report syntax errors. Write a lot of test cases, both positive and negative; reuse the code you wrote while defining the language. The output of your parser is an abstract syntax tree. If your language has modules, the output of the parser may be the simplest representation of 'object code' you generate. There are plenty of simple ways to dump a tree to a file and to quickly load it back. Create a semantic validator Most probably your language allows for syntactically correct constructions that may make no sense in certain contexts. An example is a duplicate declaration of the same variable or passing a parameter of a wrong type. The validator will detect such errors by looking at the tree. The validator will also resolve references to other modules written in your language, load these other modules, and use them in the validation process. For instance, this step will make sure that the number of parameters passed to a function from another module is correct. Again, write and run a lot of test cases. Trivial cases are as indispensable in troubleshooting as smart and complex ones. Generate code Use the simplest techniques you know. Often it's OK to directly translate a language construct (like an if statement) to a lightly-parametrized code template, not unlike an HTML template. Again, ignore efficiency and concentrate on correctness. Target a platform-independent low-level VM I suppose that you will ignore low-level stuff unless you're keenly interested in hardware-specific details. These details are gory and complex. Your options: LLVM: allows for efficient machine code generation, usually for x86 and ARM. CLR: targets .NET, multiplatform; has a good JIT. JVM: targets the Java world, quite multiplatform, has a good JIT. Ignore optimization Optimization is hard. Almost always, optimization is premature. Generate inefficient but correct code. Implement the whole language before you try to optimize the resulting code. Of course, trivial optimizations are OK to introduce. But avoid any cunning, hairy stuff before your compiler is stable. So what? If all this stuff is not too intimidating for you, please proceed! For a simple language, each of the steps may be simpler than you might think. Seeing a 'Hello world' from a program that your compiler created might be worth the effort.
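To make the parse-then-generate pipeline concrete, here is a deliberately tiny, self-contained sketch (C# used only as an example of "whatever language is easy for you"; the toy grammar and the stack-machine instruction names are invented for illustration). It parses arithmetic expressions with + and * into an AST and then emits code for an imaginary stack machine.

    using System;
    using System.Collections.Generic;

    // AST node types.
    abstract class Expr { }
    class Num : Expr { public int Value; }
    class Bin : Expr { public char Op; public Expr Left, Right; }

    class TinyCompiler
    {
        private readonly string src;
        private int pos;

        public TinyCompiler(string source) { src = source; }

        // Parsing. Grammar: expr := term ('+' term)*   term := factor ('*' factor)*   factor := digits
        public Expr ParseExpr()
        {
            var left = ParseTerm();
            while (Peek() == '+') { pos++; left = new Bin { Op = '+', Left = left, Right = ParseTerm() }; }
            return left;
        }

        private Expr ParseTerm()
        {
            var left = ParseFactor();
            while (Peek() == '*') { pos++; left = new Bin { Op = '*', Left = left, Right = ParseFactor() }; }
            return left;
        }

        private Expr ParseFactor()
        {
            int start = pos;
            while (pos < src.Length && char.IsDigit(src[pos])) pos++;
            if (start == pos) throw new Exception("syntax error at position " + pos);
            return new Num { Value = int.Parse(src.Substring(start, pos - start)) };
        }

        private char Peek() { return pos < src.Length ? src[pos] : '\0'; }

        // Code generation: walk the AST and emit instructions for a toy stack machine.
        public static void Emit(Expr e, List<string> code)
        {
            if (e is Num n) { code.Add("PUSH " + n.Value); return; }
            var b = (Bin)e;
            Emit(b.Left, code);
            Emit(b.Right, code);
            code.Add(b.Op == '+' ? "ADD" : "MUL");
        }

        static void Main()
        {
            var ast = new TinyCompiler("2+3*4").ParseExpr();
            var code = new List<string>();
            Emit(ast, code);
            Console.WriteLine(string.Join(Environment.NewLine, code)); // PUSH 2, PUSH 3, PUSH 4, MUL, ADD
        }
    }

Semantic validation, optimization, and a real backend are all deliberately missing; the point is only how source text becomes a tree and the tree becomes linear low-level code.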
{ "source": [ "https://softwareengineering.stackexchange.com/questions/165543", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/36782/" ] }
165,649
I was looking at the WPF MVVM framework Caliburn.Micro and read that a lot of standard things are based on naming conventions . For example, automatic binding of properties in the View to properties in the ViewModel. Although this seems to be convenient (removes some boilerplate code), my first instinct reaction is that it isn't completely obvious to a new programmer that will read this code. In other words, the functionality of the application is not completely explained by its own code, but also by the documentation of the framework. EDIT: So this approach is called convention over configuration. Since I could not find any questions concerning this, I altered my question: My question is: Is convention over configuration a correct way of simplifying things, or is it violating some programming principles (and if so, which ones)?
I don't consider "an application should be fully explained by its own code" a fundamental programming principle. There are lots and lots of things which are not explained by just looking at the code of an application. Apart from knowing the basic things of the programming language itself (syntax and semantics), you need to know the conventions. If an identifier in Java starts with a capital letter, it is a type. There are lots of these conventions you need to know. Convention over configuration is about reducing the amount of decisions the programmer has to make about things. For some things this is obvious -- nobody would consider having a language where the capitalization of types is something you need to declare at the top of your program -- but for other things it is not so obvious. Balancing convention and configuration is a difficult task. Too much convention can make code confusing (take Perl's implicit variables, for example). Too much freedom on the programmer's side can make systems difficult to understand, since the knowledge gained from one system is rarely useful when studying another. A good example of where convention aids the programmer is when writing Eclipse plugins. When looking at a plugin I've never seen, I immediately know many things about it. The list of dependencies is in MANIFEST.MF, the extension points are in plugin.xml, the source code is under "src", and so on. If these things were up to the programmer to define, every single Eclipse plugin would be different, and code navigation would be much more difficult.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/165649", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/35590/" ] }
165,725
I am currently learning to use Git by reading Pro Git . Right now I'm learning about branching and tags. My question is when should I use a branch and when should I use a tag? For example, say I create a branch for version 1.1 of a project. When I finish and release this version, should I leave the branch to mark the release version? Or should I add a tag? If I add a tag, should I delete the version branch (assuming that it is merged into master or some other branch)?
In short: Best practice is branch out, merge often and always keep in sync. There are pretty clear conventions about keeping your code in separate branches from the master branch:
You are about to make an implementation of a major or disruptive change
You are about to make some changes that might not be used
You want to experiment on something that you are not sure will work
When you are told to branch out, others might have something they need to do in master
The rule of thumb is that after branching out, you should keep in sync with the master branch, because eventually you need to merge it back to master. In order to avoid a huge complicated mess of conflicts when merging back, you should commit often, merge often.
Good practices to follow
A successful Git branching model by Vincent Driessen has good suggestions. If this branching model appeals to you, consider the flow extension to git. Others have commented about flow.
Tagging practices
As you already know, Git gives you commit identifiers like 1.0-2-g1ab3183 but those are not tags! Tagging is done with git tag, and the tags that are created using git tag are the base for the commit identifiers git describe creates. In other words, in Git you don't tag branches. You are tagging commits. It is correct to say that a tag is just an annotated pointer to a commit. Let's look at a practical example that demonstrates it:

                /-- [v1.0]
               v
---.---.---.---S---.---A      <-- master
                \
                 \-.---B      <-- test

Let commit 'S' be the commit pointed to by tag 'v1.0'. This commit is both on branch 'master' and on branch 'test'. If you run "git describe" on top of commit 'A' (the top of the 'master' branch) you would get something like v1.0-2-g9c116e9. If you run "git describe" on top of commit 'B' (aka the 'test' branch) you would get something like v1.0-2-g3f55e41; that is the case with the default git-describe configuration. Note that this result is slightly different. v1.0-2-g9c116e9 means that we are at the commit with the shortened SHA-1 id 9c116e9, 2 commits after tag v1.0. There is no tag v1.0-2! If you want your tag to appear only on branch 'master', you can create a new commit (e.g. only update the default / fallback version information in GIT-VERSION-FILE) after the branching point of the 'test' branch. If you tag commits on the 'test' branch with e.g. 'v1.0.3', it would be visible only from 'test'.
References
I have found many, many useful blogs and posts to learn from. However, the professionally illustrated ones are rare. Thus, I would like to recommend a post - A successful Git branching model by @nvie. I have borrowed his illustration :)
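To make the branch-and-tag workflow concrete, here is a small command-line sketch; the branch name, tag name and messages are just examples, not part of the answer above.

```sh
# Illustrative only -- branch/tag names and messages are made up.

# Work on the next release in a branch, merging back when it is ready.
git checkout -b feature/reporting master
# ... commit work, keep in sync with master ...
git checkout master
git merge --no-ff feature/reporting

# Mark the released commit with an annotated tag (tags point at commits, not branches).
git tag -a v1.0 -m "Release 1.0"
git push origin v1.0          # tags are not pushed by default

# Later commits can be described relative to the nearest tag.
git describe                  # e.g. v1.0-2-g9c116e9
```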
{ "source": [ "https://softwareengineering.stackexchange.com/questions/165725", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/61509/" ] }
165,740
In the academic sense, do regular expressions qualify as a programming language? The motivation for my curiosity is an SO question I just looked at which asked "can regex do X?" and it made me wonder what can be said in the generic sense about the possible solutions using them. I am basically asking, "are regular expressions Turing complete"?
Regular Expressions are a particular kind of formal grammar used to parse strings and other textual information that are known as "Regular Languages" in formal language theory. They are not a programming language as such. They are more of a shorthand for coding that would otherwise be extremely tedious to implement and even more confusing than the sometimes arcane looking Regex. Programming Languages are typically defined as languages that are Turing Complete . Such languages must be able to process any computable function . Regex does not fit into this category. If you want a language that looks like Regex, try J.
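As a small illustration of the "regular language" limitation (my example, not the answerer's): a classic regex without recursion extensions can recognize patterns like "an optionally signed integer", but it cannot verify arbitrarily nested brackets, while a few lines of ordinary code can. Note that some modern engines (PCRE, .NET) add recursive and balancing constructs that go beyond regular languages, but that still falls well short of a general-purpose programming language.

```python
import re

# A regular language: an optionally signed run of digits. A regex handles this fine.
INTEGER = re.compile(r"[+-]?\d+$")
print(bool(INTEGER.match("-42")))        # True

# Balanced parentheses are context-free, not regular: a plain regex (no recursion
# extensions) cannot check arbitrary nesting, but a simple counter can.
def balanced(s):
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:
                return False
    return depth == 0

print(balanced("((a)(b))"))              # True
print(balanced("(()"))                   # False
```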
{ "source": [ "https://softwareengineering.stackexchange.com/questions/165740", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/20108/" ] }
165,763
I know that it's been proven that a coding standard helps enormously. However, there are many different tools and IDEs that will format to whatever standard the programmer prefers. So long as the code's neat/commented (and not a spaghetti mess), I don't see the need for a coding standard. Are there any arguments for the development of a coding standard (we don't have one, but I was looking into creating one)?
Coding standards are not just about the favored parameters for indent -- they also include naming conventions, commenting conventions, and a large number of possible recommendations for idioms, language feature use, etc. More to the point, you still need to document all this somewhere. And finally, not everyone will want to use an IDE that reformats code that way...
{ "source": [ "https://softwareengineering.stackexchange.com/questions/165763", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/54164/" ] }
165,899
When teaching recently about the Big vs. Little Endian battle, a student asked whether it had been settled, and I realized I didn't know. Looking at the Wikipedia article , it seems that the most popular current OS/architecture pairs use Little Endian but that Internet Protocol specifies Big Endian for transferring numeric values in packet headers. Would that be a good summary of the current status? Do current network cards or CPUs provide hardware support for switching byte order?
I'd argue that it's not so much won as ceased to matter. ARM, which makes up basically all of the mobile market, is bi-endian (oh, the heresy!). In the sense that x86 basically "won" the desktop market, I suppose you could say that little endian won, but I think, given the overall code depth (shallow) and abstraction (lots) of many of today's applications, it's much less of an issue than it used to be. I don't recall endianness really coming up in my Computer Architecture class. I suspect that many developers aren't even aware of endianness or why it's important, because for the vast (and I mean vast) majority it's utterly irrelevant to their daily working environment. This was different 30 years ago when everyone was coding much closer to the metal as opposed to manipulating text files on a screen in fancy and dramatic ways. My general suspicion is that Object Oriented Programming was the beginning of the end of caring about endianness, since the layers of access and abstraction in a good OO system hide implementation details from the user. Since implementation includes endianness, people got used to it not being an explicit factor. Addendum: zxcdw mentioned portability being a concern. However, what has arisen with a vengeance in the last 20 years? Programming languages built on virtual machines. Sure, the virtual machine's endianness might matter, but it can be made very consistent for that one language to the point where it's basically a non-issue. Only the VM implementors would even have to worry about endianness from a portability standpoint.
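When you do have to care (binary file formats, network protocols), most languages let you state the byte order explicitly instead of relying on the host CPU. A small Python illustration of mine, not part of the answer above:

```python
import struct
import sys
import socket

value = 0x0A0B0C0D

print(sys.byteorder)                       # 'little' on most desktop/laptop CPUs

# struct lets you pick the byte order per field:
print(struct.pack("<I", value).hex())      # little-endian: 0d0c0b0a
print(struct.pack(">I", value).hex())      # big-endian ("network order"): 0a0b0c0d

# Network code conventionally converts host order to big-endian explicitly:
print(hex(socket.htonl(value)))            # on a little-endian host: 0xd0c0b0a
```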
{ "source": [ "https://softwareengineering.stackexchange.com/questions/165899", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/55819/" ] }
165,971
I have read Principles for the Agile Architect , where they defined next principles : Principle #1 The teams that code the system design the system. Principle #2 Build the simplest architecture that can possibly work. Principle #3 When in doubt, code it out. Principle #4 They build it, they test it. Principle #5 The bigger the system, the longer the runway. Principle #6 System architecture is a role collaboration. Principle #7 There is no monopoly on innovation. The paper says that most of the architecture design is done during the coding phase, and only system design before that. That is fine. So, how is the system design done? Using UML? Or a document that defines interfaces and major blocks? Maybe something else?
Disclaimer: I am an architect in an agile environment but, as Helmuth von Moltke the Elder says, "No battle plan survives contact with the enemy". In other words, practicalities mean that the exact letter of the guidelines cannot always be followed. Most of the points raised above are followed as best the team can. However, principle 1 (The teams that code the system design the system) is really hard to follow when the team consists of tens (or hundreds) of developers split across different continents and time zones . This is nothing to do with the developers' skills or attitudes, more the logistical problem of them all being present to gather requirements from customers and understand existing complex systems. So, how is the system design done? Using UML? Or a document that defines interfaces and major blocks? Maybe something else? Often the architect identifies the major components then defines the interfaces between them (including nonfunctional requirements like security, speed and reliability) and delegates the internal design of the components to individual teams . This is a good compromise between letting the teams design their own components without requiring everyone to know everything about the system. Every organization has its own set of standards for architectural designs and this sometimes varies from project to project within the organization. This design done before the team starts coding or as early as possible and usually contains (and is not a complete list): Expanded requirements and scope definition. These include use cases or user stories that flesh out the higher level business requirements. I personally like to use RFC 2119 for non-functional requirements. Design is based on and traced back to these. Although it may not fit the common definition of design, these are often just as important. An overview consisting of a high level network or component diagram and a page of text. This is for a very wide audience, from upper management down to dev and QA. This rarely uses UML or a defined notation due to the wide audience. Details for individual components, often focusing on the interfaces or APIs between them as mentioned above. Interfaces may be specified as method signatures in the target language with precondition and postcondition details. Components may have network diagrams, such as showing the layout of VMs in a cloud or data center and their networking arrangements. Relational databases will usually have Entity-Relationship diagrams. A list of architectural risks and their mitigations, if known. Like requirements, these demonstrate design decisions and trade-offs. In short, the design of a system in an agile process is exactly the same as one in a traditional waterfall process. However, in agile environments, less of the design is done upfront and more of it is delegated to component teams . The key is determining how deep to go initially, which decisions to defer and identifying when decisions need to be made. Decisions that impact multiple development teams should be made earlier, especially scalability and security. Decisions like adding additional languages to an already internationalized product can be deferred until very late. After the initial design is created, the architect works with each of the teams and reviews their designs. If additional design or design changes are required for a unit of work (such as a scrum sprint), the architect aims to have it available by the time that unit of work starts. 
The architect is also responsible for communicating any changes to affected teams or stakeholders.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/165971", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/20065/" ] }
166,037
As I've become a better developer, I find that much of my design skill comes more from intuition than mechanical analysis. This is great. It lets me read code and get a feel for it quicker. It lets me translate designs between languages and abstractions much easier. And it lets me get stuff done faster. The downside is that I find it harder to explain to teammates (and worse, management) why a particular design is advantageous; especially teammates that are behind the times on best practices. "This design is more testable!" or "You should favor composition over inheritance." go right over their heads, and lead into the rabbit hole of me trying to clue everyone in to the last decade of software engineering advances. I'll get better at it with practice of course, but in the mean time it involves a lot of wasted time and/or bad design (that will lead to wasted time fixing it later). How can I better explain why a certain design is superior, when the benefits aren't completely obvious to the audience?
This may not directly answer your question, but it might lead you in an interesting direction. I think what you need to do is more related to selling them on the idea than explaining it to them. Sales is all about understanding what the customer's problem is and then showing them how your product (or development method, whatever) will benefit them. Each person has different needs, so those things that benefit one person and gets them excited may very well leave another person cold. For your CEO time-to-market may be the key, for your manager it might be more predictable schedules, for your programming colleagues it might be faster coding (or easier testing / documentation / debugging / whatever), and for your company's customers it might be ... ? Sales does not happen automagically -- you have to make a concerted effort to understand the other person's point of view, and then figure out how to map your ideas into their Happy Place™. Once they know how they will personally benefit from this new thing, they often ask, "How much will it cost and how soon can we do it?" Once you hear those magic words you know you have done your sales job well.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/166037", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/51654/" ] }
166,039
Possible Duplicate: Defensive Programming vs Exception Handling? if/else statements or exceptions I often come across heated blog posts where the author uses the argument: "exceptions vs explicit error checking" to advocate his/her preferred language over some other language. The general consensus seems to be that languages that make use of exceptions are inherently better / cleaner than languages which rely heavily on error checking through explicit function calls. Is the use of exceptions considered better programming practice than explicit error checking, and if so, why?
In my mind, the biggest argument is the difference in what happens when the programmer makes an error. Forgetting to handle an error is a very common and easy mistake to make. If you return error codes, it is possible to silently ignore an error. For example, if malloc fails, it returns NULL and sets the global errno . So correct code should do void* myptr = malloc(1024); if (myptr == NULL) { perror("malloc"); exit(1); } doSomethingWith(myptr); But it is very easy and expedient to instead only write: void* myptr = malloc(1024); doSomethingWith(myptr); which will unexpectedly pass NULL into your other procedure and likely discard the errno which was carefully set. There is nothing visibly wrong with the code to indicate that this is possible. In a language that uses exceptions, instead you would write MyCoolObject obj = new MyCoolObject(); doSomethingWith(obj); In this (Java) example, the new operator either returns a valid initialized object or throws OutOfMemoryError . If a programmer must handle this, they can catch it. In the usual (and conveniently, also the lazy) case where it is a fatal error, the exception propagation terminates the program in a relatively clean and explicit manner. That is one reason why exceptions, when properly used, can make writing clear and safe code much easier. This pattern applies to many, many things which can go wrong, not just allocating memory.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/166039", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/33675/" ] }
166,215
Possible Duplicate: What “version naming convention” do you use? Do you change your major/minor/patch version numbers right before you release or right after? Example: You just released 1.0.0 to the world (huzzah!). But wait, don't celebrate too much. 1.1.0 is coming out in six weeks! So you fix a bug and do a new build. What's that build called? 1.1.0.0 or 1.0.0.xxxy (where xxxy is the build number of 1.0.0 incremented)? Keep in mind you may have 100 features and bugs to go into 1.1.0. So it might be good to call it 1.0.0.xxxy, because you're nowhere close to 1.1.0. But on the other hand, another dev may be working on 2.0.0, in which case your build might be better named 1.1.0.0 and his 2.0.0.0 instead of 1.0.0.xxxy and 1.0.0.xxxz, respectively.
After you release your software, the version number should be incremented immediately. Why? Let's assume you're following a scheme like Semantic Versioning , and you have a build number in the version. So you might have [Major].[Minor].[Patch].[Build]. I am going to call the [Major].[Minor].[Patch] part the version. You will be creating multiple builds during development. Each build is a development snapshot of your next release. It makes sense to use the same version for your development and release builds. The version indicates what release you're working toward . If you're preparing for release and the software passes all of its tests, you won't want to rebuild and retest the software just because you had to update the version. When you eventually make a release, you are stating that "build 1.1.0.23" shall henceforth be referred to as "version 1.1.0". The increment-after-release model makes sense for branching too. Suppose you have a mainline development branch, and you create maintenance branches for releases. The moment you create your release branch, your development branch is no longer linked to that release's version number. The development branch contains code that is part of the next release, so the version should reflect that.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/166215", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2322/" ] }
166,273
What are the advantages of using Javascript-only versus using JQuery-only? I have limited experience with JavaScript and JQuery coding. I've added bits and snippets of each to HTML pages but I've mostly coded server-side stuff in other languages. I've noticed that while you can theoretically do the same things using either of the two approaches (and of course you can even mix 'em up in the same project) there seems to be a tendency to always start using JQuery from the very beginning, no matter what the project demands are. So I'm simply wondering, are there any concrete benefits to not using JQuery-only but instead just using plain old JavaScript? I know this looks like a non-question because it can be said about it that "there's no definite answer" or "it can be debated forever", but I'm actually hoping for concrete answers such as "You can do this in one approach and you cannot do it with the other". As per scrwtp's comment, I'm not referring just to the DOM Handling part. My question is rather: JQuery is a library. For Javascript. What I find strange about this library as opposed to other libraries for other languages is that in jQuery's case it seems to be designed to be able to use it exclusively and not need to touch Javascript directly. This is as opposed to, let's say, Hibernate and SQL, where even though the library (or rather framework in this case, but I think the analogy still applies) handles A LOT of aspects, you still get to use SQL when using it, at least for some fringe cases. However, in the JQuery & Javascript case, you could do anything you do with Javascript using only JQuery (or at least that's how it seems to me). As per Stargazer712's comment: yes, I agree with you, the question here is, as you put it, "just a matter of how you will be using JavaScript". That's what I was actually striving to ask, but I've made some bad formulations. Here's another analogy: Spring Expression Language. It's a Java library. You can't use it without Java, it's based on Java, and down through it all you still get to use Java. But in practice what you can do is add this library to a Java project, and then write all your code using Spring EL's expression language, which effectively makes your code not resemble Java at all, and it's even paradigm shifting (for example you no longer have strong type enforcement when using this). While I do understand that JQuery is just a JS library, to me it seems that in practice it has the same effect as Spring EL has with Java, i.e. you can use only its APIs throughout a project and avoid JavaScript's APIs. And I was wondering if that's a good thing to do, what might be the pitfalls etc. (and yes, after reading everyone's answers I do understand that: a. my question is somewhat non-sensical up to a point, b. even if the question were completely accurate, the answer to it would be pretty much "no, you can't just use JQuery-only all the time")
First off - it's impossible to use jQuery only, all jQuery does is add a $ object to your global scope, with a bunch of methods in it. Even more manipulative libraries like prototype aren't an alternative to javascript, they're a toolbelt to solve common problems. The main advantages to adding jQuery to your toolbelt would be: browser compatibility - doing something like .attr() is much easier than the native alternatives, and won't break across browsers. simplification of usually complicated operations - if you'd like to see a well written cross browser compatible version of an XHR method, take a look at the source for $.ajax - for this method alone it's almost worth the overhead of jQ. DOM selection - simple things like binding events & selecting DOM elements can be complicated and differ per-browser. Without a lot of knowledge, they can also be easily written poorly and slow down your page. Access to future features - things like .indexOf and .bind are native javascript, but not yet supported by many browsers. However, using the jQuery versions of these methods will allow you to support them cross browser. Javascript is no longer just a client side language, and because jQuery is so DOM dependent, it is a terrible candidate to move to the server. I highly recommend putting some time into understanding why you are using jQuery (asking this question is a great first step!), and evaluating when it is necessary. jQuery can be dangerous, a few of the main dangers are: code quality - jQuery has a huge community and a low learning curve. This is a perfect storm for lots of poorly written open source plugins. inefficiency - jQuery is easy to write inefficiently. For instance, using jQuery's each instead of for loops is unnecessary and could have a performance impact in some cases. Lots of good info about this stuff at JSPerf bloat - jQuery is a huge library. Much of the time, you'll use a small subset of it's features, and grab the whole library. There are some great alternatives that will give you subsets of the features, like zepto.js, and underscore.js - depending on your situation, you can save some bytes by choosing the right library for your needs. Ultimately, jQuery is an incredibly useful and helpful library, when used properly. However, it is not an alternative to javascript. It is a library, just like zepto.js , YUI , Dojo , MooTools , and Prototype - one of which may be a much better choice for your current project. Javascript is a misunderstood language, and is only recently being regarded as something more than a scripting language by most people. I really recommend reading up on it more, here's a few good places to start: Edit 07/2014 - I noticed this post is still getting attention, so I added a bunch of links. These are in no particular order, but should be helpful. Ben Alman's blog - lots of good best practices here. I don't agree with all of them, but I learn new things from his blog all the time. Code Academy - basic javascript and jQuery training. Sometimes going back to basics helps. Javascript Garden - a post regarding the more tricky or misunderstood features of javascript. Please read this from time to time, until everything makes sense. Bocoup - these are training classes. If you're near one, go to it. Many of the best JS speakers and teachers teach these. Paul Irish's blog - not strictly JS, but plenty of best practices are written about here. Him and Ben's twitter feeds are both great to follow. 
Javascript: The Good Parts - often referred to as 'The Javascript Bible', this book by Douglas Crockford is an amazing place to start understanding javascript. Isaac Schlueter's Blog - Isaac is the creator of npm, and works on the node core. He writes a lot about the javascript community rather than about code conventions, but if you're really getting in to js it's a great read. Douglas Crockford's Javascript - If Brendan Eich is the father of javascript, Douglas is javascript's outspoken uncle. He is the author of the JSON spec, the javascript bible, and lots of amazing posts on javascript's quirks and meteoric rise. Brendan Eich's Blog - Brendan is the creator of javascript - he writes about all sorts of silly stuff on his blog, and while he has his faults as a person, his javascript posts are valuable. James Halliday's (@substack) Blog - Substack is arguably the most important node.js developer in the community - with around 400 (and growing every day) npm modules and a guiding philosophy of tiny, unix-like modules, everything he writes is worth reading. Max Ogden's Blog Max Ogden is another prolific node.js author, and is excellent at writing blog posts that teach you something. He's also the author (with others I believe) of javascript for cats. Javascript for Cats - This is a short tutorial that takes you through the basics of javascript from the perspective of a cat. If you're a beginner, read through this. It's fun, and teaches in an hour what many books take weeks to communicate. Nicholas Zakas' Blog Nicholas is the author of a few fantastic javascript books: Object Oriented Programming in Javascript , Maintainable Javascript , Professional Javascript for Web Developers , and High Performance Javascript . He focuses mainly on the client, but has a ton of best practices and performance tips. Guillermo Rauch's Blog - Guillermo is another prolific node.js dev, mostly famous for Socket.io and Mongoose. His blog (and his new book, Smashing Node.js are both great resources. I'm sure there's lots more great resources I'm not thinking of or don't know about, other answerers should feel free to add to that list.
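Coming back to the library-versus-language comparison itself, here is a small side-by-side sketch of mine (not part of the resource list above), using current browser APIs; historically jQuery's main value was papering over browser differences that these "vanilla" calls now handle natively.

```javascript
// Select elements and bind a click handler -- plain DOM APIs (modern browsers):
document.querySelectorAll('.item').forEach(function (el) {
  el.addEventListener('click', function () {
    el.classList.toggle('selected');
  });
});

// jQuery equivalent:
$('.item').on('click', function () {
  $(this).toggleClass('selected');
});

// Fetch JSON from the server -- plain DOM APIs:
fetch('/api/items')
  .then(function (res) { return res.json(); })
  .then(function (items) { console.log(items); });

// jQuery equivalent:
$.getJSON('/api/items', function (items) {
  console.log(items);
});
```

Either way it is still JavaScript underneath; jQuery only changes how much of the platform API you touch directly.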
{ "source": [ "https://softwareengineering.stackexchange.com/questions/166273", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/38218/" ] }
166,409
What is the difference between building an application Outside In vs building it Inside Out using TDD? These are the books I read about TDD and unit testing: Test Driven Development: By Example Test-Driven Development: A Practical Guide: A Practical Guide Real-World Solutions for Developing High-Quality PHP Frameworks and Applications Test-Driven Development in Microsoft .NET xUnit Test Patterns: Refactoring Test Code The Art of Unit Testing: With Examples in .Net Growing Object-Oriented Software, Guided by Tests --->This one was really hard to understand since JAVA isn't my primary language :) Almost all of them explained TDD basics and unit testing in general, but with little mention of the different ways the application can be constructed. Another thing I noticed is that most of these books (if not all) ignore the design phase when writing the application. They focus more on writing the test cases quickly and letting the design emerge by itself. However, I came across a paragraph in xUnit Test Patterns that discussed the ways people approach TDD. There are 2 schools out there Outside In vs Inside Out . Sadly the book doesn't elaborate more on this point. I wish to know what is the main difference between these 2 cases. When should I use each one of them? To a TDD beginner which one is easier to grasp? What is the drawbacks of each method? Is there any materials out there that discuss this topic specifically?
Inside-Out and Outside-In are fairly rare terms, more often I have heard/read about Classic school and London school . Inside-Out (Classic school, bottom-up ): you begin at component/class level (inside) and add tests to requirements. As the code evolves (due to refactorings), new collaborators, interactions and other components appear. TDD guides the design completely. Outside-In (London school, top-down or "mockist TDD" as Martin Fowler would call it): you know about the interactions and collaborators upfront (especially those at top levels) and start there (top level), mocking necessary dependencies. With every finished component, you move to the previously mocked collaborators and start with TDD again there, creating actual implementations (which, even though used, were not needed before thanks to abstractions ). Note that outside-in approach goes well with YAGNI principle. Neither of the approaches is the one and only ; they both have their place depending on what you do. In large, enterprise solutions, where parts of the design come from architects (or exists upfront) one might start with "London style" approach. On the other hand, when you face a situation where you're not certain how your code should look (or how it should fit within other parts of your system), it might be easier to start with some low-end component and let it evolve as more tests, refactorings and requirements are introduced. Whichever you use, more often than not it is situational. For further reading, there's Google group post with rather interesting discussion on how this distinction (might have) originated and why London might not be the most appropriate name.
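A tiny Python sketch of the outside-in ("London") style, using a test double for a collaborator that does not exist yet; the class and method names are invented purely for illustration.

```python
import unittest
from unittest import mock

# Production code under test: the outer component. Its collaborator
# (a payment gateway) is only an interface/expectation at this point.
class CheckoutService:
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, order):
        if self.gateway.charge(order.total):
            return "confirmed"
        return "declined"

class CheckoutServiceTest(unittest.TestCase):
    def test_confirms_order_when_charge_succeeds(self):
        gateway = mock.Mock()                 # stand-in for the not-yet-written gateway
        gateway.charge.return_value = True

        service = CheckoutService(gateway)
        result = service.place_order(mock.Mock(total=100))

        self.assertEqual(result, "confirmed")
        gateway.charge.assert_called_once_with(100)   # verify the interaction

if __name__ == "__main__":
    unittest.main()
```

In the classic (inside-out) style you would instead start by test-driving a concrete gateway component and let the higher-level design emerge as the pieces are composed.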
{ "source": [ "https://softwareengineering.stackexchange.com/questions/166409", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/50440/" ] }
166,419
In my program design, I often come to the point where I have to pass object instances through several classes. For example, if I have a controller that loads an audio file, and then passes it to a player, and the player passes it to the playerRunnable, which passes it again somewhere else etc. It looks kind of bad, but I don't know how to avoid it. Or is it OK to do this? EDIT: Maybe the player example is not the best because I could load the file later, but in other cases that does not work.
As others have mentioned, this isn't necessarily a bad practice, but you should pay attention that you're not breaking the layers' separation of concerns and passing layer-specific instances between layers. For instance: Database objects should never be passed up to higher layers. I've seen programs using .NET's DataAdapter class, a DB-access class, and passing it up to the UI layer, rather than using the DataAdapter in the DAL, creating a DTO or dataset, and passing that up. DB access is the domain of the DAL. UI objects should, of course, be limited to the UI layer. Again, I've seen this violated, both with ListBoxes populated with user data passed up to the BL layer, instead of an array/DTO of its content, and (a particular favorite of mine), a DAL class that retrieved hierarchical data from the DB, and rather than returning a hierarchical data structure, it just created and populated a TreeView object, and passed it back to the UI to be added dynamically to a form. However, if the instances you're passing are the DTOs or entities themselves, it's probably ok.
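A minimal sketch of the idea (in Python rather than .NET, with invented names, and simplified to the point of omitting error handling): the data-access layer maps its own objects into a plain DTO before anything crosses the layer boundary.

```python
import sqlite3
from dataclasses import dataclass

# Plain DTO: safe to pass between layers, carries no DB or UI machinery.
@dataclass
class CustomerDto:
    customer_id: int
    name: str

class CustomerRepository:
    """Data-access layer: DB objects (connections, cursors) never leave this class."""
    def __init__(self, connection: sqlite3.Connection):
        self._conn = connection

    def find_all(self) -> list[CustomerDto]:
        rows = self._conn.execute("SELECT id, name FROM customers")
        return [CustomerDto(customer_id=r[0], name=r[1]) for r in rows]

# The business and UI layers receive only DTOs; how (or whether) a ListBox or
# TreeView is populated from them is entirely the UI layer's concern.
```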
{ "source": [ "https://softwareengineering.stackexchange.com/questions/166419", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/63220/" ] }
166,454
A colleague and I have recently argued over whether a pure regex is capable of fully encapsulating the csv format, such that it is capable of parsing all files with any given escape char, quote char, and separator char. The regex need not be capable of changing these chars after creation, but it must not fail on any other edge case. I have argued that this is impossible for just a tokenizer. The only regex that might be able to do this is a very complex PCRE style that moves beyond just tokenizing. I am looking for something along the lines of: ... the csv format is a context free grammar and as such, it is impossible to parse with regex alone ... Or am I wrong? Is it possible to parse csv with just a POSIX regex? For example, if both the escape char and the quote char are ", then these two lines are valid csv:
"""this is a test.""",""
"and he said,""What will be, will be."", to which I replied, ""Surely not!""","moving on to the next field here..."
Nice in theory, terrible in practice
By CSV I'm going to assume you mean the convention as described in RFC 4180 . While matching basic CSV data is trivial:
"data", "more data"
Note: BTW, it's a lot more efficient to use a .split('\n').split('"') function for very simple and well-structured data like this. Regular Expressions work as an NDFSM (Non-Deterministic Finite State Machine) that wastes a lot of time backtracking once you start adding edge cases like escape chars. For example here's the most comprehensive regular expression matching string I've found:
re_valid = r"""
# Validate a CSV string having single, double or un-quoted values.
^                                   # Anchor to start of string.
\s*                                 # Allow whitespace before value.
(?:                                 # Group for value alternatives.
  '[^'\\]*(?:\\[\S\s][^'\\]*)*'     # Either Single quoted string,
| "[^"\\]*(?:\\[\S\s][^"\\]*)*"     # or Double quoted string,
| [^,'"\s\\]*(?:\s+[^,'"\s\\]+)*    # or Non-comma, non-quote stuff.
)                                   # End group of value alternatives.
\s*                                 # Allow whitespace after value.
(?:                                 # Zero or more additional values
  ,                                 # Values separated by a comma.
  \s*                               # Allow whitespace before value.
  (?:                               # Group for value alternatives.
    '[^'\\]*(?:\\[\S\s][^'\\]*)*'   # Either Single quoted string,
  | "[^"\\]*(?:\\[\S\s][^"\\]*)*"   # or Double quoted string,
  | [^,'"\s\\]*(?:\s+[^,'"\s\\]+)*  # or Non-comma, non-quote stuff.
  )                                 # End group of value alternatives.
  \s*                               # Allow whitespace after value.
)*                                  # Zero or more additional values
$                                   # Anchor to end of string.
"""
It reasonably handles single and double quoted values, but not newlines in values, escaped quotes, etc. Source: Stack Overflow - How can I parse a string with JavaScript
It becomes a nightmare once the common edge-cases are introduced like...
"such as ""escaped""","data"
"values that contain \n newline chars",""
"escaped, commas, like",",these"
"un-delimited data like", this
"","empty values"
"empty trailing values",   // <- this is completely valid
                           // <- trailing newline, may or may not be included
The newline-as-value edge case alone is enough to break 99.9999% of the RegEx based parsers found in the wild. The only 'reasonable' alternative is to use RegEx matching for basic control/non-control character (i.e. terminal vs non-terminal) tokenization paired with a state machine used for higher level analysis. Source: Experience otherwise known as extensive pain and suffering. I am the author of jquery-CSV , the only javascript based, fully RFC-compliant, CSV parser in the world. I have spent months tackling this problem, speaking with many intelligent people, and trying a ton of different implementations including 3 full rewrites of the core parser engine. tl;dr - Moral of the story, PCRE alone sucks for parsing anything but the most simple and strict regular (i.e. Type-III) grammars. That said, it's useful for tokenizing terminal and non-terminal strings.
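For contrast, a dedicated parser deals with exactly these edge cases in a couple of lines. A small Python illustration of mine (not from the answer author):

```python
import csv
import io

# Embedded quotes, commas and even newlines inside quoted fields -- the cases
# that defeat most regex-based "parsers" -- are handled by a real CSV parser.
data = '"such as ""escaped""",data\r\n"a value with a\nnewline in it","more"\r\n'

for row in csv.reader(io.StringIO(data)):
    print(row)
# ['such as "escaped"', 'data']
# ['a value with a\nnewline in it', 'more']
```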
{ "source": [ "https://softwareengineering.stackexchange.com/questions/166454", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/27114/" ] }
166,461
I notice that someone in my organization programs comparisons like: if (100 == myVariable) rather than: if (myVariable == 100) He claims the former is quicker in languages like C++. I can't find any evidence. We program in C#. Is this true for any programming language?
No, it is not "quicker": compilers will translate both expressions into the same code. Some time ago the first pattern has been suggested to people coming to C from other languages where comparing objects required a single = . The idea was to protect them from making this mistake: if (myVariable = 100) This is legal, but it assigns 100 to myVariable instead of comparing myVariable to 100 . If you make it a habit to put 100 ahead of myVariable , the compiler will trigger an error, because if (100 = myVariable) is illegal. Modern compilers issue warnings when they see an assignment in place of an equality check == . You can silence the warning in cases when you do want to use an assignment inside an if by adding a second set of parentheses around your assignment expression. Moreover, the construct is not useful in C# at all, because if (myVariable = 100) is not legal.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/166461", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/49882/" ] }
166,539
I may not be able to give the right title to the question. But here it is: we are developing a financial portal for wealth management. We are expecting over 10000 clients to use the application. The portal calculates various performance analytics based on the technical analysis of the stock market. We developed a lot of the functionality through stored procedures, user-defined functions, triggers, etc. in the database. We thought we could gain a huge performance boost by doing stuff directly in the database rather than through C# code. And we actually did get a huge performance boost. When I tried to brag about the achievement to our CTO, he questioned my decision of having functionality implemented in the database rather than in code. According to him such applications suffer from scalability problems. In his words "These days things are kept in memory/cache. Clustered data is hard to manage over time. Facebook, Google have nothing in database. It is the era of thin servers and thick clients. DB is used only to store plain data and functionality should be completely decoupled from the database." Can you guys please give me some suggestions as to whether what he says is right? How should we go about architecting such an application?
In short, I would agree with your CTO. You've probably gained some performance at the expense of scalability (if those terms are confusing, I'll clarify below). My two biggest worries would be maintainability and lack of options to scale horizontally (assuming you are going to need that).
Proximity to data: Let's take a step back. There are some good reasons for pushing code into a DB. I would argue that the biggest one would be proximity to the data - for example, if you are expecting a calculation to return a handful of values, but these are aggregations of millions of records, sending the millions of records (on-demand) over the network to be aggregated elsewhere is hugely wasteful, and could easily kill your system. Having said this, you could achieve this proximity of data in other ways, essentially using caches or analysis DBs where some of the aggregation is done upfront.
Performance of code in the DB: Secondary performance effects, such as "caching of execution plans", are more difficult to argue. Sometimes, cached execution plans can be a very negative thing, if the wrong execution plan was cached. Depending on your RDBMS, you may get the most out of these, but you won't get much over parametrised SQL, in most cases (those plans typically get cached, too). I would also argue that most compiled or JIT'ed languages typically perform better than their SQL equivalents (such as T-SQL or PL/SQL) for basic operations and non-relational programming (string manipulation, loops, etc), so you wouldn't be losing anything there, if you used something like Java or C# to do the number crunching. Fine-grained optimisation is also pretty difficult - on the DB, you're often stuck with a generic B-tree (index) as your only data structure. To be fair, a full analysis, including things like having longer-running transactions, lock escalation, etc, could fill books.
Maintainability: SQL is a wonderful language for what it was designed to do. I'm not sure it's a great fit for application logic. Most of the tooling and practices that make our lives bearable (TDD, refactoring, etc) are difficult to apply to database programming.
Performance versus scalability: To clarify these terms, I mean this: performance is how quickly you'd expect a single request to go through your system (and back to the user), for the moment assuming low load. This will often be limited by things like the number of physical layers it goes through, how well optimised those layers are, etc. Scalability is how performance changes with an increasing number of users / load. You may have medium / low performance (say, 5 seconds+ for a request), but awesome scalability (able to support millions of users). In your case, you will probably experience good performance, but your scalability will be bounded by how big a server you can physically build. At some point, you will hit that limit, and be forced to turn to things like sharding, which may not be feasible depending on the nature of the application.
Premature Optimisation: Ultimately, I think you've made the mistake of optimising prematurely. As others have pointed out, you don't really have measurements showing how the other approaches would work. Well, we can't always build full-scale prototypes to prove or disprove a theory... But in general, I'd always be hesitant to choose an approach which trades maintainability (probably the most important quality of an application) for performance. EDIT: On a positive note, vertical scaling can stretch quite far in some cases.
As far as I know, SO ran on a single server for quite some time. I'm not sure how it matches up to your 10 000 users (I guess it would depend on the nature of what they are doing in your system), but it gives you an idea of what can be done (actually, there are far more impressive examples, this just happens to be a popular one people can easily understand). EDIT 2: To clarify and comment on a few things raised elsewhere: Re: Atomic consistency - ACID consistency may well be a requirement of the system. The above doesn't really argue against that, and you should realise that ACID consistency doesn't require you to run all your business logic inside the DB. By moving code which does not need to be there into the DB, you're constraining it to run in the physical environment of the rest of the DB - it's competing for the same hardware resources as the actual data management portion of your DB. As for scaling only the code out to other DB servers (but not the actual data) - sure, this may be possible , but what exactly are you gaining here, apart from additional licensing costs in most cases? Keep things that don't need to be on the DB, off the DB. Re: SQL / C# performance - since this seems to be a topic of interest, let's add a bit to the discussion. You can certainly run native / Java / C# code inside DBs, but as far as I know, that's not what was being discussed here - we're comparing implementing typical application code in something like T-SQL versus something like C#. There a number of problems which have been difficult to solve with relational code in the past - e.g. consider the "maximum concurrent logins" problem, where you have records indicating a login or logout, and the time, and you need to work out what the maximum number of users logged in at any one time was. The simplest possible solution is to iterate through the records and keep incrementing / decrementing a counter as you encounter logins / logouts, and keeping track of the maximum of this value. It turns out that unless your DB supports a certain sliding window aggregation (which SQL 2008 didn't, 2012 may , I don't know), the best you can do is a CURSOR (the purely relational solutions are all on different orders of complexity, and attempting to solve it using a while loop results in worse performance). In this case, yes, the C# solution is actually faster than what you can achieve in T-SQL, period. That may seem far-fetched, but this problem can easily manifest itself in financial systems, if you are working with rows representing relative changes, and need to calculate windowed aggregations on those. Stored proc invocations also tend to be more expensive - invoke a trivial SP a million times and see how that compares to calling a C# function. I hinted at a few other examples above - I haven't yet encountered anyone implement a proper hash table in T-SQL (one which actually gives some benefits), while it is pretty easy to do in C#. Again, there are things that DBs are awesome at, and things that they're not so awesome at. Just like I wouldn't want to be doing JOINs, SUMs and GROUP BYs in C#, I don't want to be writing anything particularly CPU intensive in T-SQL.
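For the "maximum concurrent logins" example above, here is a rough C# sketch of mine of the iterate-and-count approach described in the text (type and member names are invented for illustration): walk the login/logout events in time order with a running counter, which is awkward to express relationally but trivial in application code.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class ConcurrencyStats
{
    // Walk the login/logout events in time order, keeping a running counter
    // and remembering the highest value seen.
    public static int MaxConcurrentLogins(IEnumerable<(DateTime At, bool IsLogin)> events)
    {
        int current = 0, max = 0;
        foreach (var e in events.OrderBy(ev => ev.At))
        {
            current += e.IsLogin ? 1 : -1;   // +1 on login, -1 on logout
            if (current > max) max = current;
        }
        return max;
    }
}
```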
{ "source": [ "https://softwareengineering.stackexchange.com/questions/166539", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/42267/" ] }
166,558
I have an IT background and was pretty confident until an opportunity came up at work to go into programming(C#). I have never programmed before this, and the software I am programming for is a program I have never used before (a 3D modeling software). It has been 6 months since then and I feel like giving up. I didn't get much training... about 3 weeks of training spread out over the last 6 months. I think I would be good at programming but this experience is making me rethink my decision. I'm not sure if it's just me, or if this frustration is normal. How can I figure out if programming is right for me?
There's a few things to note about getting into programming. First off, you will never know everything about programming. You'll probably never even come close to knowing a fraction of everything. And if you ever get to thinking you know something, something new will come out and what you know will be obsolete. So, you need to be OK with constantly learning new things, and teaching yourself what needs to be done. If you're not OK with spending a lot of time doing a lot of learning, doing research, and figuring things out through "educated trial and error", don't get into Programming. Second, its the logic that matters, not the syntax. Just learning a language, framework, or technology does not necessarily make a good programmer. You really need to have the sort of mind that is capable of understanding the logic behind the code - how the pieces fit together, what kind of logic is getting used, and how the computer will interpret your code. It sounds like you're working with a single piece of software and language, but keep in mind there are many more languages and technologies out there. Don't judge them all by your experience with one of them. If the syntax is frustrating you, then keep in mind there are always other options. But if you're having problems grasping the logic behind the code, then perhaps programming may not be for you. And last of all, don't pick a job you hate. Sure programming can be frustrating, but it can also be very rewarding. If you can handle the times when you want to bang your head against the wall over some bit of code, or delete everything off your computer in frustration, and still enjoy the coding, you're good :)
{ "source": [ "https://softwareengineering.stackexchange.com/questions/166558", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/66414/" ] }
166,612
They both serve the same purpose: Providing a vocabulary for semantic markup. Schema is recognized and standardized… but the microformats standard is by an open community process. Schema exploits microdata in documentation, while microformats go on classes. (Of note: microdata means that an element must be of a single itemtype , while microformats allow several classes to apply to the same element. I can markup xFolk+hAtom with classes, but not with microdata.) Is this a black-and-white situation? Google says I can't use both "because it may confuse the parser". What's the consensus on these?
tl;dr, the three ways to semantically annotate content in HTML5 documents: Microdata and RDFa are syntaxes (extending HTML) for semantically marking up content, but they don't supply vocabularies. Microformats is a convention (re-using what HTML provides) for semantically marking up content, and (solely!) supplies vocabularies for that purpose. Schema.org is a collection of vocabularies (that can be used with various syntaxes, including Microdata and RDFa, but not Microformats), so this question should be: Microdata vs. Microformats? And why not invite RDFa to the party? RDFa and Microdata are not the same, but conceptually similar. Microformats, however, differs strongly from both. If your only aim is to enhance the display of search results from search engines, it doesn't matter which markup syntax you choose (as long as it is supported by the search engine). But "semantic markup", of course, allows much more: building the Semantic Web. Not without reason do Microformats relate to the term "lower-case semantic web", while RDFa relates to "upper-case Semantic Web" (Microdata is a newer syntax, but it would fit into the upper-case variant). The main difference: extensibility. RDFa and Microdata use URIs, Microformats uses specific class names (for HTML's class attribute) and link types (for HTML's rel attribute). That means: With Microformats you can only mark up certain content if the Microformats community created and accepted an appropriate "vocabulary" (i.e., a Microformat). With RDFa and Microdata you can create your own vocabulary if there doesn't already exist an appropriate one (and there are many vocabularies). Google says I can't use both "because it may confuse the parser". I wouldn't let this stop me from implementing several syntaxes. Also, Google kind of revoked this statement in a chat. Update: On Google's Structured Data documentation, it nowhere says they can't handle different syntaxes on the same document. And their Testing Tool reports no errors if several syntaxes are used. See also related questions on Stack Overflow:
microformats, rdf or microdata
The relationship between RDF, RDFa, Microformats and Microdata
RDF and microdata future
Microdata vs RDFa
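To get a feel for the syntactic difference described above, here is the same snippet of contact information marked up both ways in a simplified sketch of mine; the property names follow the schema.org vocabulary and the microformats2 h-card convention, and the person/URL values are made up.

```html
<!-- Microdata with a schema.org vocabulary: types/properties are URIs you can extend -->
<div itemscope itemtype="https://schema.org/Person">
  <span itemprop="name">Jane Doe</span>
  <a itemprop="url" href="https://example.com">example.com</a>
</div>

<!-- Microformats (h-card): fixed, community-agreed class names -->
<div class="h-card">
  <span class="p-name">Jane Doe</span>
  <a class="u-url" href="https://example.com">example.com</a>
</div>
```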
{ "source": [ "https://softwareengineering.stackexchange.com/questions/166612", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/66455/" ] }
166,635
I've worked in a few shops where management has passed the idea of pair programming either to me or another manager/developer, and I can't get behind it at all. From a developer stand-point I can't find a reason why moving to this coding style would be beneficial, nor as a manager of a small team have I seen any benefit. I understand that it helps on basic syntax errors and can be helpful if you need to hash something out, but managers that are out of the programming loop seem to keep seeing it as a way of keeping their designers from going to Facebook or Reddit than as a design tool. As someone close to the development floor that apparently can't quite understand from a book tossed my way or a wiki page on the subject... from a high level management position, what are the benefits of Pair Programming when dealing with Scrum or Agile environments?
Partly, it depends on how you are doing pair programming. In some instances, the driver of the pair is writing code, while the second member of the pair is observing and discussing the design and implementation details of the system. Another instance of pair programming involves both people writing code simultaneously - one person is writing the implemented functionality and the other is actively developing and writing test code at the unit and integration level, again discussing the design and implementation details of the system. Regardless of the type of pair programming, it effectively serves as a continuous code review . You have two people's eyes on the code, watching for errors before they escape into a later system/acceptance testing environment or the field. You also have two people who understand a particular part of the system very well, to serve as a redundancy to minimize your bus factor . Both catching defects early and spreading system knowledge around the team reduces the cost of building a system. The spreading of knowledge isn't just limited to the technical knowledge of the team, either. Depending on who the pair is, it can allow for information to flow between a more senior member of the company to a new member about other things that transcend the project - coding style, company culture, expectations, and so on. It can also allow someone who is more familiar with a technology or tool to share their knowledge in that technology or tool in a real-world applied setting. As you mentioned, it also does help keep developers focused and in flow . In addition to flow, many individuals are less likely to interrupt multiple people working on something than a single individual working on something. If you walk by someone's desk and they are working alone, but you need to talk to them, you might knock and talk to them. This is less likely if you see two or more people collaboratively working or having a discussion - you won't interrupt them. Interruptions cost time, and spending more time means higher costs. It's in the best interests of the business to maximize productivity of the employees. However, there are some challenges that must be overcome to make pair programming viable. Consider things like personality clashes or choosing the pairs to properly distribute the knowledge. There's also consideration of exactly when to rotate pairs. Pair programming done haphazardly probably won't be effective as one that's planned out. Depending on the makeup of your team, it might not be effective to pair people at all.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/166635", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/20349/" ] }
166,699
Suppose I have a file foo.py containing a class Foo : class Foo(object): def __init__(self, data): ... Now I want to add a function that creates a Foo object in a certain way from raw source data. Should I put it as a static method in Foo or as another separate function? class Foo(object): def __init__(self, data): ... # option 1: @staticmethod def fromSourceData(sourceData): return Foo(processData(sourceData)) # option 2: def makeFoo(sourceData): return Foo(processData(sourceData)) I don't know whether it's more important to be convenient for users: foo1 = foo.makeFoo(sourceData) or whether it's more important to maintain clear coupling between the method and the class: foo1 = foo.Foo.fromSourceData(sourceData)
The choice should be between a factory function and a third option instead, the class method: class Foo(object): def __init__(self, data): ... @classmethod def fromSourceData(klass, sourceData): return klass(processData(sourceData)) Classmethod factories have the added advantage that they are inheritable. You can now create a subclass of Foo, and the inherited factory method would then produce instances of that subclass. With a choice between a function and a class method, I'd choose the class method. It documents quite clearly what kind of object the factory is going to produce, without having to make this explicit in the function name. This becomes much clearer when you not only have multiple factory methods, but also multiple classes. Compare the following two alternatives: foo.Foo.fromFrobnar(...) foo.Foo.fromSpamNEggs(...) foo.Foo.someComputedVersion() foo.Bar.fromFrobnar(...) foo.Bar.someComputedVersion() vs. foo.createFooFromFrobnar() foo.createFooFromSpamNEggs() foo.createSomeComputedVersionFoo() foo.createBarFromFrobnar() foo.createSomeComputedVersionBar() Even better, your end-user could import just the Foo class and use it for the various ways you can create it, as you have all the factory methods in one place: from foo import Foo Foo() Foo.fromFrobnar(...) Foo.fromSpamNEggs(...) Foo.someComputedVersion() The stdlib datetime module uses class method factories extensively, and its API is much the clearer for it.
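A minimal sketch of the inheritance point made above; the Bar subclass and the body of processData are invented for illustration, and cls is used in place of klass:

    # Sketch only: processData and Bar are illustrative stand-ins.
    def processData(sourceData):
        return sourceData.upper()          # pretend this is real parsing

    class Foo(object):
        def __init__(self, data):
            self.data = data

        @classmethod
        def fromSourceData(cls, sourceData):
            # cls is whatever class the method was called on,
            # so subclasses get a working factory for free.
            return cls(processData(sourceData))

    class Bar(Foo):
        pass

    print(type(Foo.fromSourceData("raw")))   # <class '__main__.Foo'>
    print(type(Bar.fromSourceData("raw")))   # <class '__main__.Bar'>

A plain module-level factory function would have to be rewritten (or parametrized) to get the same behaviour for subclasses.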
{ "source": [ "https://softwareengineering.stackexchange.com/questions/166699", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/20766/" ] }
166,716
I'm having trouble finding a feature comparison between Visual Studio 2012 Express Edition and the professional edition. I'm using the trial Professional version at the moment, but it'll run out soon, so I need to make a decision whether to purchase the full version. Obviously, I can just try both initially and see if the Express edition is suitable, but the problem is that there are that many features in Visual Studio, there might be a really useful feature that was missing in the standard edition that I didn't even know existed! Or I didn't spot it was missing until later down the line. I could really do with a feature comparison list like the one for all non-Express editions here . It's a shame that page doesn't include the Express edition.
The biggest difference is that Express editions do not support plugins (no ReSharper, no add-ons). Additionally, the non-Express versions are all combined, meaning you don't have to switch back and forth to get features from individual Express versions if you have a project that crosses web, desktop, etc. UPDATE 8/6/2015 - If you're looking for a free edition of Visual Studio today, you will most likely be using the Visual Studio Community Edition, which is very different from the Express editions (better). Community Edition is essentially Professional Edition, but free for individuals, and DOES support plugins!
{ "source": [ "https://softwareengineering.stackexchange.com/questions/166716", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/600/" ] }
166,812
What benefits are gained from using the command line for software development over using an alternative GUI? Is the command line faster for certain tasks? Are certain tools only available via the command line?
Advantages of the command line over GUI applications include: If you are writing such tools rather than using them, the command line is easy to develop for. Accepting arguments on the command line is trivial and outputing to a text stream is similarly easy. Command line applications can easily be used in batch files or scripts, which is great for automated testing or builds. Most build tools, like make, ant and msbuild provide good support for calling command line tools. The input, output and error streams can easily be redirected, allowing information to be sent or received from files or other applications. This can mean test data can be easily supplied or output captured. There is a simple but standard error return mechanism (return code and the error stream). Detecting an error in a GUI may require UI automation. Use over remote shell or similar connections makes it much easier to perform tasks remotely, such as when a developer is connecting from home or on the road. Remote desktop connections are more prevalent these days but require more bandwidth and a lower latency connection. There are defined standards for help (such as the unix man command or passing -? as an argument). If someone does not know a feature, they can easily see what the tool provides. Similarly, there are defined wildcard syntax standards for specifying multiple files, drawing on likely existing developer knowledge. However, command line applications have these disadvantages compared to GUI applications: Command line applications are harder to learn to use, particularly for those not used to them. This is usually not a problem for software development because developers are often used to command line tools but could be problematic for less technical people involved in development such as junior QA, localization and technical writers. Command line applications require a keyboard and so are impractical for some devices like phones or tablets.
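To make the "easy to develop for", redirection, and return-code points concrete, here is a hypothetical little filter in Python; the tool name and behaviour are invented for the example:

    #!/usr/bin/env python3
    # Reads lines from stdin, writes the ones containing a keyword to stdout,
    # and uses the exit code to report whether anything matched.
    import sys

    def main(argv):
        if len(argv) != 2:
            print("usage: grep_lite KEYWORD", file=sys.stderr)
            return 2                      # conventional 'usage error' code
        keyword = argv[1]
        matched = False
        for line in sys.stdin:            # works the same with pipes and redirected files
            if keyword in line:
                sys.stdout.write(line)
                matched = True
        return 0 if matched else 1        # scripts and build tools can branch on this

    if __name__ == "__main__":
        sys.exit(main(sys.argv))

A build script could then run something like some_tool | python grep_lite.py ERROR and test the exit code, which is exactly the batch-friendliness described above.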
{ "source": [ "https://softwareengineering.stackexchange.com/questions/166812", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/59934/" ] }
166,884
I have a large method which does 3 tasks, each of which can be extracted into a separate function. If I make an additional function for each of those tasks, will it make my code better or worse, and why? Obviously, it'll mean fewer lines of code in the main function, but there'll be additional function declarations, so my class will have additional methods, which I believe isn't good, because it'll make the class more complex. Should I do that before I write all the code, or should I leave it until everything is done and then extract functions?
This is a book I often link to, but here I go again: Robert C. Martin's Clean Code , chapter 3, "Functions". Obviously, it'll make less lines of code in the main function, but there'll be additional function declarations, so my class will have additional methods, which I believe isn't good, because it'll make the class more complex. Do you prefer reading a function with +150 lines, or a function calling 3 +50 line functions? I think I prefer the second option. Yes , it will make your code better in the sense that it will be more "readable". Make functions that perform one and only one thing, they will be easier to maintain and to produce a test case for. Also, a very important thing I learned with the aforementioned book: choose good and precise names for your functions. The more important the function is, the most precise the name should be. Don't worry about the length of the name, if it has to be called FunctionThatDoesThisOneParticularThingOnly , then name it that way. Before performing your refactor, write one or more test cases. Make sure they work. Once you're done with your refactoring, you will be able to launch these test cases to ensure the new code works properly. You can write additional "smaller" tests to ensure your new functions perform well separably. Finally, and this is not contrary to what I've just written, ask yourself if you really need to do this refactoring, check out the answers to " When to refactor ?" (also, search SO questions on "refactoring", there are more and answers are interesting to read) Should I do that before I write all the code or should I leave it until everything is done and then extract functions? If the code is already there and works and you are short on time for the next release, don't touch it. Otherwise, I think one should make small functions whenever possible and as such, refactor whenever some time is available while ensuring that everything works as before (test cases).
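A small sketch of the shape being recommended, written here in Python with invented function names; the point is only the structure (one short coordinating function calling three well-named helpers, guarded by a test), not the details:

    # Before: one long report() doing three things inline.
    # After: each task gets its own name, and the top-level function
    # reads like a table of contents.

    def load_records(path):
        with open(path) as f:
            return [line.strip() for line in f if line.strip()]

    def summarize_records(records):
        return {"count": len(records), "longest": max(records, key=len, default="")}

    def format_summary(summary):
        return "count=%(count)d longest=%(longest)s" % summary

    def report(path):
        records = load_records(path)
        summary = summarize_records(records)
        return format_summary(summary)

    # A small test written before the refactor still passes after it:
    assert format_summary({"count": 2, "longest": "abc"}) == "count=2 longest=abc"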
{ "source": [ "https://softwareengineering.stackexchange.com/questions/166884", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/66605/" ] }
166,978
We had JavaScript, then we had Flash, then we had Silverlight, and then HTML5 owned them all. So what is the motivation behind TypeScript? What problems are going to be tackled and what improvements do we get with TypeScript? http://www.typescriptlang.org/
Looks to me like it's a statically-typed, class-based language that compiles down to JavaScript. It's a good idea, and one that others have had as well. The advantages should be obvious to anyone who's developed in both statically-typed, class-based languages and in JavaScript: First and foremost, a compiler. Being able to check for obvious correctness problems caused by typos or carelessness prior to deployment is something that most developers take for granted until they have to do web development, and then it's suddenly yanked out from under their feet. Browsers are designed to make the browsing experience pleasant at the expense of proper testing and debugging facilities, and the standard solution, JSLint, is no substitute for a real compiler as it misses some obvious correctness problems and mixes a bunch of style-checker complaints in with its reports. Having a real compiler is a huge step forward. And along similar lines, a type system . Type systems improve your code by making it easier to read (you know exactly what's being passed into a function, and what it can do, just by looking at the parameter list, for example,) and by enforcing a certain degree of correctness at compile time. (If you're expecting a specific object type, passing an integer is an error. JavaScript will let you do that and then it blows up when you try to run it; a compiler with a type system will catch it and report an error for you.) So as we see, the basic idea is a very good one. Having said that, I can't say anything about the language itself because I have no experience with it. But I've used Smart (linked to above) and found it to be an incredibly powerful and useful tool for web development.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/166978", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/42271/" ] }
167,044
I am looking for an expert programmer to help solve a difficult situation. The interviews so far have been surprisingly disappointing. The best candidate so far is a very experienced programmer who has never used version control software. The problem in itself might not be too serious because it is something which can be learned in a short time. But there is a deeper aspect, which worries me: How is it possible to actively develop software for 10-15 years without ever needing version control? Is the fact itself of not looking for a solution to the problem of tracking changes a sign of a wrong attitude to programming?
I worked for about 11 years in companies that didn't use source control. We managed (mostly by commenting changes and keeping code on a central server that could be recovered to any date). We never really asked whether there was a better way. That said, this was also in the days when I had the entire MSDN library in book form on my desk. Yes, there was programming before the internet. I struggle to see how you can spend 10+ years in the industry now without having run into source control. But, I would have some sympathy, I would believe it was possible and I wouldn't reject the candidate on that one detail alone. I would probe and find out how the candidate has managed this. Alternatively, I might question whether my interview process was the problem. In what way was he the best candidate? Are there other modern programming techniques that he doesn't have that I'm just not asking the right questions for? If I were asking the right questions, would another candidate shine? As a final note though, don't be afraid to reject all candidates if you have concerns. It is time consuming to start over, but it's more time-consuming to hire the wrong person.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/167044", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/55105/" ] }
167,070
Microsoft recently unveiled Typescript, a new JavaScript-like programming language. Some time ago, I heard about Dart, a new programming language created by Google to solve problems related to Javascript like performance, scalability, etc.. The purpose of both new languages seem the same to me.. What do you think? Are the purposes of the languages the same? What are the real differences about them?
Quoting Bob Nystrom : TypeScript seems nice if you like JS semantics or have a large JS codebase that you're invested in but you're having maintenance problems at scale. It's path for success is much smoother since it's (mostly?) backwards compatible with JS. Dart is taking a riskier bet. It's farther from JS in a lot of ways which is, I think, mostly good as a day-to-day Dart programmer, but it makes the barrier of entry higher. But in return for that higher barrier of entry, you get: Tree shaking Getters and setters (though I presume TypeScript will get those eventually) Operator overloading Real block scope, no hoisting, no IIFE s A native VM Sane equality semantics No weird implicit conversion craziness Lexically bound this everywhere Mixins Annotations An import system User-defined subscript operators Generics, with reification Mirrors Better collection classes A cleaner DOM API Also, he writes in http://www.reddit.com/r/programming/comments/10rkd9/welcome_to_typescript/c6g37xd : I'm on Google's Dart team, so I'm naturally looking at it from that angle/bias. Here's some random stuff that caught my eye, mostly comparing it to Dart. I've only spent a few minutes skimming, so don't take any of this too seriously... No generics I guess some types are better than no types at all, but it's really rough to lose those. TypeScript does have built-in array types and object types cover some of the "map" type use cases. But not being able to define your own generic types is a drag. The docs say when added, generics will work using type erasure, which is what I'd expect given it's "compile to lightweight JS" style, but that can be a pain too. It's nice to be able to do stuff with your type arguments at runtime sometimes. All types are nullable Dart is the same way. Makes me sad in both cases. The type annotation syntax is nice Almost every language with optional type annotations (ML, Scala, F#, Kotlin, etc.) goes with "postfix after a :. Dart tries to use C-style type annotations which causes some nasty corner cases. I like what TypeScript has here, especially the syntax for function types: function takeCallback(callback : (n : number) => number) { ... } Interfaces are structurally typed, classes are nominally typed Makes sense given that it's JavaScript, but it seems pretty neat. Being able to implicitly implement an interface is nice. But TypeScript doesn't seem to let you go the other way: given a class, you can't make a new type that's compatible with it without concretely extending it because of the brand stuff. In Dart, thanks to implicit interfaces, you can. Best common type can fail That means this is a type error: [1, true] You can overload in interfaces by parameter signature This is really cool because it gives you a way have more precise type inference flow through a function call that does some dynamic type switching. For example: interface Doubler { double(s : string) : string; double(n : number) : number; } With this, when the compiler sees a call to double, it can correctly give you a precise return type based on the inferred argument type. What I'm not sure is how to actually implement a class that implements that interface and makes the type checker happy. You can't actually overload concrete methods, and my five minute attempt to make it happy by dynamic type checking didn't seem to work. There's a dedicated syntax for array types Makes sense since there's no generics. 
It's also nice and terse, which is good, but I personally prefer general-purpose generics over one-off special case collections. There's no implicit downcasting One of Dart's more unusual type system features is that assignment compatibility is bidirectional: you can downcast without a warning. Aside from the typical special case of assigning to/from any (dynamic in other languages), TypeScript doesn't allow that. You have to type assert. Personally, I like TypeScript's approach here. Arrow functions and lexical this This is just motherhood and apple pie. I like it. (Dart has this too, and this is always lexically bound.) Overall, it looks pretty neat. If you want exactly the same JS semantics (good and bad) but also want a smattering of types, TypeScript seems decent. It's like Closure Compiler but with a better syntax. If you want something that's a more aggressive step away from JS's syntax and semantics, then it seems like TypeScript isn't that.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/167070", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/65770/" ] }
167,147
Computers have trouble storing fractional numbers where the denominator is something other than a solution to 2^x. This is because the first digit after the decimal is worth 1/2, the second 1/4 (or 1/(2^1) and 1/(2^2)) etc. Why deal with all sorts of rounding errors when the computer could have just stored the decimal part of the number as another whole number (which is therefore accurate?) The only thing I can think of is dealing with repeating decimals (in base 10), but there could have been an edge solution to that (like we currently have with infinity).
There are actually modes of numbers that do that. Binary-coded decimal (BCD) arithmetic has the computer work in base 10. The reason you run into this rarely is that it wastes space: each individual digit of a number takes a minimum of four bits, whereas a computer could otherwise store up to 16 values in that space. (It can also be slower, but it's possible to have hardware-accelerated BCD math that works just fine.). This is, in fact, exactly what most calculators do, which is why there are certain classes of rounding problems you'll never hit on a $5 Casio that will eat your lunch on a desktop computer. The other route you can take is to use rational numbers--that is, a numerator and a denominator, stored as integers. This is actually available in nearly all languages, is exact, and allows you to store everything in native binary formats. The problem is that, at the end of the day, users probably do not want to see fractions like 463/13, nor even 35 and 8/13. They want to see 35.615..., and the moment you get there, you face all the typical problems. Add in that this format takes even more space, and can be significantly slower than floating point arithmetic, and you'll find no computers use this format by default. So: computers can do what you want, but it's slow and it wastes space, so they only do it when they really have to. The rest of the time, the speed and space savings of floating point are a better trade-off.
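Python's standard library happens to ship both of the alternatives described above (decimal arithmetic rather than literal BCD, but the same base-10 idea, plus exact rationals), so a quick illustrative comparison looks like this:

    from decimal import Decimal
    from fractions import Fraction

    # Binary floating point: 0.1 has no exact base-2 representation.
    print(0.1 + 0.1 + 0.1 == 0.3)                  # False

    # Decimal arithmetic: exact in base 10, at a cost in space and speed.
    print(Decimal("0.1") * 3 == Decimal("0.3"))    # True

    # Rational numbers: exact, but displayed as a fraction, not a decimal.
    x = Fraction(463, 13)
    print(x)            # 463/13
    print(float(x))     # roughly 35.615384... -- the rounding problem returns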
{ "source": [ "https://softwareengineering.stackexchange.com/questions/167147", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/54164/" ] }
167,185
I've recently noticed a bit of a trend for my projects as of late. I used to run my own SVN server on my VPS, but recently the nail went in the coffin for that when I got my last project migrated from my server to a Mercurial repo on Bitbucket. What are some of the ramifications of this? (disregarding the change in version control systems) It seems like there has been a huge explosion in version control hosting, and companies like Bitbucket even offer private repos for free, and Github and other such services are extremely cheap now. Also, by using them you get the benefit of their infrastructure's speed and stability. What reasons are there these days to host your own version control? The only real reason I can think of is if your source code is super top secret.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/167185", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/483/" ] }
167,250
Automapper is an "object-object mapper" for .Net, which means copying objects from a class into another class that represents the same thing. Why is this ever useful? Is the duplication of classes ever useful/good design?
A quick Google search revealed this example: http://www.codeproject.com/Articles/61629/AutoMapper showing a perfectly valid usage of AutoMapper which is definitely not an example of poor design. In a layered application, you may have objects in your data or business layer, and you sometimes need just a subset of the attributes of those data objects, or some kind of view of them in your UI layer. So you create a view model which contains objects with exactly the attributes you need in your UI, not more, and use AutoMapper to provide the content of those objects with less boilerplate code. In such a situation your "view objects" are not a duplicate of the original class. They have different methods and perhaps a few duplicate attributes. But that's OK as long as you use those view objects only for UI display purposes and don't start to misuse them for data manipulation or business operations. Another topic you may read to get a better understanding of this is Fowler's Command Query Responsibility Segregation pattern, in contrast to CRUD. It shows you situations where different object models for querying data and updating it in a database make sense. Here, mapping from one object model to another may also be done by a tool like AutoMapper.
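The same layering idea, sketched in Python rather than .NET just to show the shape of it; the User/UserView names and fields are invented, and the hand-written mapper is exactly the boilerplate a tool like AutoMapper automates:

    # Domain object: everything the business layer knows about a user.
    class User:
        def __init__(self, user_id, name, email, password_hash, created_at):
            self.user_id = user_id
            self.name = name
            self.email = email
            self.password_hash = password_hash   # must never reach the UI
            self.created_at = created_at

    # View model: just the subset the screen displays.
    class UserView:
        def __init__(self, name, email):
            self.name = name
            self.email = email

    def to_user_view(user):
        # Copying attribute by attribute is what the mapping tool saves you from.
        return UserView(name=user.name, email=user.email)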
{ "source": [ "https://softwareengineering.stackexchange.com/questions/167250", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/57497/" ] }
167,304
I'm an ASP.NET Webforms developer, and I face some problems when I deal with designers. Designers always complain about the asp.net server controls . They'd rather just have an html file and create css files along with the required images to go with those. Sometimes, if the design phase is done in advance, I get html files with related css files, but then we face many problems integrating the design with the aspx files (sever controls an telerik controls ... etc). What I want to ask about is: How do I overcome these problems? The designers prefer php- and mvc developers because of the problems with .net server controls. I need to know how to interact with the designers in the correct way. Are there any tools or applications to provide the designers with the rendered (html page) of the .aspx pages? By that I mean the page in runtime rather than the aspx in Visual Studio. They do use Web Expression but they want the rendered page in html as well.
Former Designer here, turned Dev, and I used to piss and moan about Web Controls too. Honestly, its MUCH cheaper for a designer to adjust their practices than for a .NET Developer to delve into a custom impelmentation of a GridView because the designer INSISTED that each TD have a 'rel' tag (or whatever). As Arseni Mourzenko very wisely pointed out, the decision to use Webforms is a company choice that limits some of the control over the HTML while granting some efficiencies in coding. Unless the company is willing to re-consider (which they should NOT do just to please the designers), then the designers need to accept this reality. Here's a few things they can do: 1) Stop depending on ID's for anything . Even though this felt wrong at first, I found that life was actually a lot easier when I styled everything with classes (and inheritance, of course). First of all, it equalized all my selector weights. In CSS inheritance, ID trumps CLASS. It was actually kinda nice to have everything be a child and/or class selector, and made figuring out the specificity order a little simpler. Same thing in the JS layer, it gave me ZERO pain to swap out my ID-based selectors for class-based ones. 2) Teach them what RadioButtonLists and CheckboxLists convert into , along with Label = span, Panel = div and the other non-obvious control-to-html stuff. The way .NET renders these to HTML was a little weirder than I expected, and it was much easier for me to create screens when I knew how the HTML would come out of those controls. 3) Have them do their designers IN ASPX DIRECTLY , not raw HTML ( !important ) . Teach the designers the basics of GridViews, ListViews, etc. Give them some code snippets to push an anonymous object collection into a Grid/ListView control. If they can learn CSS then they can learn to copy-paste this code in. They can use the free version of VS Web Express, which is quite good at CSS & JS work now. These dummy web projects will give the designers a chance to enter some controls, then View Source to see how they are rendered. 4) Explain how the FORM tag is used in .NET . Forgot about this one earlier, but another thing the designer has to get used to is that usually, a single FORM tag wraps the entire page. This alters the way form controls behave, and you can't nest FORM tags without really weird side effects. Make sure the designers understand this or else their form HTML will be a nightmare to make into WebForms. 5) Stay away from Themes and Skin . Even though the .NET framework has these tools to help style controls across an application, they are clunky and weird to normal Web Designers, and I never found them worth my time. They seem like a nice tool to devs who aren't well versed in CSS, but will only slow down the designers. Let the designers work in their natural environment (html & css files) and they will be happier and more productive. 6) Keep "prototype" projects in your site solutions . To make sure the developers always have a target to code against, have the designers create a fake web project in your real solution to keep their ASPX-only pages preserved and untouched by the real developers. This means the designers can look back to their prototypes in the same solution as the real project to check on how the developers did, and the devs can run the prototype at any time to make sure their work matches the designers' intent. Finally, resist any complaints to convert to MVC, unless you are ready to re-train your Devs. 
I love MVC personally, but if you've got a team with a ton of WebForms knowledge, don't go throwing that away for no reason. If your applications are having ViewState problems, SEO problems, or accessibility issues, then absolutely give MVC a hard look. But it will take a LOT more time to train WebForms devs on MVC than it will to train Designers how to use Web Controls. At the end of the day, there was NO DESIGN I CAME ACROSS, that I couldn't personally make work in WebForms, even if I ended up swearing at that darn GridView for an hour before figuring it out. Are there any tools or applications to provide the designers with the rendered (html page) of the .aspx pages? Forget Expression (I never liked it). Get them the free version of Visual Studio (Web Developer Express). It can hook into whatever source control solution you have, and it will let the designers run their ASPX pages and see the rendered HTML in a browser. The CSS and JS tooling is much better than it used to be, and there are some awesome tools baked into extentions like Web Essentials. 1-click transforming of CSS rules into all their vendor-specific deviations, color pickers and palettes right in the VS interface, 1-click embedding of images INTO css files, 'LESS' CSS transformations (you can 'code' in CSS), F12 'Navigate To' on JavaScript, plus real intellisence, and way more. Its a treasure trove for designers now, FYI, and your designers might not know about it since earlier version of Visual Studio treated CSS/JS as second-class citizens.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/167304", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/26930/" ] }
167,305
I've been using python for a few days now and I think I understand the difference between dynamic and static typing. What I don't understand is under what circumstances it would be preferred. It is flexible and readable, but at the expense of more runtime checks and additional required unit testing. Aside from non-functional criteria like flexibility and readability, what reasons are there to choose dynamic typing? What can I do with dynamic typing that isn't possible otherwise? What specific code example can you think of that illustrates a concrete advantage of dynamic typing?
Since you asked for a specific example, I'll give you one. Rob Conery's Massive ORM is 400 lines of code. It's that small because Rob is able to map SQL tables and provide object results without requiring a lot of static types to mirror the SQL tables. This is accomplished by using the dynamic data type in C#. Rob's web page describes this process in detail, but it seems clear that, in this particular use case, the dynamic typing is in large part responsible for the brevity of the code. Compare with Sam Saffron's Dapper , which uses static types; the SQLMapper class alone is 3000 lines of code. Note that the usual disclaimers apply, and your mileage may vary; Dapper has different goals than Massive does. I just point this out as an example of something that you can do in 400 lines of code that probably wouldn't be possible without dynamic typing. Dynamic typing allows you to defer your type decisions to runtime. That's all. Whether you use a dynamically-typed language or a statically-typed one, your type choices must still be sensible. You're not going to add two strings together and expect a numeric answer unless the strings contain numeric data, and if they do not, you're going to get unexpected results. A statically typed language will not let you do this in the first place. Proponents of statically type languages point out that the compiler can do a substantial amount of "sanity checking" your code at compile time, before a single line executes. This is a Good Thing™. C# has the dynamic keyword, which allows you to defer the type decision to runtime without losing the benefits of static type safety in the rest of your code. Type inference ( var ) eliminates much of the pain of writing in a statically-typed language by removing the need to always explicitly declare types. Dynamic languages do seem to favor a more interactive, immediate approach to programming. Nobody expects you to have to write a class and go through a compile cycle to type out a bit of Lisp code and watch it execute. Yet that's exactly what I'm expected to do in C#.
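To make the "defer type decisions to runtime" point concrete in a dynamically-typed language, here is a toy Python sketch in the spirit of what Massive does with C#'s dynamic; the row contents and names are invented, and this is not Massive's actual code:

    class Record:
        # Wraps a plain dict of column -> value and exposes the columns
        # as attributes, without declaring a class per table.
        def __init__(self, row):
            self._row = dict(row)

        def __getattr__(self, name):
            try:
                return self._row[name]
            except KeyError:
                raise AttributeError(name)

    # The "schema" only exists in whatever data shows up at runtime.
    product = Record({"Name": "Widget", "Price": 9.99})
    customer = Record({"Email": "a@example.com"})

    print(product.Name, product.Price)   # Widget 9.99
    print(customer.Email)                # a@example.com

A statically-typed design would need a declared type (or code generation) for each table; here the shape of the object is simply whatever the query returned.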
{ "source": [ "https://softwareengineering.stackexchange.com/questions/167305", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/66745/" ] }
167,492
I work for a great little software company which makes good revenue from our main software package. The problem for me is that it's almost unmaintainable. It's written in Delphi 7 (has upgraded versions over time) and has been worked on by a lot of developers over the past 20 or so years. The software lacks any meaningful architecture - there's no object orientation whatsoever, horrible amounts of cyclical dependencies and an over-reliance on global variables to name just a few things. Another huge thing for me is Delphi 7 does NOT support 64-bit. The problem here for me is that my management team don't care about technical things, they want to know why they should care. Obviously that's expected, so what I'm asking here is for some guidance, or tales, or pitfalls about this kind of thing. There's a few things I would love to include, namely for me, the length of time taken to debug/write a feature in "legacy" code, versus coherent, well structured OO code. Does anyone know of any blog posts or the like where this is talked about? For us in the company this is a huge reason. Despite being decent developers we feel like writing a new feature is just piling more rubbish on top. On top of that, even for me who has a decent level of understanding of the code, changing things is infuriating - a small change can have a ridiculous domino effect. Anyone have any experiences they'd like to share?
Your software is maintainable. That is why you are working on it. In fact, the revenue they get from selling and supporting that software probably supports your loaded salary. If I was managing something that works, I would leave it alone. If you had a house with a convoluted plumbing architecture, would you pay a plumber to clean up the architecture even though it works perfectly well right now? Probably not. As a software developer, I understand why you want a good architecture, object-orientation, et cetera, so you don't have to sell me, or any software developer. I would bet that software product is in "milk-the-cow" mode. They are just trying to get as much profit out of it as possible from existing customers before it dies. If that is not true, then what are the architectural scenarios coming down the pipe? More of the same data? New data? Platform or tool deprecation? High performance algorithms? Expanding customer base? New features? New platforms? Mobile? If there are no new business conditions which will drive you to change the architecture, then the architecture will not change? If your architectural is at the critical point where the next change will cause an incredibly expensive fix, then you'll have to wait for that to happen before they are willing to pay for a change. The upside of this gig is that they probably can't fire you, because Delphi skills are relatively hard to find. And there is job security when you are supporting a profitable product.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/167492", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/37312/" ] }
167,607
I have a condition if(exists && !isDirectory || !exists) {} how can I modify it, so that it may be more understandable.
|| is commutative so if(!exists || (exists && !isDirectory)) is equivalent. Now because exists is always true in the second part of the || you can drop the && : if(!exists || !isDirectory) Or you can go a step further and do: if(!(exists && isDirectory))
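If you want to convince yourself (or a reviewer) that the rewrite is safe, a brute-force check over all input combinations is cheap; a quick Python sketch, with the flags renamed to snake_case:

    from itertools import product

    def original(exists, is_directory):
        return (exists and not is_directory) or not exists

    def simplified(exists, is_directory):
        return not (exists and is_directory)

    # Only four combinations, so just try them all.
    for exists, is_directory in product([True, False], repeat=2):
        assert original(exists, is_directory) == simplified(exists, is_directory)
    print("equivalent for every input")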
{ "source": [ "https://softwareengineering.stackexchange.com/questions/167607", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/65782/" ] }
167,701
For a few decades the programming language of choice for AI was either Prolog or LISP, along with a few others that are not so well known. Most of them were designed before the 70s. Change happens a lot in many other domain-specific languages, but in the AI domain it hasn't surfaced as much as in web-specific or scripting languages. Are there recent programming languages that were intended to change the game in AI and learn from the shortcomings of former languages?
The AI course I participated in online, taught at Stanford, recommended that Python be used for the homework. I believe Georgia Tech still uses LISP. The fallacy here is "new" is "good". AI research is one of the oldest computing research disciplines. It keeps calving off subfields as people realize that techniques from it can be used elsewhere. Language Processing, Machine Learning, and Data Mining are all examples of "practical" applications that use a huge host of languages. So it's less that the main field has changed than it has been refined into a massive array of related disciplines. It's much like saying "Scientific Computing" and expecting it to just mean solving Linear Equations. The languages you've mentioned have evolved quite a lot on the last 20 or 30 years. Lisp spawned Common Lisp and Clojure. Prolog spawned Visual Prolog (it has objects...) and Mercury (take Haskell and Prolog, lock them in a room together...stand well away and get ready to run). Given that AI research is more theoretic, it makes sense that it would focus on the theory (math) rather than the practicalities (languages). All that being said, the biggest innovator of AI technologies I'd wager is Google. They tend to favor Python (and Go and Dart but that's beside the point). Thus I'd say Python is the "recent language of choice" but you could also use Haskell or OCaml or F# or C# or even Java.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/167701", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/61497/" ] }
167,723
What things should absolutely always/never be included in a header file? Functions: what must go in the header file and what mustn't Constants: Is it good practice to define lot of constants in a header file? Any other good practices for C header files? Example I'm writing a parser for a documented industry standard format that has a lots of constants , is it a good practice to define them in a header file?
What to put in headers: The minimal set of #include directives that are needed to make the header compilable when the header is included in some source file. Preprocessor symbol definitions of things that need to be shared and that can only accomplished via the preprocessor. Even in C, preprocessor symbols are best kept to a minimum. Forward declarations of structures that are needed to make the structure definitions, function prototypes, and global variable declarations in the body of the header compilable. Definitions of data structures and enumerations that are shared amongst multiple source files. Declarations for functions and variables whose definitions will be visible to the linker. Inline function definitions, but take care here. What doesn't belong in a header: Gratuitous #include directives. Those gratuitous includes cause recompilation of things that don't need to be recompiled, and can at times make it so a system can't compile. Don't #include a file in a header if the header itself doesn't need that other header file. Preprocessor symbols whose intent could be accomplished by some mechanism, any mechanism, other than the preprocessor. Lots and lots of structure definitions. Split those up into separate headers. Inline definitions of functions that require an additional #include , that are subject to change, or that are too big. Those inline functions should have little if any fan out, and if they do have fan out, it should be localized to stuff defined in the header. What constitutes the minimal set of #include statements? This turns out to be a nontrivial question. A TL;DR definition: A header file must include the header files that directly define each of the types directly used in or that directly declare each of the functions used in the header file in question, but must not include anything else. A pointer or C++ reference type does not qualify as direct use; forward references are preferred. There is a place for a gratuitous #include directive, and this is in an automated test. For every header file in a software package, I automatically generate and then compile the following: #include "path/to/random/header_under_test" int main () { return 0; } The compilation should be clean (i.e., free of any warnings or errors). Warnings or errors regarding incomplete types or unknown types mean that the header file under test has some missing #include directives and/or missing forward declarations. Note well: Just because the test passes does not mean that the set of #include directives is sufficient, let alone minimal. Related Naming Include Guards
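The "automated test" described above is easy to script. Here is a rough Python sketch that generates and compiles one such translation unit per header; the cc compiler name, the flags, and the directory layout are assumptions to adjust for your own build:

    import pathlib
    import subprocess
    import sys
    import tempfile

    def check_header(header_rel_path, include_root):
        # One tiny .c file that does nothing but include the header under test.
        source = '#include "%s"\nint main(void) { return 0; }\n' % header_rel_path
        with tempfile.TemporaryDirectory() as tmp:
            c_file = pathlib.Path(tmp) / "header_test.c"
            c_file.write_text(source)
            result = subprocess.run(
                ["cc", "-Wall", "-Werror", "-I", str(include_root),
                 "-c", str(c_file), "-o", str(pathlib.Path(tmp) / "header_test.o")],
                capture_output=True, text=True)
            return result.returncode == 0, result.stderr

    if __name__ == "__main__":
        root = pathlib.Path(sys.argv[1] if len(sys.argv) > 1 else ".")
        failures = 0
        for header in sorted(root.rglob("*.h")):
            ok, errors = check_header(header.relative_to(root), root)
            if not ok:
                failures += 1
                print("NOT self-contained:", header, "\n", errors)
        sys.exit(1 if failures else 0)

As noted above, a clean compile only shows the includes are sufficient, not that they are minimal.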
{ "source": [ "https://softwareengineering.stackexchange.com/questions/167723", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/65614/" ] }
167,752
The development team that I'm a member of has recently adapted to work according to Agile practices. This has personally highlighted the fact that I can't stop myself gold-plating code (and documentation) and I consequently exceed original estimates, when I could've delivered solutions that meet the requirements much earlier. I think my ethic is bordering on the obsessive in that I become too attached to my code and am rarely content to release before I've refactored and perfected it to the nth degree. I am happy that I have realised this but how can I change my attitude/mentality to be content with my progress and release on-time instead?
The best is the enemy of good enough. You can always do more testing, write better documentation, ferret out those corner cases, fill in what you think are missing features, make the architecture cleaner. It's never ending. However, it has to end. There are due dates that have to be met, external constraints that depend on your part of the product being finished. Striving for perfection in one small part of a product hurts the product as a whole.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/167752", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/61423/" ] }
167,777
Let's say we have a class called 'Automobile' and we have an instance of that class called 'myCar'. I would like to ask why do we need to put the values that our methods return in a variable? Why don't we just call the method? For example, why should one write: string message = myCar.SpeedMessage(); Console.WriteLine(message); instead of: Console.WriteLine(myCar.SpeedMessage());
Short answer: We don't. Both examples are absolutely fine. There are three reasons why people use temporary variables anyway (like in your first example): It gives an explicit name to the intermediate value (we now know that it's a message , not just any old string). It helps prevent statements getting too long and too complex. It makes step-debugging easier, because you can step over each part individually (although there are step debuggers that work at sub-line precision).
{ "source": [ "https://softwareengineering.stackexchange.com/questions/167777", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/26782/" ] }
167,802
I was reviewing my notes and stumbled across the implementation of different sorting algorithms. As I attempted to make sense of the implementation of QuickSort and MergeSort, it occurred to me that although I do programming for a living and consider myself decent at what I do, I have neither the photographic memory nor the sheer brainpower to implement those algorithms without relying on my notes. All I remembered is that some of those algorithms are stable and some are not. Some take O(nlog(n)) or O(n^2) time to complete. Some use more memory than others... I'd feel like I don't deserve this kind of job if it weren't because my position doesn't require that I use any sorting algorithm other than those found in standard APIs. I mean, how many of you have a programming position where it actually is essential that you can remember or come up with this kind of stuff on your own?
Let's ask Albert and see what he has to say on the subject: “I don't need to know everything, I just need to know where to find it, when I need it” -- Albert Einstein , paraphrased Amen, Brother Albert, Amen. Once you've made a good survey of the essential algorithms in any particular discipline (sort, search, whatever), you can then forget about the implementation details until you actually need the algo, in which case you go look it up or use a preexisting lib. 25 years ago I built a major search system using B*-trees, but today I would need to RTFM in order to use them well.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/167802", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/67089/" ] }
167,975
I realize some things are easier/harder in one language than the other, but I'm only interested in type-related features that are possible in one and impossible/irrelevant in the other. To make it more specific, let's ignore Haskell type extensions since there's so many out there that do all kinds of crazy/cool stuff.
("Java", as used here, is defined as standard Java SE 7 ; "Haskell", as used here, is defined as standard Haskell 2010 .) Things that Java's type system has but that Haskell's doesn't: nominal subtype polymorphism partial runtime type information Things that Haskell's type system has but that Java's doesn't: bounded ad-hoc polymorphism gives rise to "constraint-based" subtype polymorphism higher-kinded parametric polymorphism principal typing EDIT: Examples of each of the points listed above: Unique to Java (as compared to Haskell) Nominal subtype polymorphism /* declare explicit subtypes (limited multiple inheritance is allowed) */ abstract class MyList extends AbstractList<String> implements RandomAccess { /* specify a type's additional initialization requirements */ public MyList(elem1: String) { super() /* explicit call to a supertype's implementation */ this.add(elem1) /* might be overridden in a subtype of this type */ } } /* use a type as one of its supertypes (implicit upcasting) */ List<String> l = new ArrayList<>() /* some inference is available for generics */ Partial runtime type information /* find the outermost actual type of a value at runtime */ Class<?> c = l.getClass // will be 'java.util.ArrayList' /* query the relationship between runtime and compile-time types */ Boolean b = l instanceOf MyList // will be 'false' Unique to Haskell (as compared to Java) Bounded ad-hoc polymorphism -- declare a parametrized bound class A t where -- provide a function via this bound tInt :: t Int -- require other bounds within the functions provided by this bound mtInt :: Monad m => m (t Int) mtInt = return tInt -- define bound-provided functions via other bound-provided functions -- fullfill a bound instance A Maybe where tInt = Just 5 mtInt = return Nothing -- override defaults -- require exactly the bounds you need (ideally) tString :: (Functor t, A t) => t String tString = fmap show tInt -- use bounds that are implied by a concrete type (e.g., "Show Int") "Constraint-based" subtype polymorphism (based on bounded ad-hoc polymorphism) -- declare that a bound implies other bounds (introduce a subbound) class (A t, Applicative t) => B t where -- bounds don't have to provide functions -- use multiple bounds (intersection types in the context, union types in the full type) mtString :: (Monad m, B t) => m (t String) mtString = return mtInt -- use a bound that is implied by another bound (implicit upcasting) optString :: Maybe String optString = join mtString -- full types are contravariant in their contexts Higher-kinded parametric polymorphism -- parametrize types over type variables that are themselves parametrized data OneOrTwoTs t x = OneVariableT (t x) | TwoFixedTs (t Int) (t String) -- bounds can be higher-kinded, too class MonadStrip s where -- use arbitrarily nested higher-kinded type variables strip :: (Monad m, MonadTrans t) => s t m a -> t m a -> m a Principal typing This one is difficult to give a direct example of, but it means that every expression has exactly one maximally general type (called its principal type ), which is considered the canonical type of that expression. In terms of "constraint-based" subtype polymorphism (see above), the principal type of an expression is the unique subtype of every possible type that that expression can be used as. The presence of principal typing in (unextended) Haskell is what allows complete type inference (that is, successful type inference for every expression, without any type annotations needed). 
Extensions that break principal typing (of which there are many) also break the completeness of type inference.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/167975", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/-1/" ] }
168,005
I attended a in-person interview recently and performed well. But surprisingly I got rejected. When I asked the HR for reason, he contacted the technical interviewer and told me that I was syntactically wrong while coding. I used Google Guava for coding. So my code looked like this: List<String> items = Lists.newArrayList() instead of List<String> items =new ArrayList<String>(); I know that the code will compile and work as expected.Is it ok to use third party libraries like Google Guava in interviews?
If you are doing something, non-standard at an interview - ask. Or at least say you are using Guava. I do ask a simple Java question early in an interview. I usually tell candidates they can assume anything they want so long as they tell me what it is. In a simple question like this, if the candidate asked to use Guice, I'd probably say not to - just use core Java. I need to know candidates know core Java. What happens if the interviewer doesn't know the library you chose or it isn't available at the company? (Not everything open source can be used at every company.) Try to avoid being clever at interviews. Too many "Java developers" don't know how to write Java. The interviewer has to assume incompetence before cleverness.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/168005", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/22891/" ] }
168,058
What are graphs, in computer science, and what are they used for? In laymen's terms preferably. I have read the definition on Wikipedia : In computer science, a graph is an abstract data type that is meant to implement the graph and hypergraph concepts from mathematics. A graph data structure consists of a finite (and possibly mutable) set of ordered pairs, called edges or arcs, of certain entities called nodes or vertices. As in mathematics, an edge (x,y) is said to point or go from x to y. The nodes may be part of the graph structure, or may be external entities represented by integer indices or references. but I'm looking for a less formal, easier to understand definition.
A perfect layman's example might be Facebook. The network of you, your friends, and their friends, etc., is collectively referred to as the social graph. In this "graph" the people are considered nodes of the graph and the edges are friendship links. In Facebook, friendship is a bidirectional relationship (A is B's friend => B is A's friend), so the graph is an Undirected Graph. A network like Google+ or Twitter would be considered a Directed Graph since the direction of the relationship has meaning here. All of these graphs are referred to as cyclic graphs, as the relationships between nodes can form cycles. A Family Tree, on the other hand, is a special kind of graph which, among other things, is Acyclic since there cannot be cycles in family tree relationships. (It is technically called a Directed Acyclic Graph (DAG) since it's both directed and acyclic.) This should cover all of the basic jargon involving graphs, so now you should be able to follow the rest of the material in the field.
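A tiny Python sketch of how such graphs are commonly stored in code (adjacency lists; the names are made up): for the undirected "friendship" case each edge is recorded in both directions, for the directed "follows" case only one way.

    from collections import defaultdict

    # Undirected graph: Facebook-style friendship.
    friends = defaultdict(set)
    def add_friendship(a, b):
        friends[a].add(b)
        friends[b].add(a)        # stored both ways: the edge has no direction

    # Directed graph: Twitter-style following.
    follows = defaultdict(set)
    def add_follow(follower, followed):
        follows[follower].add(followed)   # one way only

    add_friendship("alice", "bob")
    add_follow("alice", "carol")

    print(sorted(friends["bob"]))    # ['alice']  -- bob is alice's friend too
    print(sorted(follows["carol"]))  # []         -- carol does not follow alice back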
{ "source": [ "https://softwareengineering.stackexchange.com/questions/168058", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/66745/" ] }
168,089
I like XP (extreme programming), especially the part where there are 2 programmers at the same screen, since a problem's solution is often found more quickly if only you explain what you're doing and pair programming forces you to explain what you're doing. Over the last 10 years or so, the XP style of working seems to have gone out of date in favor of the working methodologies: Agile and/or Kanban. Why? Since XP seems to me to be a very good way to work and is a lot about the programming whereas Agile and Kanban are more about processes.
There are a lot of different styles, methods and mindsets related to the whole field of development, and everything has its own shiny name. Agile is just a mindset that moves away from the usual static models (like waterfall) - its primary goal is to achieve more flexible development and (in the end) better software and happier customers. Below agile, there are a lot of different models like Scrum, Kanban and XP. Kanban in particular doesn't come from software development originally; it originates in building cars (as a reminder, Toyota introduced it for manufacturing, and some software developers adopted and expanded it). Pair programming, code reviews and similar practices are just tools - you can (and should) use them during a project regardless of which method you use. It's just that this stuff is more native to agile than to static models. XP more or less introduced these things (or at least gave them a shiny name) and everything that followed adopted them because they simply worked out well.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/168089", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/12893/" ] }
168,146
I have forgotten a slang programming term. This thing is an intentional bug or a decoy feature used as a distraction. An example usage, "Hey Bob, QA is doing a review today. Put a $THING into the module so they actually have a problem to find". This can be used negatively, to have a very obvious intentional flaw to discover as a distraction from a real problem. This can also be used positively. Its like how you always let rescue dogs 'find' a victim when searching a disaster area. It can also be used to verify that a QA process is actually catching flaws. What is the term I am looking for?
A Duck From http://www.codinghorror.com/blog/2012/07/new-programming-jargon.html : A feature added for no other reason than to draw management attention and be removed, thus avoiding unnecessary changes in other aspects of the product.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/168146", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/23363/" ] }
168,260
I am the only developer working on a web application which is nearing its end. Now we are looking into making it live in maybe a couple of months' time. This is a web application for a non-IT company. Though they have their own internal IT team, they have asked me what the hardware requirements for the live servers will be, e.g. RAM, 32-bit or 64-bit. Shouldn't the internal IT team be doing this, or, since I am the only person working on the project, is it my responsibility to let them know of any specific hardware requirements which may impact the performance of the project? The reason I am asking this question is that I have not done this before. I have always been given a server and asked to deploy apps on it. I never used to worry about the server configuration, etc.
Though they have their own internal IT team, they have asked me on what will be the hardware requirements for the live servers eg. RAM, 32 bit or 64 bit. Perhaps they figure that as the developer, you have more insight into the app's requirements than they do. You've presumably been running the application and know how much memory it requires under different loads. From the IT department's point of view, they're happy to supply whatever your application needs. They could probably figure out what the application requires through trial and error, or they could ask the one guy in the company who's likely to have some insight into the application's behavior for his opinion. It's not uncommon for developers to be asked to do things that aren't strictly in their job description . You might have to write some documentation, even though there's a technical writer on staff. You might need to participate in the testing process even though there's a QA department. Or you might be asked to help write a proposal even though there's a business analyst on the project. This is normal -- you're part of a team, and your main concern should be helping the team succeed. It's also good for you since it expands your experience and helps you understand what the other team members do, and it's good for the company since it spreads knowledge around.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/168260", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/20163/" ] }
168,316
The main idea behind OOP is to unify data and behavior in a single entity - the object. In procedural programming there is data and separately algorithms modifying the data. In the Model-View-Controller pattern the data and the logic/algorithms are placed in distinct entities, the model and the controller respectively. In an equivalent OOP approach shouldn't the model and the controller be placed in the same logical entity?
MVC is an exercise in Separation of Concerns , a UI architecture. It is a way to corral the complexity that can occur in user interfaces due to the presentation not being separated from the content . In theory, all objects can have behavior that operate on the data they contain, and that data and behavior remain encapsulated . In practice, a given OOP object may or may not have logic that corresponds to its data, or may not have any logic at all (a Data Transfer Object , for example). In MVC, the business logic goes in the model, not the controller. The controller is really just a go-between to glue together the View and the Model. So in the model, you can have data and behavior in the same place. But even that arrangement does not guarantee strict data/behavior fusion. Objects containing only data can be operated on by other classes containing only logic, and this is a perfectly acceptable use of OOP. I'll give you a specific example. This is a bit contrived, but let's say you have a Currency object, and that object has the ability to represent itself in any available currency, pegged to the dollar. So you would have methods like: public decimal Yen { get { return // dollars to yen; } } public decimal Sterling { get { return // dollars to sterling; } } public decimal Euro { get { return // dollars to euro; } } ...and that behavior would be encapsulated with the Currency object. But what if I wanted to transfer the currency from one account to another, or deposit some currency? Would that behavior also be encapsulated in the Currency object? No, it wouldn't. The money in your wallet cannot transfer itself out of your wallet into your bank account; you need one or more agents (a teller or ATM) to assist in getting that money into your account. So that behavior would be encapsulated into a Teller object, and it would accept Currency and Account objects as inputs, but it would not contain any data itself, except maybe a bit of local state (or maybe a Transaction object) to help process the input objects.
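To make that last point concrete, here is a rough, hypothetical sketch of the Teller idea in C#. It is deliberately simplified (a bare Account with a balance stands in for the Currency and Account objects above, and all names are my own), but it shows behavior living in an object that holds no data itself.

    using System;

    public class Account
    {
        public decimal Balance { get; set; }   // data only
    }

    public class Teller
    {
        // No fields of its own: pure behavior operating on the objects it is given.
        public void Deposit(Account account, decimal dollars)
        {
            account.Balance += dollars;
        }

        public void Transfer(Account from, Account to, decimal dollars)
        {
            if (from.Balance < dollars)
                throw new InvalidOperationException("Insufficient funds.");

            from.Balance -= dollars;
            to.Balance += dollars;
        }
    }

Both arrangements are perfectly acceptable OOP: the Account encapsulates its own state, while the Teller encapsulates a process that spans more than one object.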
{ "source": [ "https://softwareengineering.stackexchange.com/questions/168316", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/57792/" ] }
168,367
Is there a way this problem could benefit from a solution with multiple threads, rather than a single thread? In an interview, I was asked to solve a problem using multiple threads. It appears to me that the multiple threads lend no benefit. Here's the problem: You are given a paragraph, which contains n words, and you are given m threads. What you need to do is: each thread should print one word and give control to the next thread; this way each thread will keep on printing one word, and when the last thread comes, it should invoke the first thread. Printing will repeat until all the words in the paragraph are printed. Finally, all threads should exit gracefully. What kind of synchronization would you use? I strongly feel we cannot take any advantage of threads here, but believe the interviewer is trying to measure my synchronization skills. Am I missing something in this problem that would make multiple threads have value? No need for code, just put down some thoughts. I will implement it myself.
It sounds to me like they are leading you toward a semaphore solution. Semaphores are used to signal another thread that it's their turn. They are used much less frequently than mutexes, which I guess is why they think it's a good interview question. It's also why the example seems contrived. Basically, you would create m semaphores. Each thread x waits on semaphore x then posts to semaphore x+1 after doing its thing. In pseudocode:

    loop:
        wait(semaphore[x])
        if no more words:
            post(semaphore[(x+1) % m])
            exit
        print word
        increment current word pointer
        post(semaphore[(x+1) % m])
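For readers who want to see that pseudocode fleshed out, here is one possible concrete rendering in C# using SemaphoreSlim (the structure follows the answer; the specific types, names and word list are my own assumptions, not part of the original).

    using System;
    using System.Threading;

    class RoundRobinPrinter
    {
        static void Main()
        {
            string[] words = "this is a short example paragraph".Split(' ');
            int m = 3;        // number of threads
            int next = 0;     // index of the next word to print

            // Each thread waits on its own semaphore; only thread 0 may start.
            var semaphores = new SemaphoreSlim[m];
            for (int i = 0; i < m; i++)
                semaphores[i] = new SemaphoreSlim(i == 0 ? 1 : 0);

            var threads = new Thread[m];
            for (int i = 0; i < m; i++)
            {
                int x = i;    // capture the loop variable for the closure
                threads[x] = new Thread(() =>
                {
                    while (true)
                    {
                        semaphores[x].Wait();                      // wait(semaphore[x])
                        if (next >= words.Length)
                        {
                            semaphores[(x + 1) % m].Release();     // let the next thread notice and exit too
                            return;                                // exit gracefully
                        }
                        Console.WriteLine($"thread {x}: {words[next]}");
                        next++;                                    // increment current word pointer
                        semaphores[(x + 1) % m].Release();         // post(semaphore[(x+1) % m])
                    }
                });
                threads[x].Start();
            }

            foreach (var t in threads) t.Join();
        }
    }

Because only the thread currently holding the "token" touches next, no extra locking is needed; the semaphore handoff itself serializes the work, which also illustrates why the exercise gains nothing from parallelism.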
{ "source": [ "https://softwareengineering.stackexchange.com/questions/168367", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/30609/" ] }
168,494
Is it important to point out the good parts of the code during a code review and the reasons why it is good? Positive feedback might be just as useful for the developer being reviewed and for the others that participate in the review. We are doing reviews using an online tool, so developers can open reviews for their committed code and others can review their code within a given time period (e.g. 1 week). Others can comment on the code or other reviewer's comments. Should there be a balance between positive and negative feedback?
Improve Quality and Morale Using Peer Code Reviews http://www.slideshare.net/SmartBear_Software/improve-quality-and-morale-using-peer-code-reviews Things Everyone Should Do: Code Review http://scientopia.org/blogs/goodmath/2011/07/06/things-everyone-should-do-code-review/ Both of these articles state that one of the purposes of code review is to share knowledge about good development techniques, not just find errors. So I'd say it's very important. Who wants to go to a meeting and only be criticized?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/168494", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/16992/" ] }
168,606
I'm seeing a lot of instantiable classes in the C++ and Java world that don't have any state. I really can't figure out why people do that; they could just use a namespace with free functions in C++, or a class with a private constructor and only static methods in Java. The only benefit I can think of is that you don't have to change most of your code if you later decide that you want a different implementation in certain situations. But isn't that a case of premature design? It could be turned into a class later, when/if it becomes appropriate. Am I getting this wrong? Is it not OOP if I don't put everything into objects (i.e. instantiated classes)? Then why are there so many utility namespaces and classes in the standard libraries of C++ and Java? Update: I've certainly seen a lot of examples of this in my previous jobs, but I'm struggling to find open source examples, so maybe it's not that common after all. Still, I'm wondering why people do it, and how common it is.
I'm seeing a lot of instantiable classes in the C++ and Java world that don't have any state. Some possible reasons to create classes without ivars of their own: State is or could be contained in a superclass. Class implements some interface and needs to be instantiable so that instances can be passed to other objects. Class is intended to be subclassed. Handy way to group related functions. (Yes, there may be better or different ways to do the same.) Am I getting this wrong? Is it not OOP if I don't put everything into objects (i.e. instantiated classes)? OOP is a paradigm, not a law of nature. There are some languages where everything is an object, so you really don't have a choice. Other languages (e.g. C) don't provide any support for OOP at all, but you can still program in an object-oriented style. I'd say you can have OOP if you don't do everything in classes... you might say that you just have less OOP in that case.
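The second reason in that list is worth a tiny, hypothetical example (shown in C#, but the same idea applies directly to Java or C++): the class below has no state at all, yet it must be instantiable because an instance of it is what gets handed to other code through an interface.

    using System;
    using System.Collections.Generic;

    // No fields: it exists only so an instance can be passed around.
    class CaseInsensitiveComparer : IComparer<string>
    {
        public int Compare(string x, string y) =>
            string.Compare(x, y, StringComparison.OrdinalIgnoreCase);
    }

    class Program
    {
        static void Main()
        {
            var names = new List<string> { "bob", "Alice", "carol" };

            // List<T>.Sort wants an object implementing IComparer<T>, not a free function.
            names.Sort(new CaseInsensitiveComparer());

            Console.WriteLine(string.Join(", ", names));   // Alice, bob, carol
        }
    }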
{ "source": [ "https://softwareengineering.stackexchange.com/questions/168606", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/53493/" ] }
168,685
Wikipedia's definition of software rot focuses on the performance of the software. This is a different usage than I am used to; I had thought of it much more in terms of the cleanliness and design of the code—in terms of the code's having all the standard quality characteristics: readability, maintainability, etc. Now, performance is likely to go down when the code becomes unreadable, because no one knows what is going on. But does the term software rot have special reference to performance? Or am I right in thinking it refers to the cleanliness of the code? Or is this perhaps a case of multiple senses of the term being in common usage—from the user's perspective, it has to do with performance; but for the software craftsman, it has to do more specifically with how the code reads?
The term is not performance related, at least not anywhere I have seen it used. It is specifically about code that is not maintained well and becomes... dirty... rotten. It is about code whose design has not been updated as changes were made and is difficult to read and understand.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/168685", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/47624/" ] }
169,920
I'm a Python programmer learning C# who is trying to stop worrying and just love C# for what it is, rather than constantly comparing it back to Python. I'm caught up on one point: the lack of explicitness about where things are defined, as detailed in this Stack Overflow question . In short: in C#, using foo doesn't tell you what names from foo are being made available, which is analogous to from foo import * in Python -- a form that is discouraged within Python coding culture for being implicit rather than the more explicit approach of from foo import bar . I was rather struck by the Stack Overflow answers to this point from C# programmers, which was that in practice this lack of explicitness doesn't really matter because in your IDE (presumably Visual Studio) you can just hover over a name and be told by the system where the name is coming from. E.g.: Now, in theory I realise this means when you're looking with a text editor, you can't tell where the types come from in C#... but in practice, I don't find that to be a problem. How often are you actually looking at code and can't use Visual Studio? This is revelatory to me. Many Python programmers prefer a text editor approach to coding, using something like Sublime Text 2 or vim, where it's all about the code, plus command line tools and direct access and manipulation of folders and files. The idea of being dependent on an IDE to understand code at such a basic level seems anathema. It seems C# culture is radically different on this point. And I wonder if I just need to accept and embrace that as part of my learning of C#. Which leads me to my question here: is C# development effectively inseparable from the IDE you use?
Visual Studio is so convenient that after working with it for a while it is difficult to use a different IDE. It has a lot of handy tools and a bunch of plugins available, so practically it has every feature you would need. On the other hand, whatever language you learn, it is recommended to use the command line at the beginning, so you can better understand how things work. C# isn't an exception. Is C# development effectively inseparable from the IDE you use? Theoretically no, but practically yes. It is possible to write C# using a text editor and the command line, but if you have Visual Studio, you'd never do this. In fact, very few programmers have ever compiled and run C# code from the command line. By the way, if you find using foo inconvenient, you can write out the whole namespace path when using a type.
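For example (a trivial illustration of that last suggestion, not taken from the answer), you can skip the using directive entirely and spell out the namespace at the point of use, so the origin of a type is obvious in any plain text editor:

    namespace Demo
    {
        class Program
        {
            static void Main()
            {
                // No "using System.Text;" needed; where StringBuilder comes from is explicit.
                var sb = new System.Text.StringBuilder();
                sb.Append("Hello, world");
                System.Console.WriteLine(sb.ToString());
            }
        }
    }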
{ "source": [ "https://softwareengineering.stackexchange.com/questions/169920", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/68797/" ] }
169,925
So looking around earlier I noticed some comments about long methods being bad practice. I am not sure I always agree that long methods are bad (and would like opinions from others). For example I have some Django views that do a bit of processing of the objects before sending them to the view, a long method being 350 lines of code. I have my code written so that it deals with the parameters - sorting / filtering the queryset, then bit by bit does some processing on the objects my query has returned. So the processing is mainly conditional aggregation, that has complex enough rules it can't easily be done in the database, so I have some variables declared outside the main loop then get altered during the loop. variable_1 = 0 variable_2 = 0 for object in queryset : if object.condition_condition_a and variable_2 > 0 : variable 1+= 1 ..... ... . more conditions to alter the variables return queryset, and context So according to the theory I should factor out all the code into smaller methods, so That I have the view method as being maximum one page long. However having worked on various code bases in the past, I sometimes find it makes the code less readable, when you need to constantly jump from one method to the next figuring out all the parts of it, while keeping the outermost method in your head. I find that having a long method that is well formatted, you can see the logic more easily, as it isn't getting hidden away in inner methods. I could factor out the code into smaller methods, but often there is is an inner loop being used for two or three things, so it would result in more complex code, or methods that don't do one thing but two or three (alternatively I could repeat inner loops for each task, but then there will be a performance hit). So is there a case that long methods are not always bad? Is there always a case for writing methods, when they will only be used in one place? UPDATE: Looks like I asked this question a over a year ago. So I refactored the code after the (mixed) response here, split it into methods. It is a Django app retrieving complex sets of related objects from the database, so the testing argument is out (it would have probably taken most of the year to create relevant objects for the test cases . I have a "this needs done yesterday" type work environment before anyone complains). Fixing bugs in that part of the code is marginally easier now, but not massively so. before : #comment 1 bit of (uncomplicated) code 1a bit of code 2a #comment 2 bit of code 2a bit of code 2b bit of code 2c #comment 3 bit of code 3 now: method_call_1 method_call_2 method_call_3 def method_1 bit of (uncomplicated) code 1a bit of code 2a def method_2 bit of code 2a bit of code 2b bit of code 2c def method_3 bit of code 3
No, long methods are not always bad. In the book Code Complete, measurements show that long methods are sometimes faster and easier to write, and don't lead to maintenance problems. In fact, what is really important is to stay DRY and respect separation of concerns. Sometimes, the computation is simply long to write, but really won't cause issues in the future. However, from my personal experience, most long methods tend to lack separation of concerns. In fact, long methods are an easy way to detect that something MAY be wrong in the code, and that special care is required when doing a code review. EDIT: As comments were made, I'll add an interesting point to the answer. I would in fact also check complexity metrics for the function (NPATH, cyclomatic complexity, or even better, CRAP). In fact, I recommend not just checking such metrics on long functions, but including alerts on them with automated tools (such as Checkstyle for Java, for instance) ON EVERY FUNCTION.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/169925", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/68808/" ] }
169,962
I've been looking at some job postings and noticed that a fair amount of them list IDEs under the 'required skills' section, even for senior positions. This is not localized to one company either, but rather it's something that comes up once in every few postings. I am perplexed by this job requirement, as my mentors and some of the best coders I've seen in my life were VIM/Emacs ninjas. Similarly, when I work with people I don't much care what tools they use as long as they are productive on the team. Can someone please explain the rationale behind hiring managers making IDEs an official job requirement?
If the organization has standardized on a singular IDE or development environment, then they might call that out in the job description/posting since it's a skill that would separate one candidate from another during the screening and interviewing process. However, just because it's a requirement doesn't mean that it's really a requirement and companies might hire someone who doesn't meet every single identified "requirement" .
{ "source": [ "https://softwareengineering.stackexchange.com/questions/169962", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/63971/" ] }
170,003
I have been programming for a couple of years and am generally good when it comes to fixing problems and creating small-to-medium scripts; however, I'm generally not good at designing large-scale programs in an object-oriented way. A few questions: Recently, a colleague with the same number of years of experience as me and I were working on a problem. I was working on the problem longer than him; however, he came up with a better solution and in the end we're going to use his design. This really affected me. I admit his design is better, but I wanted to come up with a design as good as his. I'm even contemplating quitting the job. Not sure why, but suddenly I feel under some pressure, e.g. what would the juniors think of me, etc.? Is this normal? Or am I reading a little too much into this? My job involves programming in Python. I try to read source code, but how do you think I can improve my design skills? Are there any good books or software that I should study? Please enlighten me. I will really appreciate your help.
I think this is a very positive sign of your skills. It is far more common for people who have difficulty coming up with the 'better' design in a team to be completely incapable of recognizing why another design is better. You have two really great (and surprisingly uncommon) strengths going for you: You are capable of assessing your designs against others objectively You have the desire and put forth effort to make your designs optimal You're only a couple years in and have a long way to go, but with this attitude you will definitely get there, just don't give up; we all deal with mental set backs like this. As often as I get a chance I like to plug Design Principles (NOT the same as design patterns) and I think this is a perfect example of where they come in handy. Study them and practice applying them in your designs, you will before you know it have taken another step forward in this regard. At the end of the day remember, designing is hard. We're dealing with complex high level abstractions every day, to create these from thin air, have them work well, and easy to use by colleagues is an extremely difficult task. It takes practice, for years . So chin up and just remember: there's a bunch of folks out there who can't assess two designs and actually recognize one as preferable over another, how well do you think they're getting along in creating good designs? Edit: 'nother tip, after getting your head around principles and practicing their application a bit, I think there is another gem from another question here speaking to the value of studying a variety of languages which have different purposes and rules: Ideally, every programmer should know a language from each class. What could you learn: A static typed OOP mainstream language: Java, C# (mostly used in enterprise software) and C++ (system programming and complex desktop applications) A prototype-based OOP language: Javascript (client side web programming) A procedural language: C (embedded software and system programming) A functional language: Haskell, ML or Lisp (functional languages are good for highly parallelized software). A logic programming language (Prolog) probably is not that useful in industry, being used mostly in research in AI. This will help to broaden the variety of ideas that come to mind when trying to design a solution.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/170003", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/68841/" ] }
170,093
When I'm using C# to write some code and I define an interface using Visual Studio 2010, it always includes a number of "using" statements (as shown in the example) using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace TestEngine.TestNameSpace { interface ITest1 { bool testMethod(int xyz); } } I wonder what these are for and if they are really necessary. Can I leave these out? Are they only necessary when I'm using those parts in my interface description?
Visual Studio doesn't know what code you intend to write, so it includes the most common namespaces for you by default in the "new class" template. This is done so you don't have to resolve all the references for every single line of new code you write. Once you have written your basic code, you can safely remove the ones you don't need. You will have to add back any that you need to reference in subsequent code you write. If you right-click and select Organize Usings > Remove and Sort, it will delete any that are unused. There are also extensions that will remove and sort the namespaces automatically on saving each file.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/170093", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/26120/" ] }
170,138
Say we have a list of Task entities, and a ProjectTask sub type. Tasks can be closed at any time, except ProjectTasks which cannot be closed once they have a status of Started. The UI should ensure the option to close a started ProjectTask is never available, but some safeguards are present in the domain: public class Task { public Status Status { get; set; } public virtual void Close() { Status = Status.Closed; } } public class ProjectTask : Task { public override void Close() { if (Status == Status.Started) throw new Exception("Cannot close a started Project Task"); base.Close(); } } Now when calling Close() on a Task, there is a chance the call will fail if it is a ProjectTask with the started status, when it wouldn't if it was a base Task. But this is the business requirements. It should fail. Can this be regarded as a violation of the Liskov substitution principle ?
Yes, it is a violation of the LSP. Liskov Substitution Principle requires that Preconditions cannot be strengthened in a subtype. Postconditions cannot be weakened in a subtype. Invariants of the supertype must be preserved in a subtype. History constraint (the "history rule"). Objects are regarded as being modifiable only through their methods (encapsulation). Since subtypes may introduce methods that are not present in the supertype, the introduction of these methods may allow state changes in the subtype that are not permissible in the supertype. The history constraint prohibits this. Your example breaks the first requirement by strengthening a precondition for calling the Close() method. You can fix it by bringing the strengthened pre-condition to the top level of the inheritance hierarchy: public class Task { public Status Status { get; set; } public virtual bool CanClose() { return true; } public virtual void Close() { Status = Status.Closed; } } By stipulating that a call of Close() is valid only in the state when CanClose() returns true you make the pre-condition apply to the Task as well as to the ProjectTask , fixing the LSP violation: public class ProjectTask : Task { public override bool CanClose() { return Status != Status.Started; } public override void Close() { if (!CanClose()) throw new Exception("Cannot close a started Project Task"); base.Close(); } }
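To see why this restores substitutability, consider a small hypothetical caller (the TaskCloser class and NotifyUser method below are my own illustration, not part of the answer). Because the precondition is now expressed on the base type, the same code works for a Task and a ProjectTask alike, with no surprises:

    public class TaskCloser
    {
        public void TryClose(Task task)
        {
            if (task.CanClose())
                task.Close();        // safe for any Task subtype
            else
                NotifyUser("This task cannot be closed in its current state.");
        }

        private void NotifyUser(string message)
        {
            // e.g. show a message in the UI; omitted here
        }
    }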
{ "source": [ "https://softwareengineering.stackexchange.com/questions/170138", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/33723/" ] }
170,222
I was following this highly voted question on possible violation of Liskov Substitution principle. I know what the Liskov Substitution principle is, but what is still not clear in my mind is what might go wrong if I as a developer do not think about the principle while writing object-oriented code.
I think it's stated very well in that question which is one of the reasons that was voted so highly. Now when calling Close() on a Task, there is a chance the call will fail if it is a ProjectTask with the started status, when it wouldn't if it was a base Task. Imagine if you will: public void ProcessTaskAndClose(Task taskToProcess) { taskToProcess.Execute(); taskToProcess.DateProcessed = DateTime.Now; taskToProcess.Close(); } In this method, occasionally the .Close() call will blow up, so now based on the concrete implementation of a derived type you have to change the way this method behaves from how this method would be written if Task had no subtypes that could be handed to this method. Due to liskov substitution violations, the code that uses your type will have to have explicit knowledge of the internal workings of derived types to treat them differently. This tightly couples code and just generally makes the implementation harder to use consistently.
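A hypothetical sketch of how that coupling tends to look in practice (this variation is mine, not part of the answer): once the subtype can reject Close(), every caller ends up sniffing for concrete types and duplicating the subtype's internal rules.

    public void ProcessTaskAndClose(Task taskToProcess)
    {
        taskToProcess.Execute();
        taskToProcess.DateProcessed = DateTime.Now;

        // The caller now has to know about ProjectTask's internal rules:
        if (taskToProcess is ProjectTask projectTask && projectTask.Status == Status.Started)
        {
            // special-case handling that will be repeated wherever tasks get closed
            return;
        }

        taskToProcess.Close();
    }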
{ "source": [ "https://softwareengineering.stackexchange.com/questions/170222", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/60189/" ] }
170,246
Possible Duplicate: How to manage a Closed Source High-Risk Project? I'm working on an institution that has a really strong sense of "possession" - each line of software we write should be only ours. Ironically, I'm the only programmer (ATM), but we're planning in hiring others. Since my bosses wouldn't count the new programmers as people they can trust, they have an issue with the copies of the source code. We use Git, so they would have a entire copy of each of the projects they work on, when they clone the repository. We can restrict access to them to a single key with Gitolite and bind that to their PC's, but they can copy those keys to another computer and they would have the repository access in another PC. Also (and the most obvious method) they could just upload the files somewhere else, add another remote, or just copy the files to an USB drive. Is there any (perhaps clever) way to prevent events like these? EDIT: I would like to thank everyone for their insights in this question, since it has been not only more eye opening, but also a firm support of my arguments (since you basically think like me, and I've been trying to make them understand that) against my bosses in the near future. I am in a difficult situation work-wise, with my coworkers and bosses (since I'm basically in the middle) being like two gangs, so all this input is greatly, greatly appreciated. It is true that I was looking for a technical solution to a people problem - both the management and the employees are the problem, so it can't be solved that way (I was thinking about some code obfuscation , perhaps working with separate modules, etc., but that wouldn't work from my developer POV). The main problem is the culture inside and outside the company - development is not taken seriously in my country (Venezuela) so naivity and paranoia are in fact a real issue in here. The real answer here is an NDA (something that here in Venezuela doesn't completely work), because that's the people solution, because no sane developer would work in those conditions. Things will get ugly, but I think I will be able to handle that because of your help. Thank you all a lot! <3
This is one of the situations where you are looking for a technical solution to a social problem . A social problem should require a social solution, which, in this case, takes two complementary forms and an additional organizational solution which may help: Trust. If you don't trust developers, don't hire them. Working with people you don't trust is synonymous of failure. Relations based on mistrust require a lot of formalism, which may severely impact not only the productivity of your employees, but also the number of persons ready to work with you. Chances are, the best developers will avoid your company at all costs. NDA. Trusting someone doesn't mean you shouldn't take legal precautions. Those precautions can take a form of a contract or a NDA clause with severe consequences for the employee in a case of a disclosure. How severe are the consequences depends on who you are. Government organizations, terrorists or mafia can permit some deterrent ones. Ordinary companies may be limited, by law, to financial ones only. Slicing. Trust and contracts are a good start, but we can do better. If the sensitive part of the code base can be sliced so that two or more parts are required for the product to function, make sure that the developer from department 1 never sees the source code developed in department 2, and vice versa. People from one department shouldn't be able to meet people from other departments, and ideally, they shouldn't even be able to guess what other departments are doing, nor how much departments are there. Each person knows only a small part, which is not enough to have an entire picture (and reconstruct an entire product outside the organization). Those were social and organizational measures. Now, technically speaking, there is nothing you can do. You may try to: Force the developers to work in a closed room on a machine which is not connected to the internet and doesn't have USB ports. Install cameras which monitor everything which happens in the room, with several security officers constantly observing the developers working. Strip-search every developer each time he leaves the room to be sure he don't have any electronic device which can hold the code. Require every developer to have an ankle monitor. The device will listen to what they say, record their position and attempt to detect any electronic device nearby. If the developer was near a device which is not identified and doesn't have your tracking software installed on it, private investigators and hackers may attempt to check whether the developer wasn't using the device to leak information. Forbid developers to leave your buildings, unless being under heavy surveillance, and to interact in any way with the outside world. Some or all those measures are illegal in many countries (unless you represent some government agencies ), but the worst part is that even with all those measures in place, developers will be able to get the code, for example by discretely writing it on their skin or on a piece of paper and hiding it in their clothes, or simply memorizing it if they have Eidetic memory . Or they can just globally memorize the data structures and the algorithms—that is the only important thing where intellectual property matters—and create their own product inspired by those two things.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/170246", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/42155/" ] }
170,393
As I understand, the term "Backporting" is used to describe a fix which is applied in a future version which is also ported to a previous version. Wikipedia definition is as follows: Backporting is the action of taking a certain software modification (patch) and applying it to an older version of the software than it was initially created for. It forms part of the maintenance step in a software development process... For example: A problem is discovered and fixed in V2.0. The same fix is ported and applied to V1.5. What is the term when this is done in the opposite direction? The problem is discovered and fixed in V1.5. The same fix is ported and applied to V2.0. Would the term "Backporting" still apply? Or is there a term such as "Forwardporting" (which amusingly sounds a lot like "Port Forwarding")?
It's the same as the opposite of a backslash. Everyone wants to call it a forward slash, but really it's just a "slash." The opposite of backporting is simply "porting."
{ "source": [ "https://softwareengineering.stackexchange.com/questions/170393", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/69153/" ] }
170,440
My coworkers and I have been bending our minds to figuring out why anyone would go out of their way to program numbers in a base other than base 10. I suggested that perhaps you could optimize longer equations by putting the variables in the correct base you are working with (for instance, if you have only sets of 5 of something with no remainders you could use base 5), but I'm not sure if that's true. Any thoughts?
The usual reason for writing numbers, in code, in other than base 10, is because you're bit-twiddling. To pick an example in C (because if C is good for anything, it's good for bit-twiddling), say some low-level format encodes a 2-bit and a 6-bit number in a byte: xx yyyyyy :

    #include <stdio.h>

    int main(void)
    {
        unsigned char codevalue = 0x94; // 10 010100
        printf("x=%d, y=%d\n", (codevalue & 0xc0) >> 6, (codevalue & 0x3f));
    }

produces

    x=2, y=20

In such a circumstance, writing the constants in hex is less confusing than writing them in decimal, because one hex digit corresponds neatly to four bits (half a byte; one 'nibble'), and two to one byte: the number 0x3f has all bits set in the low nibble, and two bits set in the high nibble. You could also write that printf line in octal:

    printf("x=%d, y=%d\n", (codevalue & 0300) >> 6, (codevalue & 077));

Here, each digit corresponds to a block of three bits. Some people find that easier to think with, though I think it's fairly rare these days.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/170440", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/69239/" ] }
170,463
I searched on the web for how to access a central database at a remote location in an efficient manner, and I came across suggestions to use web services instead of direct access (i.e. JDBC, etc.) to the database. I wonder about the reason for that, and I'd welcome any other suggestions.
Adding a web service layer gives you an opportunity to make your client more lightweight, both in terms of the required CPU power and the bandwidth used during the processing. Both factors are extremely important to end-users: Using less CPU increases the battery life, Using less bandwidth reduces monthly payments for users with metered plans By introducing a web application layer you move the bulk of the processing from a hand-held mobile low-power, low bandwidth, low-memory client to a plugged-in, high-power high-bandwidth, server that has more memory than it needs - an environment where processing and communications cost a fraction of what they cost on a client. But wait, there is something in it for you as well: by splitting the system you get more control over your business rules, the structure of your database, and the versions of what's out there. Once you let a mobile client connect directly to the database, your design is "married" to that database structure: almost any change would break backward compatibility to a client that may be reluctant to upgrade his app. In contrast, adding a web service in between lets you evolve the interface to mobile clients in more manageable ways: for example, you could keep the old interface in place, add a new one that works "in parallel" with it, and then entirely restructure your database without breaking a single client. If you follow some pretty basic design principles while designing your web service, you could also get significant benefits by reusing mature server-side infrastructure that has been put in place: for example, you can get cache and proxy services for free. Finally, this will open the door to other developers exposing your application to platforms that you could not service yourself, ultimately playing to your company's advantage.
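As a hypothetical sketch of what this buys the client (the class, method and URL below are made up for illustration): the mobile app only ever speaks HTTP and JSON to a versioned endpoint, so the heavy processing stays on the server and the database can be restructured without shipping a new client.

    using System.Net.Http;
    using System.Threading.Tasks;

    class ProductClient
    {
        private static readonly HttpClient Http = new HttpClient();

        public async Task<string> GetProductJsonAsync(int id)
        {
            // Queries, joins and aggregation all happen server-side;
            // the client just downloads a small JSON payload.
            return await Http.GetStringAsync($"https://api.example.com/v1/products/{id}");
        }
    }

Contrast that with embedding a JDBC-style connection string and SQL in the app itself, where every schema change risks breaking installed clients.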
{ "source": [ "https://softwareengineering.stackexchange.com/questions/170463", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/57822/" ] }
170,522
I see more and more articles about logging in JSON. You can also find one on NodeJS blog. Why does everyone like it so much? I can only see more operations getting involved: A couple new objects being created. Stringifying objects, which either involves calculating string length or multiple string allocations. GCing all the crap that was created. Is there any test on performance when using JSON logging and regular string logging? Do people use JSON (for logging) in enterprise projects?
JSON logging gives you the ability to parse the log file programmatically even if the format has changed over time. A good example is Apache logs. By default Apache uses the common format for access.log: "%h %l %u %t \"%r\" %>s %b" Say that you have built an offline parser that takes one of those log files and calculates some statistics from it. At some moment you introduce subdomains to your application and include the virtual_host in your logs (just so you can debug if problems appear with one of the subdomains): "%v %h %l %u %t \"%r\" %>s %b" Your parser doesn't make use of the virtual_host, but you still need to adapt your parser to accept the new log format (notice the %v at the head of the log format) and still support the old log format (for older log files). But if you log in JSON, your parser won't even notice the added field and can happily parse the new logs as well as the old logs. And some other parser can make use of the added fields if they exist. And of course, for you, parsing JSON is easier than writing regexps to parse string logs.
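Here is a small hypothetical C# illustration of that robustness (the type and field names are mine): a JSON log line gains a new field, and a parser written before that field existed still handles it without modification.

    using System;
    using System.Text.Json;

    public class LogEntry
    {
        public string Client { get; set; }
        public string Request { get; set; }
        public int Status { get; set; }
    }

    public class Program
    {
        public static void Main()
        {
            // "VirtualHost" was added to the log format later; this parser predates it.
            string line = "{\"Client\":\"10.0.0.1\",\"Request\":\"GET /\",\"Status\":200,\"VirtualHost\":\"blog.example.com\"}";

            // Unknown properties are simply ignored during deserialization,
            // so the old parser keeps working on the new format.
            LogEntry entry = JsonSerializer.Deserialize<LogEntry>(line);
            Console.WriteLine($"{entry.Client} {entry.Status}");   // 10.0.0.1 200
        }
    }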
{ "source": [ "https://softwareengineering.stackexchange.com/questions/170522", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/53462/" ] }
170,579
I'm only going on what I've read on SO, so forgive me, but all I read says that one major advantage of Git over Subversion is that Git gives all the source code to the developer locally, not having to do anything on the server. With my limited using of SVN and TortoiseSVN, I had all the source code, or at least I thought I did. For example, I have a website. I upload it to SVN. I am still running my website locally, aren't I? If someone submits a change and I'm not connected, it wouldn't matter if I had Git or not, until I reconnect to the server. I do not understand. I'm not asking for a rehash of one vs. the other except this one point.
The premise you are questioning really is wrong: that one major advantage of Git over Subversion is that Git gives all the source code to the developer locally With both Subversion and Git you have your source code locally. With Git you have both your source code and a repository on your local machine. It goes something like this. Subversion: Your code <-> The Repository Git: Your code <-> Your local repository <-> A remote repository (... <-> another remote repo, and so on) One benefit you get from this structure is that you can still use source control and commit your local changes to your local repository without disturbing the work of other team members (with whom you share the remote repository). With Subversion you'd have to either risk breaking the build for other people or suffer prolonged local development without any source control which ends with a huge commit (or more likely a revert). With Git, on the other hand, you'd feel free to commit these changes to your local repository, view logs and diffs or your changes, and only when you feel it is ready to be shared with the team push the changes from the local repository to the remote one.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/170579", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/8802/" ] }
170,594
In Scott Chacon's workflow (explained e.g. in this SO answer), with essentially two silos (development and master), if, say, I have a small bug to fix (e.g. one that can be fixed with a few characters), which is the optimal way of doing that: a) branch off of development a branch called e.g. fix_123, push this branch to origin as I work on it, and when it's done, code-reviewed, whatever, merge it into development and push development to origin; or b) the same as above, but without pushing fix_123 to origin?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/170594", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/69378/" ] }
170,609
I recently saw this question over at math.SE. It got me thinking: could Pi be used as a crude random number generator? I mean, the results are well known (how far has pi been computed by now?), but Pi does seem to be quite random when taken 1 digit at a time. Does this make any sense at all?
Digging from http://www.befria.nu/elias/pi/binpi.html to get the binary value of pi (so that it was easier to convert into bytes rather than trying to use decimal digits) and then running it through ent I get the following for an analysis of the random distribution of the bytes: Entropy = 7.954093 bits per byte. Optimum compression would reduce the size of this 4096 byte file by 0 percent. Chi square distribution for 4096 samples is 253.00, and randomly would exceed this value 52.36 percent of the times. Arithmetic mean value of data bytes is 126.6736 (127.5 = random). Monte Carlo value for Pi is 3.120234604 (error 0.68 percent). Serial correlation coefficient is 0.028195 (totally uncorrelated = 0.0). So yes, using pi for random data would give you fairly random data... realizing that it is well known random data. From a comment above... Depending on what you are doing, but I think you can use the decimals of the square root of any prime number as a random number generator. These should at least have evenly distributed digits. – Paxinum So, I computed the square root of 2 in binary to undetake the same set of problems. Using Wolfram's Iteration I wrote a simple perl script #!/usr/bin/perl use strict; use Math::BigInt; my $u = Math::BigInt->new("2"); my $v = Math::BigInt->new("0"); my $i = 0; while(1) { my $unew; my $vnew; if($u->bcmp($v) != 1) { # $u <= $v $unew = $u->bmul(4); $vnew = $v->bmul(2); } else { $unew = ($u->bsub($v)->bsub(1))->bmul(4); $vnew = ($v->badd(2))->bmul(2); } $v = $vnew; $u = $unew; #print $i," ",$v,"\n"; if($i++ > 10000) { last; } } open (BITS,"> bits.txt"); print BITS $v->as_bin(); close(BITS); Running this for the first 10 matched A095804 so I was confident I had the sequence. The value v n as when written in binary with the binary point placed after the first digit gives an approximation of the square root of 2. Using ent against this binary data produces: Entropy = 7.840501 bits per byte. Optimum compression would reduce the size of this 1251 byte file by 1 percent. Chi square distribution for 1251 samples is 277.84, and randomly would exceed this value 15.58 percent of the times. Arithmetic mean value of data bytes is 130.0616 (127.5 = random). Monte Carlo value for Pi is 3.153846154 (error 0.39 percent). Serial correlation coefficient is -0.045767 (totally uncorrelated = 0.0).
{ "source": [ "https://softwareengineering.stackexchange.com/questions/170609", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/483/" ] }
170,704
Possible Duplicate: (Why) Should I learn a new programming language? I came across a line in this article which says: "Learn one programming language every year". Why do good programmers suggest learning more programming languages? We could end up a jack of all trades and master of none in this case.
I have not read the article, but I have seen this specific piece of advice many times over; it is common knowledge. The original piece of advice does not refer to mastery that comes after years of experience, but to exposure to different paradigms that will provide the individual with different approaches to solving problems. So there is a footnote missing somewhere that would say: "Different programming paradigms". A good but not strictly relevant post on how different approaches to programming affect problem solving is this post by Steve Yegge: Notes from the Mystery Machine Bus.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/170704", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/-1/" ] }
170,731
Possible Duplicate: Where do you draw the line for your perfectionism? I see that the development community is very focused on doing things the right way, and personally I would like to do the same. However, is it a good or bad idea for a newbie to focus on design principles, design patterns, and commenting code when getting started, or is it better to let creativity run wild and potentially write sloppy code? Where should a newbie draw the line?
Perfectionism is a poor developer's excuse for not getting the work done. Larry Wall, in the glossary of the first edition of Programming Perl , famously notes: We will encourage you to develop the three great virtues of a programmer: laziness, impatience, and hubris. In the second edition of the book the terms are further defined: Laziness: The quality that makes you go to great effort to reduce overall energy expenditure. It makes you write labor-saving programs that other people will find useful, and document what you wrote so you don't have to answer so many questions about it. Hence, the first great virtue of a programmer. Also hence, this book. See also impatience and hubris. (p.609) Impatience: The anger you feel when the computer is being lazy. This makes you write programs that don't just react to your needs, but actually anticipate them. Or at least pretend to. Hence, the second great virtue of a programmer. See also laziness and hubris. (p.608) Hubris: Excessive pride, the sort of thing Zeus zaps you for. Also the quality that makes you write (and maintain) programs that other people won't want to say bad things about. Hence, the third great virtue of a programmer. See also laziness and impatience. (p.607) While Perl is not everyone's cup of tea, the ideas expressed in the above quote apply to programming in general. Very recently I assigned what I thought was an easy task to a relatively new developer, and gave him explicit guidelines to skip certain "best practices". The task was performance sensitive, a background process that would receive a huge lump of XML from a webservice (that we didn't control), transform the data, do a couple of simple checks and then store them in our database. The size of the XML data was huge and we expected it to get gigantic in the very near future, this thing needed to be fast. Really fast. What he came up with, a week later, can only be described as an over-engineered piece of crap (honest assessment). It worked, flawlessly, for some smallish XML files I've given him to test the data transformations on, but when hooked up to the actual webservice it just couldn't cope, stuff started crashing at random, as they typically do when a background process faces performance issues. When I reviewed his code I found out, to my horror, that he had completely ignored me, the code was full of... design patterns! Poor noob had beautifully scribed an almost academic example of the strategy pattern for what was essentially a smallish switch statement , amongst other things. Now, don't get me wrong, he applying patterns wasn't the root cause of the performance issues we were facing, however he had spent all week trying to write the "best" code possible when he should have quickly hacked the code in a day or two, and then spend the rest of the time stress testing the damn thing and identifying bottlenecks. And now I had to dig through thousands of uneccessary lines of code to even start getting an idea what was going on. No chance of me wasting my time like that, just scrapped the whole thing and wrote it from scratch in a few hours. I discussed his code with him, and my conclusions are: He had just discovered some of the concepts he applied, and was too eager to see them in action, and He, unknowingly, wanted to impress (not me, but the team in general). Enthusiasm is great, rushing to apply concepts you might not yet fully understand in your code is not. Even if you do fully understand them, more often than not you don't actually need them. 
One of the guiding principles of Extreme Programming is "You ain't gonna need it" ( YAGNI ), and it's a principle that I always try to pass to newer developers. Ron Jeffries, one of the founders of XP, explains : Often you will be building some class and you’ll hear yourself saying "We’re going to need...". Resist that impulse, every time. Always implement things when you actually need them, never when you just foresee that you need them. Here’s why: Your thoughts have gone off track. You’re thinking about what the class might be, rather than what it must be. You were on a mission when you started building that class. Keep on that mission rather than let yourself be distracted for even a moment. Your time is precious. Hone your sense of progress to focus on the real task, not just on banging out code. You might not need it after all. If that happens, the time you spend implementing the method will be wasted; the time everyone else spends reading it will be wasted; the space it takes up will be wasted. You find that you need a getter for some instance variable. Fine, write it. Don’t write the setter because "we’re going to need it". Don’t write getters for other instance variables because "we’re going to need them". The best way to implement code quickly is to implement less of it. The best way to have fewer bugs is to implement less code. You’re not gonna need it! Several of my answers on Programmers advocate for best practices, however I hope I haven't mislead anyone into thinking that applying those best practices is a goal in itself. The goal is to get things done, done is better than perfect . Don't just blindly follow dogma, without a deep understanding of the concepts and practices, and an itch to scratch you are just setting yourself for a world of trouble. Keep it simple, stupid! Further reading: Where do you draw the line for your perfectionism? The quest for perfection Why can’t perfectionists break the habit? The Done Manifesto Programming, Motherfucker
{ "source": [ "https://softwareengineering.stackexchange.com/questions/170731", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/27998/" ] }
170,740
I very often hear the following: "If you want to test private methods, you'd better put that in another class and expose it." While sometimes that's the case and we have a hidden concept inside our class, other times you end up with classes that have the same attributes (or, worse, every attribute of one class becomes an argument on a method in the other class) and that expose functionality which is, in fact, an implementation detail. Especially in TDD, when you refactor a class with public methods out of a previously tested class, that new class is now part of your interface, but has no tests of its own (since you refactored it, and it is an implementation detail). Now, I may not be finding an obvious better answer, but if my answer is the "correct" one, that means that sometimes writing unit tests can break encapsulation and divide the same responsibility into different classes. A simple example would be testing a setter method when a getter is not actually needed for anything in the real code. Please, when answering, don't provide simple answers to specific cases I may have written. Rather, try to explain more of the generic case and the theoretical approach. And this is not language-specific. Thanks in advance. EDIT: The answer given by Matthew Flynn was really insightful, but it didn't quite answer the question. Although he made the fair point that you either don't test private methods or you extract them because they really are another concern and responsibility (or at least that was what I could understand from his answer), I think there are situations where unit testing private methods is useful. My primary example is when you have a class that has one responsibility but the output (or input) that it gives (takes) is just too complex. For example, a hashing function. There's no good way to break a hashing function apart and maintain cohesion and encapsulation. However, testing a hashing function can be really tough, since you would need to calculate the hash by hand (you can't use code calculation to test code calculation!) and test multiple cases where the hash changes. In that way (and this may be a question worthy of its own topic) I think private method testing is the best way to handle it. Now, I'm not sure if I should ask another question, or ask it here, but is there any better way to test such complex output (input)? OBS: Please, if you think I should ask another question on that topic, leave a comment. :) EDIT2: I accepted an answer as correct because it made me think and decide my course of action, although it didn't answer my question completely. But for those who face the same problem as I do (one cohesive class that will change together, but is still too hard to test by itself), I'll tell you what I did and why. I decided that the output of that class is simply too hard for a computer to test correctly, so I didn't test it. I could have used a framework to test its private methods (which would be the best idea, I think), but I didn't want to get to that point. If you are wondering what it is that is cohesive and respects the SRP, and is still too hard for a computer to test, I'll give some examples: heightmap generation, hashing functions, procedural music generation (you can test some units, but the highest-level unit is simply too subjective).
When I was younger and raised this question, I was told that I really don't need to write unit tests for private methods. Surprisingly, this turned out to be true. If you take a behavior-driven approach to unit tests, this makes perfect sense: your public methods are what are being asked to do stuff. The fact that they call internal private methods is orthogonal to their outward behavior. Because the other method is private/encapsulated, it really is part of the unit being tested. You may ask: if multiple methods in your class all call a single private method, shouldn't you be testing that that one method works? The answer here is yes, but that is demonstrated not by a unit test of the private method, but rather by the unit tests of the public methods calling it. If the public methods work, and they call a private method, then the private method must also work.
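As a concrete sketch of that idea, here is a hedged C# example using NUnit (the PriceCalculator class, its discount rule, and the test names are all invented for illustration): the tests never mention the private helper, yet both of its branches are exercised through the public method.

```csharp
using NUnit.Framework;

// Hypothetical production class: only Total() is public.
public class PriceCalculator
{
    public decimal Total(decimal subtotal, int itemCount)
    {
        return ApplyBulkDiscount(subtotal, itemCount);
    }

    // Private helper: 10% off for orders of 10 items or more.
    private decimal ApplyBulkDiscount(decimal subtotal, int itemCount)
    {
        return itemCount >= 10 ? subtotal * 0.90m : subtotal;
    }
}

[TestFixture]
public class PriceCalculatorTests
{
    // The tests only talk to the public surface...
    [Test]
    public void Total_AppliesDiscount_ForBulkOrders()
    {
        var calc = new PriceCalculator();
        Assert.AreEqual(90m, calc.Total(100m, 10));
    }

    [Test]
    public void Total_DoesNotDiscount_SmallOrders()
    {
        var calc = new PriceCalculator();
        Assert.AreEqual(100m, calc.Total(100m, 3));
    }
    // ...yet both branches of the private ApplyBulkDiscount() are covered.
}
```

If a private helper ever becomes so complex that it seems to demand direct tests of its own, that is usually a hint that it is a separate responsibility waiting to be extracted into its own class.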
{ "source": [ "https://softwareengineering.stackexchange.com/questions/170740", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/69500/" ] }
170,808
Some colleagues of mine have told me that having business logic in stored procedures in the database violates the three-tier separation architecture, since the database belongs to the data layer whereas stored procedures are business logic. I think the world would be a very grim place without stored procedures. Do they really violate the three-tier separation?
Your colleagues are conflating architecture with implementation. The idea behind a multi-tiered application is simply that it's broken up into parts that encapsulate certain kinds of processing (storage, business logic, presentation) and communicate with each other using well-defined interfaces. Just as it's possible to successfully do things that resemble object-oriented programming in non-object-oriented languages, it's possible to do the same with multiple tiers within one environment, such as a database server. What doing either of those successfully has in common is a need for care, discipline and an understanding of the compromises involved. Let's look at a three-tiered application where two of the tiers have been implemented on a database: Data Tier: Consists of database tables accessed using the four standard table operations (INSERT, UPDATE, DELETE and SELECT). Logic Tier: Consists of stored procedures that implement only business logic and access the data tier using only the methods outlined above. Presentation Tier: Consists of a web server running code that accesses the logic tier by making only stored procedure calls. This is a perfectly acceptable model, but it comes with some tradeoffs. The business logic is implemented in a way that gives it fast, easy access to the data tier and may allow doing things that would have to be done "the hard way" by a logic tier outside the database. What you give up are the ability to easily move either tier to some other bit of technology and carefree implementation (i.e., you have to be extra careful that the tiers don't use facilities that are available in the database but outside their defined interfaces). Whether or not this kind of thing and the tradeoffs it brings are acceptable in a given situation is something you and your colleagues have to determine using your judgment.
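For concreteness, here is a rough C# sketch of the presentation tier's side of that arrangement, using classic ADO.NET; the stored procedure name, its parameter, and the connection string are assumptions made up for this example, and error handling is left out.

```csharp
using System.Data;
using System.Data.SqlClient;

public class OrderService
{
    private readonly string _connectionString;

    public OrderService(string connectionString)
    {
        _connectionString = connectionString;
    }

    // The presentation tier never touches tables directly; it only calls
    // into the logic tier, which is implemented as stored procedures.
    public void ApproveOrder(int orderId)
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand("usp_ApproveOrder", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.AddWithValue("@OrderId", orderId);

            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}
```

Note that nothing technical stops this code from bypassing the stored procedures and querying the tables directly; keeping the tiers separate is a matter of the care and discipline mentioned above.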
{ "source": [ "https://softwareengineering.stackexchange.com/questions/170808", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/61852/" ] }
170,880
I used to be a fan of requiring XML comments for documentation. I've since changed my mind for two main reasons: Like good code, methods should be self-explanatory. In practice, most XML comments are useless noise that provide no additional value. Many times we simply use GhostDoc to generate generic comments, and this is what I mean by useless noise: /// <summary> /// Gets or sets the unit of measure. /// </summary> /// <value> /// The unit of measure. /// </value> public string UnitOfMeasure { get; set; } To me, that's obvious. Having said that, if there were special instructions to include, then we should absolutely use XML comments. I like this excerpt from this article : Sometimes, you will need to write comments. But, it should be the exception not the rule. Comments should only be used when they are expressing something that cannot be expressed in code. If you want to write elegant code, strive to eliminate comments and instead write self-documenting code. Am I wrong to think we should only be using XML comments when the code isn't enough to explain itself on its own? I believe this is a good example where XML comments make pretty code look ugly. It takes a class like this... public class RawMaterialLabel : EntityBase { public long Id { get; set; } public string ManufacturerId { get; set; } public string PartNumber { get; set; } public string Quantity { get; set; } public string UnitOfMeasure { get; set; } public string LotNumber { get; set; } public string SublotNumber { get; set; } public int LabelSerialNumber { get; set; } public string PurchaseOrderNumber { get; set; } public string PurchaseOrderLineNumber { get; set; } public DateTime ManufacturingDate { get; set; } public string LastModifiedUser { get; set; } public DateTime LastModifiedTime { get; set; } public Binary VersionNumber { get; set; } public ICollection<LotEquipmentScan> LotEquipmentScans { get; private set; } } ... And turns it into this: /// <summary> /// Container for properties of a raw material label /// </summary> public class RawMaterialLabel : EntityBase { /// <summary> /// Gets or sets the id. /// </summary> /// <value> /// The id. /// </value> public long Id { get; set; } /// <summary> /// Gets or sets the manufacturer id. /// </summary> /// <value> /// The manufacturer id. /// </value> public string ManufacturerId { get; set; } /// <summary> /// Gets or sets the part number. /// </summary> /// <value> /// The part number. /// </value> public string PartNumber { get; set; } /// <summary> /// Gets or sets the quantity. /// </summary> /// <value> /// The quantity. /// </value> public string Quantity { get; set; } /// <summary> /// Gets or sets the unit of measure. /// </summary> /// <value> /// The unit of measure. /// </value> public string UnitOfMeasure { get; set; } /// <summary> /// Gets or sets the lot number. /// </summary> /// <value> /// The lot number. /// </value> public string LotNumber { get; set; } /// <summary> /// Gets or sets the sublot number. /// </summary> /// <value> /// The sublot number. /// </value> public string SublotNumber { get; set; } /// <summary> /// Gets or sets the label serial number. /// </summary> /// <value> /// The label serial number. /// </value> public int LabelSerialNumber { get; set; } /// <summary> /// Gets or sets the purchase order number. /// </summary> /// <value> /// The purchase order number. /// </value> public string PurchaseOrderNumber { get; set; } /// <summary> /// Gets or sets the purchase order line number. 
/// </summary> /// <value> /// The purchase order line number. /// </value> public string PurchaseOrderLineNumber { get; set; } /// <summary> /// Gets or sets the manufacturing date. /// </summary> /// <value> /// The manufacturing date. /// </value> public DateTime ManufacturingDate { get; set; } /// <summary> /// Gets or sets the last modified user. /// </summary> /// <value> /// The last modified user. /// </value> public string LastModifiedUser { get; set; } /// <summary> /// Gets or sets the last modified time. /// </summary> /// <value> /// The last modified time. /// </value> public DateTime LastModifiedTime { get; set; } /// <summary> /// Gets or sets the version number. /// </summary> /// <value> /// The version number. /// </value> public Binary VersionNumber { get; set; } /// <summary> /// Gets the lot equipment scans. /// </summary> /// <value> /// The lot equipment scans. /// </value> public ICollection<LotEquipmentScan> LotEquipmentScans { get; private set; } }
If your comments only look like this: /// <summary> /// Gets or sets the sublot number. /// </summary> /// <value> /// The sublot number. /// </value> Then yes, they are not all that useful. If they read something like this: /// <summary> /// Gets or sets the sublot number. /// Note that the sublot number is only used by the legacy inventory system. /// Latest version of the online inventory system does not use this, so you can leave it null. /// Some vendors require it but if you don't set it they'll send a request for it specifically. /// </summary> /// <value> /// The sublot number. /// </value> Then I'd say they have value. So to answer your question: Comments are necessary when they say something that the code does not say. An exception: it is good to have comments on anything that is publicly accessible if you're writing a library/API that will be available to the public. I hate using a library and seeing a function named getAPCDGFSocket() with no explanation of what an APCDGFSocket is (I'd be happy with something as simple as This gets the Async Process Coordinator Data Generator File Socket ). So in that case, I'd say use some tool to generate all comments and then manually tweak the ones that need it (and please make sure your cryptic acronyms are explained). Also, getters/setters are generally bad examples for "are comments necessary?" because they are usually quite obvious and comments aren't necessary. Comments are more important on functions that perform some algorithm where some explanation of why things are being done they way they are could make the code much more understandable and also make it easier for future programmers to work with. ...and finally, I'm pretty sure that this question is relevant for all styles of comments, not just those that are formatted using XML (which you are using because you're working in a .NET environment).
{ "source": [ "https://softwareengineering.stackexchange.com/questions/170880", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/29526/" ] }
170,935
All this started when I was looking for a way to test my webpage for JavaScript conformance, like the W3C HTML Validator. I have not found one yet, so let me know if you know of any... I looked for the official JavaScript page and found ECMAScript. These people have standardized a scripting language (I do not feel like calling it JavaScript anymore!) and called it ECMA-262 (Wikipedia). Their latest work is Edition 5.1. JavaScript was developed by Mozilla Corporation and their last stable version is 1.8.5 (see this), which is based on ECMA's Edition 5.1. The Wikipedia page linked above mentions dialects. Mozilla's JavaScript 1.8.5 is listed as a dialect along with JScript 9 (IE) and JavaScript (Chrome's V8 [Wiki]) and a lot of others. Am I to understand that JavaScript 1.8.5 is a derivative of ECMA-262 and SpiderMonkey [Wiki] is the engine that runs it? And Chrome has its own dialect, and the V8 engine is the program that runs it? With all these dialects based off ECMA-262, what I can no longer understand is "What is JavaScript"? Are there any truly cross-browser scripting languages? Do the various implementers come together to agree on dialect cross-compatibility? Is this effort ECMA?
Pretty much all mature languages are defined by a specification, and compilers or interpreters attempt to follow the standard defined in that specification. But very rarely do they succeed, unless the standard is defined by the author of the language. You can find the C++ 2003 standard, the C# 4 specification, the Java 7 specification and many more online. Many of these have ECMA or ISO standardisation numbers. These are just organisations with which you can register a standard and make it more official. Ruby has historically done things a little differently, having an executable set of tests as a specification. So, if you want to write an interpreter and call it standard Ruby, you just have to create an interpreter that passes all of those tests. But even Ruby is likely to get a more formal specification eventually. Javascript is no different, except possibly in the way it has evolved. Javascript was first created by Netscape. They called it LiveScript, but it looked similar to Java and they cut a deal with Sun over the name, which benefited both the marketing of Netscape and Java. Microsoft had VBScript and (for reasons probably lost to conjecture) basically copied Javascript, but the name was owned by Sun, so they cheekily called it JScript. But JScript, while being very similar to Javascript in syntax, made a lot of use of COM; for example, IE5 and 6 instantiate an XMLHttpRequest object using new ActiveXObject("Microsoft.XMLHTTP"). And so, parallel, similar but also different "dialects" of Javascript were born. Over time, various groups owning browsers with less market share than IE have tried to standardise the language, and for years Microsoft resisted. Until V8. V8 was fast. It set a whole new market standard. It made everything else look poor. And, through various antitrust cases against Microsoft, IE was losing market share. Suddenly, it was in Microsoft's interest to support standardisation. We're not there yet, but it's on the right track. Meanwhile, V8 was open-source, which allowed people to start thinking up new uses for a fast Javascript parser, such as Node.JS. But, to go back to your question: What is Javascript? It's the common (and original) name for ECMAScript, a specification for a prototype-based language commonly, but not exclusively, used for navigating and manipulating the document object model in a browser. ECMA-262 is just the standard definition, like ECMA-334 is the standard definition for C#.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/170935", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/27217/" ] }
170,939
I've been programming in C++ for a while now, but mostly things centered around the low-level features of C++. By that I mean mostly working with pointers and raw arrays. I think this behavior is known as using C++ as C with classes. Despite this, I only tried C for the first time recently. I was pleasantly surprised by how languages like C# and Java hide these details away in convenient standard library classes like Dictionaries and Lists. I'm aware that the C++ standard library has many containers like vectors, maps and strings as well, and C++11 only adds to this with std::array and range-based for loops. How do I best learn to make use of these modern language features, and which are suitable for which situations? Is it correct that software engineering in C++ nowadays is mostly free of manual memory management? Lastly, which compiler should I use to make the most of the new standard? Visual Studio has excellent debugging tools, but even VS2012 seems to have terrible C++11 support.
First, some rules of thumb: Use std::unique_ptr as a no-overhead smart pointer. You shouldn’t need to bother with raw pointers all that often. std::shared_ptr is likewise unnecessary in most cases. A desire for shared ownership often betrays a lack of thought about ownership in the first place. Use std::array for static-length arrays and std::vector for dynamic. Use generic algorithms extensively, in particular: <algorithm> <numeric> <iterator> <functional> Use auto and decltype() wherever they benefit readability. In particular, when you want to declare a thing, but of a type that you don’t care about such as an iterator or complex template type, use auto . When you want to declare a thing in terms of the type of another thing, use decltype() . Make things type-safe when you can. When you have assertions that enforce invariants on a particular kind of thing, that logic can be centralised in a type. And this doesn’t necessarily make for any runtime overhead. It should also go without saying that C-style casts ( (T)x ) should be avoided in favour of the more explicit (and searchable!) C++-style casts (e.g., static_cast ). Finally, know how the rule of three: Destructor Copy constructor Assignment operator Has become the rule of five with the addition of the move constructor and move assignment operator. And understand rvalue references in general and how to avoid copying. C++ is a complex language, so it’s difficult to characterise how best to use all of it. But the practices of good C++ development haven’t changed fundamentally with C++11. You should still prefer memory-managed containers over manual memory management—smart pointers make it easy to efficiently do this. I would say that modern C++ is indeed mostly free of manual memory management—the advantage to C++’s memory model is that it’s deterministic , not that it’s manual. Predictable deallocations make for more predictable performance. As for a compiler, G++ and Clang are both competitive in terms of C++11 features, and rapidly catching up on their deficiencies. I don’t use Visual Studio, so I can speak neither for nor against it. Finally, a note about std::for_each : avoid it in general. transform , accumulate , and erase – remove_if are good old functional map , fold , and filter . But for_each is more general, and therefore less meaningful—it doesn’t express any intent other than looping. Besides that, it’s used in the same situations as range-based for , and is syntactically heavier, even when used point-free. Consider: for (const auto i : container) std::cout << i << '\n'; std::for_each(container.begin(), container.end(), [](int i) { std::cout << i << '\n'; }); for (const auto i : container) frobnicate(i); std::for_each(container.begin(), container.end(), frobnicate);
{ "source": [ "https://softwareengineering.stackexchange.com/questions/170939", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/57949/" ] }
170,969
Today my lecturer claimed that the reason aircraft systems are programmed in assembly language is that programs written in it have fewer errors. Is this statement true? When he asked for our opinion, I said assembly can produce faster programs, so it is a good language for a real-time oriented aircraft system. I searched around Google but can't seem to find an article clarifying my lecturer's statement.
Your lecturer's statement is provably false. The Joint Strike Fighter 's control code is written in C++. The 777 from Boeing uses 99%+ ADA . The JPL uses mostly C to drive spaceships. I'm sure there are more examples but I suspect many are proprietary or classified. Here is a paper that goes into some detail on the subject of testing avionics software on a more general level.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/170969", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/60276/" ] }
171,008
Cloud computing is a model of renting resources - servers and data storage. Both servers and data storage have been around for much more than a decade so far. Yet cloud computing offerings only appeared several years ago. What's the deal here? What was the critical change that triggered the massive adoption and massive marketing of cloud computing offerings?
It has appeared earlier. In fact, this was the original model of getting access to computing resources back in the 1950s till well into the 1980s, when it was called "time sharing", then in the early 1990s it re-appeared under the name "Client/Server", then in the late 1990s again under the name "Thin Client", then "Application Service Provider". However, in the exact form we see it today it requires high quality, high reliability, high throughput, low latency, low price, ubiquitous Internet access, which didn't exist until a few years ago, and in fact, still doesn't exist for the vast majority of people (e.g. almost all of Africa, much of Asia, parts of Eastern Europe and South America).
{ "source": [ "https://softwareengineering.stackexchange.com/questions/171008", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/587/" ] }
171,024
It's an idea I've heard repeated in a handful of places, some of them more or less acknowledging that once trying to solve a problem purely in SQL exceeds a certain level of complexity, you should indeed be handling it in code. The logic behind the idea is that, for the large majority of cases, the database engine will do a better job at finding the most efficient way of completing your task than you could in code, especially when it comes to things like making the results conditional on operations performed on the data. Arguably, with modern engines effectively JIT'ing and caching the compiled version of your query, it'd make sense on the surface. The question is whether or not leveraging your database engine in this way is inherently bad design practice (and why). The lines become blurred further when all the logic exists inside the database and you're just hitting it via an ORM.
In layman's words: These are things that SQL is made to do and, believe it or not, I've seen done in code: joins - codewise it'd require complex array manipulation; filtering data (where) - codewise it'd require heavy inserting and deleting of items in lists; selecting columns - codewise it'd require heavy list or array manipulation; aggregate functions - codewise it'd require arrays to hold values and complex switch cases; foreign key integrity - codewise it'd require queries prior to insert and assumes nobody will use the data outside the app; primary key integrity - codewise it'd require queries prior to insert and assumes nobody will use the data outside the app. Doing these things instead of relying on SQL or the RDBMS leads to writing tons of code with no added value, meaning more code to debug and maintain. And it dangerously assumes the database will only be accessed via the application.
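As a rough illustration of the first two points, here is a hedged C# sketch (the Customer/Order types, the report, and the query are invented for this example): the single SQL statement in the comment does declaratively what the code below it has to do by hand with a dictionary and a loop.

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical types, made up purely for this example.
public class Customer { public int Id { get; set; } public string Country { get; set; } public string Name { get; set; } }
public class Order { public int Id { get; set; } public int CustomerId { get; set; } public decimal Total { get; set; } }
public class ReportRow { public string CustomerName { get; set; } public decimal OrderTotal { get; set; } }

public static class ReportExample
{
    // What the database would be asked to do, in one declarative statement:
    //
    //   SELECT c.Name, o.Total
    //   FROM Orders o
    //   JOIN Customers c ON c.Id = o.CustomerId
    //   WHERE c.Country = 'DE' AND o.Total > 100;
    //
    // The equivalent "join and filter it yourself" version, once both tables
    // have already been pulled into application memory:
    public static List<ReportRow> BigGermanOrders(List<Customer> customers, List<Order> orders)
    {
        // Manual hash join: index one side by its key...
        var germanCustomersById = customers
            .Where(c => c.Country == "DE")
            .ToDictionary(c => c.Id);

        // ...then probe it row by row, re-implementing the WHERE clause by hand.
        var result = new List<ReportRow>();
        foreach (var order in orders)
        {
            Customer customer;
            if (order.Total > 100m && germanCustomersById.TryGetValue(order.CustomerId, out customer))
            {
                result.Add(new ReportRow { CustomerName = customer.Name, OrderTotal = order.Total });
            }
        }
        return result;
    }
}
```

And this in-memory version still had to ship every row of both tables across the wire first, which is precisely the kind of wasted effort the answer is warning about.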
{ "source": [ "https://softwareengineering.stackexchange.com/questions/171024", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/69733/" ] }
171,038
So, I am having to deal with a seemingly archaic language (called PowerOn) where I have a main method, a few datatypes to define variables with, and the ability to have sub-procedures (essentially void methods) that do not return a type nor accept any arguments. The problem here is that EVERYTHING is global. I've read about these types of languages, but most books take the approach "OK, we used to use a horse and carriage, but now here's a car, so let's learn how to work on THAT! We will NEVER relive those days." I have to admit, my mind is struggling to think outside of scope and extent. Well, here I am. I am trying to figure out how to best manage nothing but global variables across several open methods. Yep, even iterators for for loops have to be defined globally, and I find myself recycling them in different parts of my code. My question: for those who have this type of experience, how did programmers deal with a large number of variables in a global playing field? I have a feeling it just became a mental juggling trick, but I would be interested to know if there were any known approaches.
You'll need some kind of mental bookkeeping tricks (naming conventions, etc) in order to keep it straight. Also, document, document, document. Since all variables are globals, have a single document with all of them listed, if you can. Try to have a small number of variables that you always use for temporaries, and remember that THEY ARE TEMPORARY. By constantly re-using the same ones, you'll get into the habit of keeping track of where they are valid or not. Also, you want to look at the documentation and make sure you know how long variable names can be, and how many characters are actually unique. I know NOTHING about PowerOn, but if it's archaic enough to have only global scope, then it's possible that it's got a limited uniqueness length on identifiers. I've seen things before with long identifiers, but whose identifiers were only unique in the first 8 characters. So you could have RonnyRayGun and RonnyRayBlaster and they are actually the SAME variable. In such cases I recommend keeping variable names under the 'unique' limit so that you're less likely to accidentally collide.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/171038", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/45101/" ] }
171,066
I have some projects that are in a very early development state. They are nowhere near completion, but I do host them (as public repos) on GitHub because: I have multiple computers and I want access to my code everywhere; I want a backup for my code; I want it to be easy if someone wants to collaborate in some way; and I use GitHub Issues as a poor man's project management software. Is it OK to publish a project on GitHub even when it is very early in development? I am a bit concerned that someone will come by, look at the unpolished, still-in-development, untested code, and say "OMG this is total BS, this code is so bad!" What are your practices when you start new public projects? Do you wait until you have something substantial to show, or do you create a bare repo directly on GitHub and start from there? I used GitHub throughout this post, but this applies to every code hosting service out there.
Of course it is OK: it is hard to imagine that over 4,098,118 projects currently hosted on GitHub would all be 100% great and useful! You are not forcing anyone to use your code or even to look at it. If you host the project primarily for yourself, the quality of your code is of concern to you, and nobody else. You listed all the right reasons to host your project - backups, universal access, and possibility of collaboration with others are great reasons to start hosting as early as possible.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/171066", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/38148/" ] }
171,127
I'm really curious right now. I'm a Python programmer, and this question just boggled me: You write an OS. How do you run it? It has to be run somehow, so is that way within another OS? How can an application run without being in an OS? How do you tell the computer to run, say, C code and send output to the screen, if it doesn't have an OS to run in? Does it have to do with a UNIX kernel? If so, what is a Unix kernel, or a kernel in general? I'm sure OSes are more complicated than that, but how does it work?
There are plenty of websites that go through the boot process (such as How Computers Boot Up). In a nutshell, it's a multi-stage process that keeps building up the system a little bit at a time until it can finally start the OS processes. It starts with the firmware on the motherboard, which tries to get the CPU up and running. It then loads up the BIOS, which is like a mini operating system that gets the other hardware up and running. Once that is done, it looks for a boot device (disk, CD, etc.) and, once found, it locates the MBR (master boot record), loads it into memory and executes it. It's this little piece of code that then knows how to initialize and start the operating system (or other boot loaders, as things have gotten more complicated). It's at this point that things like the kernel would be loaded and start running. It's pretty incredible that it works at all!
{ "source": [ "https://softwareengineering.stackexchange.com/questions/171127", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/69791/" ] }
171,129
On November 13, 2006, Sun released much of Java as free and open source software (FOSS), under the terms of the GNU General Public License (GPL). On May 8, 2007, Sun finished the process, making all of Java’s core code available under free software/open-source distribution terms, aside from a small portion of code to which Sun did not hold the copyright. OpenJDK (Open Java Development Kit) is a free and open source implementation of the Java programming language. It is the result of an effort Sun Microsystems began in 2006. The implementation is licensed under the GNU General Public License (GNU GPL) with a linking exception. Why are there still people who say that Java is not open source or free as in free speech? Am I missing something? Is Java still proprietary?
The problem is that in order to call something "Java" you need to get it certified as compliant with the Java spec. One of the prerequisites of getting this certification is running your JVM through a test suite, the Java Technology Compatibility Kit (TCK). This test suite is NOT open sourced. So you can build a JVM that behaves in a very Java-like way and is completely open source, but if you want to call it a "Java JVM" you need to buy the certification suite under a non-open-source license. To many open source advocates this is a complete non-starter.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/171129", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/61852/" ] }