Dataset schema: source_id (int64, 1 to 4.64M), question (string, length 0 to 28.4k), response (string, length 0 to 28.8k), metadata (dict)
Obviously, if management buy into spending time on code reviews, then everyone has to do it. But there are always those guys (or gals) who resist with every ounce of their being. How do you effectively handle this scenario as the peer reviewer?
He resists because of fear. This conditioning may be the result of previous bad experiences of being reviewed - as a kid, at school, at work, or even in your current team. In our modern societies, it's very common to confuse someone's work output with their value as a human being. That's why reviews at work are not well received. That's also why public speaking is one of the most widespread phobias (fear of judgement). To get past this behavior, you will need some psychology. You must prove to his lizard brain that nothing bad is going to happen (he won't be judged, humiliated, killed, anything...) by desensitizing him to code reviews. One of the most effective methods I have found to unblock someone is to ask him to review your code before asking to review his. After a while, propose reading his code together to learn from it, and perhaps to suggest improvements. When you do find something to change, be careful in what you write. He will understand there is nothing to be afraid of, and he will take away only the positive part of the reviewing process: learning and increasing his knowledge.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/39855", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/14363/" ] }
39,865
I love prototyping as a fast, effective way to put a UI in front of a user. Many times, though, management get their beaks in the way, and the prototype is dragged kicking and screaming into mainstream development. How do you persuade management not to do this?
Use a tool like Microsoft SketchFlow, or make your prototype in some other language or platform, making it nearly impossible to integrate into main development. There's also a Joel on Software essay about showing screenshots and prototypes, where he makes unimplemented and unfinished aspects appear obviously broken, making it clear where work still needs to be done:

Important Corollary Two. If you show a nonprogrammer a screen which has a user interface which is 100% beautiful, they will think the program is almost done. ... What can you do about this? Once you understand the Iceberg Secret, it's easy to work with it. Understand that any demos you do in a darkened room with a projector are going to be all about pixels. If you can, build your UI in such a way that unfinished parts look unfinished. For example, use scrawls for the icons on the toolbar until the functionality is there. As you're building your web service, you may want to consider actually leaving out features from the home page until those features are built. That way people can watch the home page go from 3 commands to 20 commands as more things get built.

So, try making your prototypes in Photoshop instead of Visual Studio, or something along those lines.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/39865", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/14363/" ] }
40,100
An executive at my workplace asked me and my group of developers the question: how many lines of code can a C# developer produce per month? An old system was to be ported to C#, and he would like this measure as part of the project planning. From some (apparently credible) source he had the answer of "10 SLOC/month", but he was not happy with that. The group agreed that this was nearly impossible to specify, because it would depend on a long list of circumstances. But we could tell that the man would not leave (or would be very disappointed in us) if we did not come up with an answer that suited him better. So he left with the many-times-better answer of "10 SLOC/day". Can this question be answered? (Offhand, or even with some analysis?)
Ask your executive how many pages of contract his lawyer can write per month. Then (hopefully) he will realize that there's a huge difference between writing a single-page contract and writing a 300-page contract without loopholes and contradictions. Or between writing a new contract and changing an existing one. Or between writing a new contract and translating it to a different language. Or to a different legal system. Maybe he'll even agree that "pages of contract per unit of time" is not a very good measure of lawyer productivity. But to give you some answer to your real question: in my experience, for a new project, a few hundred SLOC per day per developer isn't uncommon. But as soon as the first bugs appear, this number will drop sharply.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/40100", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/14475/" ] }
40,172
I often encounter this when I am helping out someone who is new to programming and learning it for the first time. I'm talking about really new newbies, still learning about OOness, constructing objects, method calls and stuff like that. Usually, they have the keyboard and I am just offering guidance. On the one hand, the autocomplete feature of the IDEs helps to give them feedback that they are doing it right, and they quickly get to like and rely on it. On the other hand, I fear that early dependence on the IDE autocomplete would make them not really understand the concepts, or be unable to function if they one day find themselves with only a simple editor. Can anyone with more experience in this regard please share their opinion? Which is better for a newbie, autocomplete or manual typing?

Update: Thanks for the input everyone! Many answers seem to focus on the main use of autocomplete, like completing methods, providing method lookup and documentation, etc. But IDEs nowadays do a lot more, such as:

- When creating an object of List type, the IDE autocompletes to new ArrayList on the right-hand side. It may not be immediately clear to a newbie why it cannot be new List, but hey, it works, so they move on.
- Filling in method parameters based on local variables in context.
- Performing object casts.
- Automatically adding 'import' or 'using' statements.

And much more. These are the kinds of things I mean. Remember, I'm talking about people who are doing Programming 101, really just starting. I have watched the IDE do these things which they have no idea about, but they just carry on. One could argue that it helps them focus on program flow and getting the hang of things first, before going in-depth and understanding the nuances of the language, but I'm not sure.
Understanding the concepts and memorising hundreds of stupid library classes and methods are two completely different things. IntelliSense helps to kick all that useless knowledge out of your mind completely, and the earlier you do it, the better. Leave more space for the useful concepts; don't waste your limited resources on APIs. To answer the updated portion of the question: little syntax details, file layout, and compiler and linker invocation are also unimportant compared to the general programming concepts. Once those are understood, a newbie-no-more can go on to a deeper understanding of how the low-level stuff actually works. It is better to do that when you already know the basics; otherwise chances are you'll pick up a number of dangerous magical superstitions. For example, the DrScheme IDE has a great track record in teaching programming, and its success is mainly due to its ability to help students concentrate on what is really important.
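To make the List/ArrayList completion mentioned in the question concrete, here is a minimal Java sketch (an illustration added here, not part of the original answer): List is an interface, so it cannot be instantiated directly, which is why the IDE completes to a concrete implementation such as ArrayList.

import java.util.ArrayList;
import java.util.List;

public class AutocompleteExample {
    public static void main(String[] args) {
        // "new List<String>()" does not compile: List is an interface.
        // The IDE therefore suggests a concrete class such as ArrayList.
        List<String> names = new ArrayList<>();
        names.add("Ada");
        names.add("Grace");

        // Declaring the variable as List (the interface) keeps the code
        // flexible: the implementation can later be swapped for, say,
        // LinkedList without changing any calling code.
        System.out.println(names);
    }
}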
{ "source": [ "https://softwareengineering.stackexchange.com/questions/40172", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/1728/" ] }
40,216
Recently I've been working on projects that heavily use threading. I think that I'm OK at designing them; use stateless design as much as possible, lock access to all resources that more than one thread needs, etc. My experience in functional programming has helped that immensely. However, when reading other people's thread code, I get confused. I am debugging a deadlock right now, and since the coding style and design are different from my personal style, I am having a difficult time seeing potential deadlock conditions. What do you look for when debugging deadlocks?
If the situation is a real deadlock (i.e. two threads hold two different locks, but at least one thread wants a lock the other thread holds) then you need to first abandon all pre-conceptions of how the threads order locking. Assume nothing. You may want to remove all comments from the code you're looking at, as those comments may cause you to believe something that doesn't hold true. It's hard to emphasize this enough: assume nothing. After that, determine what locks get held while a thread attempts to lock something else. If you can, ensure that a thread unlocks in reverse order from locking. Even better, ensure that a thread holds only one lock at a time. Painstakingly work through a thread's execution, and examine all locking events. At each lock, determine whether a thread holds other locks, and if so, under what circumstances another thread, doing a similar execution path, can get to the locking event under consideration. It's certainly possible you will not find the problem before you run out of time or money.
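To illustrate the lock-ordering advice above, here is a minimal Java sketch (the Account class and the ordering-by-identity-hash trick are illustrative assumptions, not code from the original answer). If every thread acquires the two locks in the same global order, the classic two-lock deadlock cannot occur:

public class Transfer {
    // A deadlock-prone version would simply lock "from" and then "to":
    // two threads transferring in opposite directions could each hold one
    // lock and wait forever for the other.
    //
    // This version imposes a single global order (here by identity hash,
    // a simplification that ignores rare hash ties), so both threads
    // acquire the locks in the same order and cannot deadlock each other.
    public static void transfer(Account from, Account to, long amount) {
        Account first = System.identityHashCode(from) <= System.identityHashCode(to) ? from : to;
        Account second = (first == from) ? to : from;
        synchronized (first) {
            synchronized (second) {
                from.withdraw(amount);
                to.deposit(amount);
            }
        }
    }
}

class Account {
    private long balance;
    synchronized void withdraw(long amount) { balance -= amount; }
    synchronized void deposit(long amount) { balance += amount; }
}

In the spirit of the answer's advice, the nested block is the only place where both locks are held, and the scope holding them is kept as small as possible.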
{ "source": [ "https://softwareengineering.stackexchange.com/questions/40216", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/6415/" ] }
40,297
I haven't clearly understood the concept of side effects. What is a side effect in programming? Is it programming-language dependent? Is there such a thing as external and internal side effects? Please give some examples of causes that create side effects.
A side effect refers simply to the modification of some kind of state - for instance: Changing the value of a variable; Writing some data to disk; Enabling or disabling a button in the User Interface. Contrary to what some people seem to be saying: A side effect does not have to be hidden or unexpected (it can be, but that has nothing to do with the definition as it applies to computer science); A side effect has nothing to do with idempotency. An idempotent function can have side effects, and a non-idempotent function may have no side effects (such as getting the current system date and time). It's really very simple. Side effect = changing something somewhere. P.S. As commenter benjol points out, several people may be conflating the definition of a side effect with the definition of a pure function , which is a function that is (a) idempotent and (b) has no side-effects. One does not imply the other in general computer science, but functional programming languages will typically tend to enforce both constraints.
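A small Java sketch (an added illustration, not from the original answer) contrasting a function with no side effects and one that has them:

public class SideEffects {
    private static int counter = 0;

    // No side effects: the result depends only on the arguments, and
    // nothing outside the function is modified.
    static int add(int a, int b) {
        return a + b;
    }

    // Side effects: it modifies state outside the function (the counter
    // field) and performs I/O (writing to the console).
    static int addAndCount(int a, int b) {
        counter++;
        System.out.println("adding " + a + " and " + b);
        return a + b;
    }

    public static void main(String[] args) {
        System.out.println(add(2, 3));         // always 5; nothing else changes
        System.out.println(addAndCount(2, 3)); // also 5, but counter and stdout changed
        System.out.println(counter);           // 1
    }
}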
{ "source": [ "https://softwareengineering.stackexchange.com/questions/40297", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/7083/" ] }
40,373
There has been a lot of discussion lately about the problems with using (and overusing) Singletons. I've been one of those people earlier in my career too. I can see what the problem is now, and yet, there are still many cases where I can't see a nice alternative - and not many of the anti-Singleton discussions really provide one. Here is a real example from a major recent project I was involved in: the application was a thick client with many separate screens and components, which used huge amounts of data from a server-side state that isn't updated too often. This data was basically cached in a Singleton "manager" object - the dreaded "global state". The idea was to have this one place in the app which keeps the data stored and synced, and then any new screens that are opened can just query most of what they need from there, without making repetitive requests for various supporting data from the server. Constantly requesting data from the server would take too much bandwidth - and I'm talking thousands of dollars in extra Internet bills per week, so that was unacceptable. Is there any other approach that could be appropriate here than basically having this kind of global data manager cache object? This object doesn't officially have to be a "Singleton" of course, but it does conceptually make sense to be one. What is a nice clean alternative here?
It's important to distinguish here between single instances and the Singleton design pattern. Single instances are simply a reality. Most apps are only designed to work with one configuration at a time, one UI at a time, one file system at a time, and so on. If there's a lot of state or data to be maintained, then certainly you would want to have just one instance and keep it alive as long as possible.

The Singleton design pattern is a very specific type of single instance, specifically one that is:

- Accessible via a global, static instance field;
- Created either on program initialization or upon first access;
- No public constructor (cannot instantiate directly);
- Never explicitly freed (implicitly freed on program termination).

It is because of this specific design choice that the pattern introduces several potential long-term problems:

- Inability to use abstract or interface classes;
- Inability to subclass;
- High coupling across the application (difficult to modify);
- Difficult to test (can't fake/mock in unit tests);
- Difficult to parallelize in the case of mutable state (requires extensive locking);

and so on. None of these symptoms are actually endemic to single instances, just to the Singleton pattern.

What can you do instead? Simply don't use the Singleton pattern. Quoting from the question:

The idea was to have this one place in the app which keeps the data stored and synced, and then any new screens that are opened can just query most of what they need from there, without making repetitive requests for various supporting data from the server. Constantly requesting data from the server would take too much bandwidth - and I'm talking thousands of dollars in extra Internet bills per week, so that was unacceptable.

This concept has a name, as you sort of hint at but sound uncertain of. It's called a cache. If you want to get fancy you can call it an "offline cache" or just an offline copy of remote data. A cache does not need to be a singleton. It may need to be a single instance if you want to avoid fetching the same data for multiple cache instances; but that does not mean you actually have to expose everything to everyone. The first thing I'd do is separate out the different functional areas of the cache into separate interfaces. For example, let's say you were making the world's worst YouTube clone based on Microsoft Access:

                MSAccessCache
                      ▲
                      |
    +-----------------+-----------------+
    |                 |                 |
IMediaCache     IProfileCache      IPageCache
    |                 |                 |
    |                 |                 |
 VideoPage      MyAccountPage    MostPopularPage

Here you have several interfaces describing the specific types of data a particular class might need access to - media, user profiles, and static pages (like the front page). All of that is implemented by one mega-cache, but you design your individual classes to accept the interfaces instead, so they don't care what kind of an instance they have. You initialize the physical instance once, when your program starts, and then just start passing around the instances (cast to a particular interface type) via constructors and public properties. This is called Dependency Injection, by the way; you don't need to use Spring or any special IoC container, just so long as your general class design accepts its dependencies from the caller instead of instantiating them on its own or referencing global state.

Why should you use the interface-based design? Three reasons:

1. It makes the code easier to read; you can clearly understand from the interfaces exactly what data the dependent classes depend on.
2. If and when you realize that Microsoft Access wasn't the best choice for a data back-end, you can replace it with something better - let's say SQL Server.
3. If and when you realize that SQL Server isn't the best choice for media specifically, you can break up your implementation without affecting any other part of the system.

That is where the real power of abstraction comes in.

If you want to take it one step further then you can use an IoC container (DI framework) like Spring (Java) or Unity (.NET). Almost every DI framework will do its own lifetime management and specifically allow you to define a particular service as a single instance (often calling it "singleton", but that's only for familiarity). Basically these frameworks save you most of the monkey work of manually passing around instances, but they are not strictly necessary. You do not need any special tools in order to implement this design.

For the sake of completeness, I should point out that the design above is really not ideal either. When you are dealing with a cache (as you are), you should actually have an entirely separate layer. In other words, a design like this one:

                                  +--IMediaRepository
                                  |
  Cache (Generic)-----------------+--IProfileRepository
        ▲                         |
        |                         +--IPageRepository
  +-----+-----------+-----------------+
  |                 |                 |
IMediaCache   IProfileCache      IPageCache
  |                 |                 |
  |                 |                 |
VideoPage     MyAccountPage    MostPopularPage

The benefit of this is that you never even need to break up your Cache instance if you decide to refactor; you can change how Media is stored simply by feeding it an alternate implementation of IMediaRepository. If you think about how this fits together, you will see that it still only ever creates one physical instance of a cache, so you never need to fetch the same data twice.

None of this is to say that every single piece of software in the world needs to be architected to these exacting standards of high cohesion and loose coupling; it depends on the size and scope of the project, your team, your budget, deadlines, etc. But if you're asking what the best design is (to use in place of a singleton), then this is it.

P.S. As others have stated, it's probably not the best idea for the dependent classes to be aware that they are using a cache - that is an implementation detail they simply should never care about. That being said, the overall architecture would still look very similar to what's pictured above; you just wouldn't refer to the individual interfaces as Caches. Instead you'd name them Services or something similar.
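As a companion to the diagrams above, here is a minimal Java sketch of the interface-based design with constructor injection (the method signatures and bodies are illustrative assumptions; only the type names come from the answer's diagram):

// Narrow interfaces: each page depends only on the data it actually needs.
interface IMediaCache {
    byte[] getVideo(String videoId);
}

interface IProfileCache {
    String getDisplayName(String userId);
}

// One physical instance implements all of the narrow interfaces.
class MegaCache implements IMediaCache, IProfileCache {
    @Override
    public byte[] getVideo(String videoId) {
        // In a real cache this would fetch from the server once and then
        // serve the local copy; a placeholder is returned here.
        return new byte[0];
    }

    @Override
    public String getDisplayName(String userId) {
        return "user-" + userId; // placeholder
    }
}

// Dependent classes receive only the interface they need via the constructor
// (dependency injection) instead of reaching for a global Singleton.
class VideoPage {
    private final IMediaCache media;

    VideoPage(IMediaCache media) {
        this.media = media;
    }

    void render(String videoId) {
        byte[] video = media.getVideo(videoId);
        // ... render the video ...
    }
}

class Application {
    public static void main(String[] args) {
        MegaCache cache = new MegaCache();     // created exactly once, at startup
        VideoPage page = new VideoPage(cache); // passed in, not looked up globally
        page.render("abc123");
    }
}

In a test, VideoPage can be handed a fake IMediaCache, which is exactly the testability that the Singleton pattern takes away.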
{ "source": [ "https://softwareengineering.stackexchange.com/questions/40373", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/5064/" ] }
40,394
Do you have any particular style of organizing projects? For example, currently I'm creating a project for a couple of schools here in Bolivia. This is how I organized it:

TutoMentor (Solution)
    TutoMentor.UI (WinForms project)
    TutoMentor.Data (Class library project)

How exactly do you organize your project? Do you have an example of something you organized and are proud of? Can you share a screenshot of the Solution pane? In the UI area of my application, I'm having trouble deciding on a good schema to organize different forms and where they belong.

Edit: What about organizing different forms in the .UI project? Where and how should I group the different forms? Putting them all at the root level of the project is a bad idea.
When designing a project and laying out the architecture I start from two directions. First I look at the project being designed and determine what business problems need to be solved. I look at the people who will be using it and start with a crude UI design. At this point I am ignoring the data and just looking at what the users are asking for and who will be using it. Once I have a basic understanding of what they are asking for, I determine what the core data is that they will be manipulating and begin a basic database layout for that data. Then I start to ask questions to define the business rules that surround the data.

By starting from both ends independently I am able to lay out a project in a way that melds the two ends together. I always try to keep the designs separate for as long as possible before melding them together, but keep in mind the requirements of each as I move forward. Once I have a good solid understanding of each end of the problem I begin to lay out the structure of the project that will be created to solve the problem. Once the basic layout of the project solution is created I look at the functionality of the project and set up a base set of namespaces that are used depending on the type of work being done. This may be things like Account, Shopping Cart, Surveys, etc.

Here is the basic solution layout that I always start with. As the projects get better defined I refine it to meet the specific needs of each project. Some areas may be merged with others and I may add a few special ones as needed.

SolutionName
    .ProjectNameDocuments
        For large projects there are certain documents that need to be kept with it. For this I actually create a separate project or folder within the solution to hold them.
    .ProjectNameUnitTest
        Unit testing always depends on the project - sometimes it is just really basic to catch edge cases and sometimes it is set up for full code coverage. I have recently added graphical unit testing to the arsenal.
    .ProjectNameInstaller
        Some projects have specific installation requirements that need to be handled at a project level.
    .ProjectNameClassLibrary
        If there is a need for web services, APIs, DLLs or such.
    .ProjectNameScripts (**Added 2/29/2012**)
        I am adding this because I just found a need for one in my current project. This project holds the following types of scripts: SQL (tables, procs, views), SQL data update scripts, VBScripts, etc.
    .ProjectName
        .DataRepository
            Contains base data classes and database communication. Sometimes also holds a directory that contains any SQL procs or other specific code.
        .DataClasses
            Contains the base classes, structs, and enums that are used in the project. These may be related to but not necessarily connected to the ones in the data repository.
        .Services
            Performs all CRUD actions with the data, done in a way that the repository can be changed out with no need to rewrite any higher level code.
        .Business
            Performs any data calculations or business level data validation; does most interaction with the Service layer.
        .Helpers
            I always create a code module that contains helper classes. These may be extensions on system items, standard validation tools, regular expressions or custom-built items.
        .UserInterface
            The user interface is built to display and manipulate the data. UI forms always get organized by functional unit namespace, with additional folders for shared forms and custom controls.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/40394", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/-1/" ] }
40,454
Every now and then I see "closures" being mentioned, and I tried looking it up but Wiki doesn't give an explanation that I understand. Could someone help me out here?
(Disclaimer: this is a basic explanation; as far as the definition goes, I'm simplifying a little bit)

The most simple way to think of a closure is a function that can be stored as a variable (referred to as a "first-class function"), that has a special ability to access other variables local to the scope it was created in. Example (JavaScript):

var setKeyPress = function(callback) {
    document.onkeypress = callback;
};

var initialize = function() {
    var black = false;

    document.onclick = function() {
        black = !black;
        document.body.style.backgroundColor = black ? "#000000" : "transparent";
    }

    var displayValOfBlack = function() {
        alert(black);
    }

    setKeyPress(displayValOfBlack);
};

initialize();

The functions[1] assigned to document.onclick and displayValOfBlack are closures. You can see that they both reference the boolean variable black, but that variable is assigned outside the function. Because black is local to the scope where the function was defined, the pointer to this variable is preserved. If you put this in an HTML page:

1. Click to change to black
2. Hit [enter] to see "true"
3. Click again, changes back to white
4. Hit [enter] to see "false"

This demonstrates that both have access to the same black, and can be used to store state without any wrapper object. The call to setKeyPress is to demonstrate how a function can be passed just like any variable. The scope preserved in the closure is still the one where the function was defined.

Closures are commonly used as event handlers, especially in JavaScript and ActionScript. Good use of closures will help you implicitly bind variables to event handlers without having to create an object wrapper. However, careless use will lead to memory leaks (such as when an unused but preserved event handler is the only thing to hold on to large objects in memory, especially DOM objects, preventing garbage collection).

[1]: Actually, all functions in JavaScript are closures.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/40454", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2210/" ] }
40,508
I always come across people who like to bang on for ages over the smallest "technical things". Don't get me wrong, I'm a geek programmer who loves what I do, but you know the type of conversation:

- Mac is so much better than Windows.
- Don't use a For Each loop, use a While loop.
- Don't buy an Intel-based PC, get an AMD-based one.
- We should use one IoC container over another.

All these "things" have valid pros and cons for both sides, and you'll never get a "correct" answer, and the person will never concede the point. (Of course there will be some where there is an answer, maybe :) My question (I'm getting there!!) is: in a software team, how do you cut through these long discussions, without inhibiting innovation, so that a decision can be made and you can get on to solving the real business problems?
Problem 1. Some people don't like to lose. If they're not calling the shots, they're going to debate until they call the shots through attrition. Problem 2. Nothing's really at stake, so debating is tolerated. Nothing's at stake? Yes. Most of the decisions have almost zero dollar impact. The fact that it comes down to "bang on for ages" means that both choices are effectively identical. What to do? Realize that nothing's at stake. Realize that in 2 or 3 years, the whole subject will be reopened because something outside the organization changed. Toss a coin. Seriously. Just pick something and move on. Some folks will see the silliness in debating. Some folks will then debate the nature of the coin being tossed. If folks can't be satisfied with a coin toss, they have ego problems and need to learn that (a) nothing's at stake and (b) the decision will be changed in a few years. If they can't figure out that nothing's at stake, they need to write out the dollar value of both sides of the argument. At some point, someone may see that more man-hours are being spent on analysis than the actual decision is worth. A coin toss produces equal value for a lower cost.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/40508", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/14363/" ] }
40,561
Can I develop a BSD-licensed software that extends an Apache-licensed software (and vice-versa)?
The short answer is that if you use open source software in your project, you must satisfy all the requirements laid out in that license. Neither BSD nor Apache 2.0 is a "viral" license, meaning AFAIK it makes no demands on the other source code you include it with. That also means that unlike the GPL, you aren't releasing a "product" under a BSD or Apache license; rather, each file or section of code is licensed under the license that the author released it under. So you could have a project where one module was BSD and one module was Apache, and I see no reason why you couldn't distribute that, as long as you were explicit about it. Note that IANAL. Do you have the right to take Apache-licensed code and re-license it as BSD? I highly doubt it. I believe Apache has a couple more restrictions on it than BSD. You'd have to keep the original copyright notice, of course, but I also believe you have to do a bit extra when releasing changes to it, and you couldn't (or at least shouldn't) remove those conditions by changing the license. You may include (extend) BSD-licensed code in an Apache v2.0 licensed code base, including source code managed by the Apache Software Foundation.

For the purposes of being a dependency to an Apache product, which licenses are considered to be similar in terms to the Apache License 2.0? Works under the following licenses may be included within Apache products:

• BSD (without advertising clause). Including variants:
  ◦ BSD 2-clause
  ◦ BSD 3-clause

Please consult your attorney for risk and compliance advice.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/40561", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/12038/" ] }
40,564
I'm trying to edit a JSP for a project and I'm getting a NullPointerException somewhere in the JSP when it's requested from my server. My web server (JBoss) is reporting the exception, but it's giving me a bogus line number. It's reporting that the exception happened on line 702, but my JSP is only 146 lines long, so I'm unable to identify which line is choking. What are some good techniques to debug errors in JSPs? I'm using IntelliJ 9 Ultimate as my IDE. Thanks
There are several options that can help you:

1. This answer actually explains how to debug JSPs specifically: https://stackoverflow.com/questions/33739/jsp-debugging-in-intellij-idea. I haven't ever tried this, so I followed the next three suggestions when I was using JSP, JBoss, and IntelliJ...
2. Remember that JSPs are compiled to classes. When it says line 702, it means line 702 in the compiled class. If you have test.jsp, the class name is probably test_jsp, so open your JBoss work directory and search for test_jsp.java (once you find the right directory, you'll see that the directory structure matches your JSP directory structure). Whenever I had a JSP exception I could find the line quite easily and usually match it up to the corresponding line in the JSP.
3. Breakpoints work just fine in Java classes called from a JSP. So, maybe you can move the Java scriptlet code that you have in your JSP into a class and debug from the entry point. In the future you can also make a habit of rearranging your logic so that the majority of it is in a class that is being called from the JSP, and the JSP is simple, straightforward and hopefully not throwing any exceptions. This is good practice anyway.
4. Better yet, move all (or virtually all) Java logic into Java classes, leaving JSPs for HTML and JSP tags. I know this isn't feasible right away, but again, it's a good idea long term to avoid problems like this.
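As a sketch of point 3 above (the class name and logic are hypothetical, purely to illustrate moving scriptlet code into a debuggable class):

// A plain Java helper class holding logic that would otherwise live in a
// JSP scriptlet. Being an ordinary class, it can be unit tested and stepped
// through in the debugger, and a stack trace points at a real line in this
// file rather than at a line in a generated *_jsp class.
public class GreetingHelper {

    public String buildGreeting(String userName) {
        // Guarding against null here avoids the kind of NullPointerException
        // that is hard to locate when it is thrown from generated JSP code.
        if (userName == null || userName.isEmpty()) {
            return "Hello, guest!";
        }
        return "Hello, " + userName + "!";
    }
}

// In the JSP, the scriptlet then shrinks to a single call, for example:
//   <%= new GreetingHelper().buildGreeting(request.getParameter("name")) %>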
{ "source": [ "https://softwareengineering.stackexchange.com/questions/40564", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/14421/" ] }
40,590
Lots of programmers I've met say, "I'm not a UI guy." The fact is that development nowadays, whether for web, Windows, Linux, OS X, or any other platform, now involves software with a good-looking UI. Why do so many developers seem to not like UI work?
I'm not a UI person either. Well, I do UI on my own projects, but at work I have nothing to do with it -- my work is in the guts of the app, not on the front end. Beyond that, I think it's more boredom than hate. Designing the UI is the hard and challenging part. Implementation is mostly grunt work. There's very little challenge or innovation in how one can implement a user interface, and there's only so many times one can put a checkbox on the screen before going slightly mental. And that's not even touching on spending hours aligning pixels "just so".
{ "source": [ "https://softwareengineering.stackexchange.com/questions/40590", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/14672/" ] }
40,614
Edit: I should point out that my personal view was that I should be proactive. I know sometimes I have to bite my tongue, and I wanted to get the community's input (was this one of those times?). I couldn't find a more appropriate place to ask it in the SO family of sites.

Here is the scenario:

- small org, < 70 employees
- no QA department
- website viewed by thousands every day
- I am the sole website developer
- I have never had a single complaint that the site is broken in IE6
- I've discovered our site has not worked in IE6 for years

The person I replaced, who created it, must have been "testing" it only on IE7. I fired up Virtual PC with IE6, and our site is a complete mess. You cannot select some menu items, they are so garbled. It looks terrible. So again: is it our job to proactively seek out bugs, or do we just fix what the customer requests?

Personally, I want to leverage this opportunity with my org to drop any expectation of IE6 support or compatibility.
Short answer: yes. A professional developer should be proactive. Long answer: it depends. Do you have any analytics set up on your site(s)? If so, you can use the browser reports to see what percentage of your traffic is IE6, and use that to determine whether the ROI is worth your (or the business's) time.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/40614", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/14699/" ] }
40,643
It seems that even though developer tools have become more solid and robust, writing good code has become a challenge. Even though tools are more powerful, the quality of code hasn't gotten better. I have come up with two important factors: there is less time, and the projects are more complex. Because the tools we use today are more powerful, it is easier to write more complex code, but having no time to plan and never looking back decreases code quality and increases bugs and maintenance. It is not that we didn't write complex code before. It is that we write more complex code now. My question is the following: considering that we have more powerful languages and tools, why is writing good code more difficult? Do the factors of time and complexity contribute to this? Are methodologies not practiced correctly? The type of project I am considering is an enterprise application with large complexity and lots of business logic. The definition of "good code" is individual, so please don't get stuck on the interpretation of "good code".
As DeMarco and Lister stated in Peopleware some 20-ish years ago, the vast majority of failed software projects fail not due to technical challenges, but due to sociological problems. This hasn't changed in the past decades, no matter how much our tools have improved. Mismanagement, unrealistic expectations, failing to get the right people for the job and/or not letting them do their job, and consequently failing to keep them; workplaces and tools which are not suitable for SW development work; unhandled personal conflicts; politics; these are just a few of the typical problems which may make a project doomed from the start.

Why is writing good code harder? I am not quite convinced it is really harder to write good code now than it was decades ago. In fact, compared to machine code or assembly, everything we have now in the mainstream is way easier to handle. It's just that we may need to produce more of it.

Is it only because of the mentioned factors, time and complexity? Yes, the achievable complexity has certainly increased (and continues to increase) as the power of our tools increases. In other words, we keep pushing the boundaries. To me this translates into it being equally hard to solve today's greatest challenges as it was 30 years ago to solve that day's greatest challenges. OTOH, since the field has grown so enormously, there are way more "small" or "known" problems now than there were 30 years ago. These problems should technically not be a challenge anymore, but... here enters the above maxim :-( Also, the number of programmers has since grown enormously. And at least my personal perception is that the average level of experience and knowledge has declined, simply because there are far more juniors arriving continuously in the field than there are seniors who could educate them.

Is it that methodologies are not practiced correctly? IMHO certainly not. DeMarco and Lister have some harsh words about big-M Methodologies. They say that no Methodology can make a project succeed - only the people in the team can. OTOH, the small-m methodologies they praise are quite close to what we now know as "agile", which is spreading widely (IMHO for a good reason). Not to mention such good practices as unit testing and refactoring, which just 10 years ago weren't widely known, and which nowadays even many graduates know.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/40643", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/7083/" ] }
40,738
I love programming in languages that seem geared towards hardcore programmers. (My favorites are Python and D.) MATLAB is geared towards engineers and R is geared towards statisticians, and it seems like these languages were designed for people who aren't hardcore programmers and don't think like hardcore programmers. I always find them somewhat awkward to use, and to some extent I can't put my finger on why. Here are some issues I have managed to identify:

- (Both): The extreme emphasis on vectors and matrices to the extent that there are no true primitives.
- (Both): The difficulty of basic string manipulation.
- (Both): Lack of or awkwardness in support for basic data structures like hash tables and "real", i.e. type-parametric and nestable, arrays.
- (Both): They're really, really slow even by interpreted language standards, unless you bend over backwards to vectorize your code.
- (Both): They seem to not be designed to interact with the outside world. For example, both are fairly bulky programs that take a while to launch and seem not to be designed to make simple text filter programs easy to write. Furthermore, the lack of good string processing makes file I/O in anything but very standard forms near impossible.
- (Both): Object orientation seems to have a very bolted-on feel. Yes, you can do it, but it doesn't feel much more idiomatic than OO in C.
- (Both): No obvious, simple way to get a reference type. No pointers or class references. For example, I have no idea how you roll your own linked list in either of these languages.
- (MATLAB): You can't put multiple top-level functions in a single file, encouraging very long functions and cut-and-paste coding.
- (MATLAB): Integers apparently don't exist as a first-class type.
- (R): The basic built-in data structures seem way too high level and poorly documented, and never seem to do quite what I expect given my experience with similar but lower-level data structures.
- (R): The documentation is spread all over the place and virtually impossible to browse or search. Even D, which is often knocked for bad documentation and is still fairly alpha-ish, is substantially better as far as I can tell.
- (R): At least as far as I'm aware, there's no good IDE for it. Again, even D, a fairly alpha-ish language with a small community, does better.

In general, I also feel like MATLAB and R could easily be replaced by plain old libraries in more general-purpose languages, if sufficiently comprehensive libraries existed. This is especially true in newer general-purpose languages that include lots of features for library writers. Why do R and MATLAB seem so weird to me? Are there any other major issues that you've noticed that may make these languages come off as strange to hardcore programmers? When their use is necessary, what are some good survival tips?

Edit: I'm seeing one issue from some of the answers I've gotten. I have a strong personal preference, when I analyze data, for having one script that incorporates the whole pipeline. This implies that a general-purpose language needs to be used. I hate having to write a script to "clean up" the data and spit it out, then another to read it back in a completely different environment, etc. I find using MATLAB/R for some of my work and a completely different language with a completely different address space and way of thinking for the rest to be a huge source of friction. Furthermore, I know there are glue layers that exist, but they always seem to be horribly complicated and a source of friction.
It's probably a bad idea to approach domain-specific languages with the mindset required for programming at large, or for programming general programs with general-purpose languages. Being domain specific, they will likely require a steeper learning curve and an uncomfortable mindset in order to be used most efficiently. I consider writing code in Matlab equivalent to writing highly optimized, domain-specific code (on par with, for example, writing efficient and clean OpenGL code). I've also seen them move more and more towards becoming useful as libraries to be used in other languages - see, for example, http://www.mathworks.com/matlabcentral/fileexchange/12987-integrating-matlab-with-c

I would say, use the same process for these DSLs as you would for any others:

- Carefully select the problems which you are solving using Matlab or R, to make sure that they are exactly the kinds of problems which they are best at solving. For example, use Matlab to manipulate your vectors, and not for the rest of your work, if you can avoid it.
- Generally, mix and match the solution to restrict the portions that you program in Matlab or R to the exact subset of the problem which they are built to handle.
- Follow the mindset of a typical user in the domain that the languages are built for when designing and building your solution - adopt a vector-mathematical attitude towards the world before starting to work on a Matlab program, for example; possibly write up your work on paper, using standard math notation, first.
- Do the extra work required to build yourself a comfortable work environment, and obtain the tools required for doing the job, even if they differ from the standard for the DSL. If you're an emacs user, for example, consider using the matlab mode for emacs to do your work; make sure it works as well as the modes you've set up for other languages.
- Be ready to switch out. Especially if you have to come back to the language often, make sure to build yourself a reliable ecosystem where the work you do in the DSL is confined to the domain-specific work only, and it's as easy as possible to switch to another language for the rest of your work.
- Remind yourself, more often than usual, to look for ways to do the non-DSL-specific work in other systems.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/40738", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/1468/" ] }
40,845
When I looked at JavaScript, it seemed like it was not my cup of tea. When I came across jQuery, I loved it. I sat and watched Nettuts+'s 15 days of jQuery screencasts; one year later, I'm fairly confident I wouldn't develop a website without including the jQuery library. I have never felt this has held me back, but my question is: will this come back and bite me in the ass one day - the fact that I didn't have a solid JavaScript foundation before jumping feet first into one of its best (if not the best) frameworks? Did anyone else take this approach?
jQuery makes it easier and shorter to write JavaScript code. But jQuery does not replace JavaScript with a language of its own. If you use jQuery, you don't need to know some things, like how to find an element based on its id or its name, or how to loop through the <li/> elements of an <ul/> list (since you can easily do it with jQuery: $('ul#ListNameHere li').each(function() { }); ). But even if you use jQuery, you still need to know how JavaScript works, and how to do things jQuery doesn't do for you. That means you must know:

- the language itself (how to use arrays, what closures are, etc.);
- the non-jQuery things (calculations, for example).

Last but not least, if you intend to write jQuery code at a professional level, you should know JavaScript well (including the things that jQuery does for you), in order to write optimal code, not be stuck when there is a bug, etc.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/40845", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/8622/" ] }
41,045
There is no shortage of vague "Scheme vs Common Lisp" questions on both Stack Overflow and on this site, so I want to make this one more focused. The question is for people who have coded in both languages: while coding in Scheme, what specific elements of your Common Lisp coding experience did you miss most? Or, inversely, while coding in Common Lisp, what did you miss from coding in Scheme? I don't necessarily mean just language features. The following are all valid things to miss, as far as the question is concerned:

- Specific libraries.
- Specific features of development environments like SLIME, DrRacket, etc.
- Features of particular implementations, like Gambit's ability to write blocks of C code directly into your Scheme source.
- And of course, language features.

Examples of the sort of answers I'm hoping for:

- "I was trying to implement X in Common Lisp, and if I had Scheme's first-class continuations, I totally would've just done Y, but instead I had to do Z, which was more of a pain."
- "Scripting the build process in my Scheme project got increasingly painful as my source tree grew and I linked in more and more C libraries. For my next project, I moved back to Common Lisp."
- "I have a large existing C++ codebase, and for me, being able to embed C++ calls directly in my Gambit Scheme code was totally worth any shortcomings that Scheme may have vs Common Lisp, even including lack of SWIG support."

So, I'm hoping for war stories, rather than general sentiments like "Scheme is a simpler language", etc.
My undergrad degree was in Cognitive Science and Artificial Intelligence. From that I had a one-course intro to Lisp. I thought the language was interesting (as in "elegant") but didn't really think much of it until I came across Greenspun's Tenth Rule much later: Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp. Greenspun's point was (in part) that many complex programs have built-in interpreters. Rather than building an interpreter into a language he suggested it might make more sense to use a language like Lisp that already has an interpreter (or compiler) built-in. At the time I had been working on a rather big app that performed user-defined calculations using a custom interpreter for a custom language. I decided to try re-writing its core in Lisp as a large-scale experiment. It took roughly six weeks. The original code was ~100,000 lines of Delphi (a Pascal variant). In Lisp that was reduced to ~10,000 lines. Even more surprising, though, was the fact that the Lisp engine was 3-6 times faster. And keep in mind that this was the work of a Lisp neophyte! That whole experience was quite an eye-opener for me; for the first time I saw the possibility of combining performance and expressiveness in a single language. Some time later when I started working on a web-based project I auditioned a number of languages. I included Lisp and Scheme in the mix. In the end I selected a Scheme implementation-- Chez Scheme . I've been very happy with the results. The web-based project is a high-performance "selection engine" . We use Scheme in a number of different ways, from processing data to querying data to page generation. In many spots we actually started off with a different language but ended up migrating to Scheme for reasons I'll describe briefly below. Now I can answer your question (at least in part). During the audition we looked at a variety of Lisp and Scheme implementations. On the Lisp side we looked at (I believe) Allegro CL, CMUCL, SBCL and LispWorks. On the Scheme side we looked at (I believe) Bigloo, Chicken, Chez, Gambit. (The language selection was a long time ago; that's why I'm a bit hazy. I can dig up some notes if it's important.) Right off the bat we were looking for a) native threads and b) Linux, Mac and Windows support. Those two conditions combined knocked everyone but (I think) Allegro and Chez out--so in order to continue the evaluation we had to loosen the multi-threading requirement. We put together a suite of small programs and used them for evaluation and testing. That revealed a number of issues. For example: some implementations had defects that prevented some tests from running to completion; some implementations couldn't compile code at run-time; some implementations couldn't easily integrate run-time compiled code with pre-compiled code; some implementations had garbage collectors which were clearly better (or clearly worse) than the others'; etc. For our needs only the three commercial implementations--Allegro, Chez and Lispworks--passed our primary tests. Of the three only Chez passed all tests with flying colors. At the time I think Lispworks didn't have native threads on any platform (I think they do now) and I think Allegro only had native threads on some platforms. Furthermore, Allegro had a "call us" run-time licensing fee which I didn't like very much. 
I believe Lispworks had no run-time fee and Chez had a straightforward (and very reasonable) arrangement (and it only kicked in if you used the compiler at run-time). Having produced somewhat significant chunks of code in both Lisp and Scheme here are some compare and contrast points: The Lisp environments are far more mature. You get a lot more bang for the buck. (Having said that, more code also equates to more bugs.) The Lisp environments are far more difficult to learn. You need a lot more time to become proficient; Common Lisp is a huge language--and that's before you get to the libraries that the commercial implementations add on top of it. (Having said that, Scheme's syntax-case is far more subtle and complicated than any one thing in Lisp.) The Lisp environments can be somewhat more difficult to produce binaries in. You need to "shake" your image to remove unneeded bits, and if you don't exercise your program correctly during that process you could end up with run-time errors later on. By contrast, with Chez we compile a top-level file that includes all of the other files it needs and we're done. I said before that we ended up using Scheme in a number of places we didn't originally intend to. Why? I can think of three reasons off the top of my head. First, we learned to trust Chez (and its developer, Cadence). We asked a lot from the tool, and it consistently delivered. For example, Chez has historically had a trivially small number of defects, and its memory manager has been very, very good. Second, we learned to love the performance we got from Chez. We were using something that felt like a scripting language--and we were getting native-code speed from it. For some things that didn't matter--but it never hurt, and sometimes it helped an awful lot. Third, we learned to love the abstraction Scheme could provide. I don't just mean macros, by the way; I mean things like closures, lambdas, tail-calls, etc. Once you start thinking in those terms other languages seem rather limited by comparison. Is Scheme perfect? No; it's a trade-off. First, it allows individual developers to be more effective--but it's more difficult for developers to grok each other's code because the signposts that most languages have (e.g., for loops) are missing in Scheme (e.g., there are a million ways to do a for loop). Second, there's a much smaller pool of developers to talk to, hire from, borrow from, etc. To sum it up, I think I'd say: Lisp and Scheme offer some capabilities not widely available anywhere else. That capability is a trade-off, so it had better be one that makes sense in your particular case. In our case the determining factors between whether to go with Lisp or Scheme had more to do with very fundamental features (platform support, platform threads, run-time compilation, run-time licensing) than they did with language or library features. Again, in our case that too was a trade-off: with Chez we got the core features we wanted but we lost the extensive libraries the commercial Lisp environments had. Also, just to reiterate: we looked at the various Lisps and Schemes a long time ago; they've all evolved and improved since.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/41045", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/11303/" ] }
41,050
I had a user ask me this question. We know that cars break down, but that's because of something physical (unless software is involved!). I tried to answer that software is a much younger industry, but the user countered with "didn't the automobile industry become much more stable and reliable with fewer people?". I also tried to answer that software is more complex, but the user countered that there are many thousands of parts that make up a car. People who design and build cars generally just know their component(s) very well, but they still all end up working together as an end result. So, why isn't software as reliable as a car?
The premise of your question is simply incorrect: software isn't "less reliable" than a car. There are billions upon billions of devices out there that run embedded software 24x7, for years on end, with no problem. Heck, some of them are in cars, controlling and monitoring the engine. So, how can software be less reliable than a car, if cars themselves rely on software?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/41050", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/5941/" ] }
41,128
Note: This question is not about the "obnoxious BSD advertising clause" . The New BSD license does not contain that clause, and is compatible with the GPL. I'm trying to pick between the New BSD license and the MIT license for my own projects. They are essentially identical , except the BSD license contains the following clause: Neither the name of the <organization> nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. Why would anyone want to use this clause? What's wrong with gaining some notoriety if someone makes a well-known piece of software using your code? Also, wouldn't dictating what users can and cannot do with your given name fall outside the domain of intellectual property?
What's wrong with gaining some notoriety if someone makes a well-known piece of software using your code?

(The issue is not with someone using your code. The issue is with someone using your name or your product's name as an endorsement for their code or actions ... and giving you or your code a bad reputation as a result.)

I can think of a number of things that could be wrong with that kind of notoriety:

- it could reduce your options in getting employment
- it could drive away current or potential sponsors (for an open source project)
- it could reduce your chances of getting further research funding (for an academic)
- it could deter paying customers (for a company)
- it could attract unwarranted attention from law enforcement
- it could attract opportunistic or vindictive lawsuits
- it could make you a target for a social media hate storm.

Also, wouldn't dictating what users can and cannot do with your given name fall outside the domain of intellectual property?

The "domain of intellectual property" is not a concept that has any significance to the enforceability of the terms of a license. What matters is whether people who want to use the licensed material are prepared to accept the license conditions that you have set. As the owner of the IP, you are entitled to place any conditions on its use that you want to*.

* - Actually, there probably are limits on what conditions you can set. A condition requiring someone to perform an illegal act is probably illegal, and definitely unenforceable. Also, legal but "unconscionable" conditions are likely to fail a challenge in a lawsuit. IANAL - talk to one if you need legal advice.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/41128", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/3650/" ] }
41,248
My boss has always told me that a good programmer should be able to ensure that the code he or she changes is reliable, correct, and thoroughly self-verified; that you should completely understand all the results and impacts your changes will cause. I have tried my best to be this kind of programmer—by testing again and again—but bugs are still there. How can I be a zero-bug programmer and know what every character of my code will cause and affect?
Don't code at all. That's the only way you can be a zero-bug programmer. Bugs are unavoidable because programmers are human, all we can do is try our best to prevent them, react quickly when a bug occurs, learn from our mistakes and stay up to date.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/41248", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/4841/" ] }
41,409
Test-driven development (TDD) is big these days. I often see it recommended as a solution for a wide range of problems here in Programmers SE and other venues. I wonder why it works. From an engineering point of view, it puzzles me for two reasons: The "write test + refactor till pass" approach looks incredibly anti-engineering. If civil engineers used that approach for bridge construction, or car designers for their cars, for example, they would be reshaping their bridges or cars at very high cost, and the result would be a patched-up mess with no well thought-out architecture. The "refactor till pass" guideline is often taken as a mandate to forget architectural design and do whatever is necessary to comply with the test; in other words, the test, rather than the user, sets the requirement. In this situation, how can we guarantee good "ilities" in the outcomes, i.e. a final result that is not only correct but also extensible, robust, easy to use, reliable, safe, secure, etc.? This is what architecture usually does. Testing cannot guarantee that a system works; it can only show that it doesn't. In other words, testing may show you that a system contains defects if it fails a test, but a system that passes all tests is not safer than a system that fails them. Test coverage, test quality and other factors are crucial here. The false sense of safety that an "all green" outcome produces in many people has been reported in the civil and aerospace industries as extremely dangerous, because it may be interpreted as "the system is fine", when it really means "the system is as good as our testing strategy". Often, the testing strategy is not checked. Or, who tests the tests? In summary, I am more concerned about the "driven" bit in TDD than about the "test" bit. Testing is perfectly OK; what I don't get is driving the design by doing it. I would like to see answers containing reasons why TDD in software engineering is a good practice, and why the issues that I have explained above are not relevant (or not relevant enough) in the case of software. Thank you.
I think there is one misconception here. In software design, the design is very close to the product. In civil engineering, architecture, the design is decoupled from the actual product: there are blueprints that hold the design, that are then materialized into the finished product, and those are separated by huge amounts of time and effort. TDD is testing the design. But every car design and building design is also tested. Construction techniques are first calculated, then tested at a smaller scale, then tested at a larger scale, before being put to use in a real building. When they invented H-beams and calculated their loads, for example, rest assured that this was tried and tried again before they actually built the first bridge with them. Designs of cars are also tested, by designing prototypes, and yes, certainly by adjusting things that are not exactly right, until it lives up to the expectations. Part of this process, though, is slower, because as you said, you can't mess around much with the product. But every redesign of a car draws on experiences learned from former ones, and every building has about a thousand years of fundamentals behind it concerning the importance of space, light, insulation, strength, etc. Details are changed and improved, both in the buildings and in redesigns for newer ones. Also, parts are tested. Perhaps not exactly in the same style as software, but mechanical parts (wheels, igniters, cables) are usually measured and put under stress to know the sizes are correct, no abnormalities are to be seen, etc. They might be X-rayed or laser-measured, bricks are tapped to spot broken ones, parts might be actually tested in some configuration or other, or a limited sample of a large group is drawn to really put it to the test. Those are all things you can put in place with TDD. And indeed, testing is no guarantee. Programs crash, cars break down, and buildings start doing funny things when the wind blows. But... 'safety' is not a boolean question. Even when you can't ever include everything, being able to cover - say - 99% of the eventualities is better than covering only 50%. Not testing and then finding out the steel hasn't settled well and is brittle and breaks at the first smack of a hammer when you just put up your main structure is a plain waste of money. That there are other concerns that might still hurt the building does not make it any less stupid to allow an easily preventable flaw to bring down your design. As to the practice of TDD, that is a matter of balancing: the cost of doing it one way (for example, not testing, and then picking up the pieces later) versus the cost of doing it another way. It is always a balance. But do not think that other design processes do not have testing and TDD in place.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/41409", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/5021/" ] }
41,435
I heard Google has a giant private (internal) repository of all of their code and their employees have access to it so that when they are developing things they don't have to reinvent the wheel. I'd like to know more about it! Is there anyone here from Google that can describe it in a bit more detail, or do you know a bit more about it? I'm interested in knowing mainly about how it's organized and how they can make it easy for an employee to find something in such a giant codebase as it must be.
Here is a video explaining how it is organized: Development at the Speed and Scale of Google Ashish Kumar presents how Google manages to keep the source code of all its projects, over 2000, in a single code trunk containing hundreds of millions of code lines, with more than 5,000 developers accessing the same repository.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/41435", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/1225/" ] }
41,473
Like most people, I think of myself as being a bit above average in my field. I get paid well, I've gotten promotions, and I've never had a real problem getting good references or getting a job. But I've been around enough to notice that many of the worst programmers I've worked with thought they were some of the best. Bad programmers who are surrounded by other bad programmers seem to be the most self-deluded. I'm certainly not perfect. I do make mistakes. I do miss deadlines. But I think I make about the same number of bonehead moves that "other good programmers" do. The problem is that I define "other good programmers" to mean "people who are like me." So, I wonder, is there any way a programmer can make some sort of reasonable self-evaluation? How do we know whether we are good or bad at our jobs? Or, if terms like good and bad are too ill-defined, how can programmers honestly identify their own strengths and weaknesses, so that they can take advantage of the former and work to improve the latter?
A good programmer understands that they have to continue to learn and grow. They strive to do their best at every effort, admit to failures and learn from them. They are extraordinarily communicative. Not only are they able to explain complex technical terms to a layperson, but they go out of their way to act as devil's advocate to their own idea to make sure they're giving the best options to their client. The best programmers know and accept that there is more than one way to do things, that not every problem is a nail, and that, because there is always a better way to do something than how they were planning on, they constantly seek to learn new techniques, technologies, and understanding. A good programmer loves to program, and would do so in their spare time even if they already spend 80+ hours a week programming. A good programmer knows that she/he is not a great programmer. Truly great programmers do not exist, there are only those who claim to be great, and those who know they are not great.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/41473", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/778/" ] }
41,740
"Abstract class" and "interface" are similar concepts, with interface being the more abstract of the two. One differentiating factor is that abstract classes provide method implementations for derived classes when needed. In C#, however, this differentiating factor has been reduced by the recent introduction of extension methods, which enable implementations to be provided for interface methods. Another differentiating factor is that a class can inherit only one abstract class (i.e., there is no multiple inheritance), but it can implement multiple interfaces. This makes interfaces less restrictive and more flexible. So, in C#, when should we use abstract classes instead of interfaces with extension methods? A notable example of the interface + extension method model is LINQ, where query functionality is provided for any type that implements IEnumerable via a multitude of extension methods.
When you need a part of the class to be implemented. The best example I've used is the template method pattern. public abstract class SomethingDoer { public void Do() { this.DoThis(); this.DoThat(); } protected abstract void DoThis(); protected abstract void DoThat(); } Thus, you can define the steps that will be taken when Do() is called, without knowing the specifics of how they will be implemented. Deriving classes must implement the abstract methods, but not the Do() method. Extension methods don't necessarily satisfy the "must be a part of the class" part of the equation. Additionally, iirc, extension methods cannot (appear to) be anything but public in scope. Edit: The question is more interesting than I had originally given credit for. Upon further examination, Jon Skeet answered a question like this on SO in favour of using interfaces + extension methods. Also, a potential downside is using reflection against an object hierarchy designed in this way. Personally, I am having trouble seeing the benefit of altering a currently common practice, but also see few to no downsides in doing it. It should be noted that it is possible to program this way in many languages via Utility classes. Extensions just provide the syntactic sugar to make the methods look like they belong to the class.
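To make the pattern concrete, here is a minimal sketch of a deriving class and a caller. Only SomethingDoer comes from the answer above; the ReportDoer class and its console output are invented for illustration:

```csharp
using System;

public abstract class SomethingDoer
{
    public void Do()
    {
        this.DoThis();
        this.DoThat();
    }

    protected abstract void DoThis();
    protected abstract void DoThat();
}

// A derived class only fills in the abstract steps; the order of the steps
// is fixed once, in the base class's Do() method.
public class ReportDoer : SomethingDoer
{
    protected override void DoThis() { Console.WriteLine("Loading report data..."); }
    protected override void DoThat() { Console.WriteLine("Rendering report..."); }
}

public static class Program
{
    public static void Main()
    {
        SomethingDoer doer = new ReportDoer();
        doer.Do();   // always runs DoThis() then DoThat(), in that order
    }
}
```

Note that with an interface plus an extension method, DoThis and DoThat would typically have to be public members of the interface, which is part of the visibility point made above.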
{ "source": [ "https://softwareengineering.stackexchange.com/questions/41740", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/963/" ] }
41,773
I’m asking this question regarding problems I have experienced during TDD projects. I have noticed the following challenges when creating unit tests. Generating and maintaining mock data: It’s hard and unrealistic to maintain large mock data. It is even harder when the database structure undergoes changes. Testing the GUI: Even with MVVM and the ability to test the GUI, it takes a lot of code to reproduce the GUI scenario. Testing the business: In my experience, TDD works well if you limit it to simple business logic. However, complex business logic is hard to test since the number of combinations of tests (the test space) is very large. Contradiction in requirements: In reality it’s hard to capture all requirements during analysis and design. Many times requirements contradict one another because the project is complex. The contradiction is found late, during the implementation phase. TDD requires that requirements are 100% correct. In such cases one could expect that conflicting requirements would be captured during creation of the tests. But the problem is that this isn’t the case in complex scenarios. I have read this question: Why does TDD work? Does TDD really work for complex enterprise projects, or is it practically limited to certain project types?
It’s hard and unrealistic to maintain large mock data. It’s is even harder when database structure undergoes changes. False. Unit testing doesn't require "large" mock data. It requires enough mock data to test the scenarios and nothing more. Also, the truly lazy programmers ask the subject matter experts to create simple spreadsheets of the various test cases. Just a simple spreadsheet. Then the lazy programmer writes a simple script to transform the spreadsheet rows into unit test cases. It's pretty simple, really. When the product evolves, the spreadsheets of test cases are updated and new unit tests generated. Do it all the time. It really works. Even with MVVM and ability to test GUI, it’s takes a lot of code to reproduce the GUI scenario. What? "Reproduce"? The point of TDD is to Design things for Testability (Test Drive Development). If the GUI is that complex, then it has to be redesigned to be simpler and more testable. Simpler also means faster, more maintainable and more flexible. But mostly simpler will mean more testable. I have experience that TDD works well if you limit it to simple business logic. However complex business logic is hard to test since the number of combination of test (test space) is very large. That can be true. However, asking the subject matter experts to provide the core test cases in a simple form (like a spreadsheet) really helps. The spreadsheets can become rather large. But that's okay, since I used a simple Python script to turn the spreadsheets into test cases. And. I did have to write some test cases manually because the spreadsheets were incomplete. However. When the users reported "bugs", I simply asked which test case in the spreadsheet was wrong. At that moment, the subject matter experts would either correct the spreadsheet or they would add examples to explain what was supposed to happen. The bug reports can -- in many cases -- be clearly defined as a test case problem. Indeed, from my experience, defining the bug as a broken test case makes the discussion much, much simpler. Rather than listen to experts try to explain a super-complex business process, the experts have to produce concrete examples of the process. TDD requires that requirements are 100% correct. In such cases one could expect that conflicting requirements would be captured during creating of tests. But the problem is that this isn’t the case in complex scenario. Not using TDD absolutely mandates that the requirements be 100% correct. Some claim that TDD can tolerate incomplete and changing requirements, where a non-TDD approach can't work with incomplete requirements. If you don't use TDD, the contradiction is found late under implementation phase. If you use TDD the contradiction is found earlier when the code passes some tests and fails other tests. Indeed, TDD gives you proof of a contradiction earlier in the process, long before implementation (and arguments during user acceptance testing). You have code which passes some tests and fails others. You look at only those tests and you find the contradiction. It works out really, really well in practice because now the users have to argue about the contradiction and produce consistent, concrete examples of the desired behavior.
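The answer describes a Python script that turns spreadsheet rows into unittest.TestCase classes. As a rough illustration of the same idea in C#, here is a sketch using NUnit's TestCaseSource to feed a test from a CSV export of the experts' spreadsheet; the file name, column layout and DiscountCalculator class are all invented for the example:

```csharp
using System.Collections.Generic;
using System.IO;
using System.Linq;
using NUnit.Framework;

public class DiscountRuleTests
{
    // Each data row of the experts' spreadsheet (exported as cases.csv with the columns
    // orderTotal,itemCount,expectedDiscount) becomes one generated test case.
    public static IEnumerable<TestCaseData> CasesFromSpreadsheet()
    {
        return File.ReadLines("cases.csv")
                   .Skip(1)                                // skip the header row
                   .Select(line => line.Split(','))
                   .Select(f => new TestCaseData(
                       decimal.Parse(f[0]), int.Parse(f[1]), decimal.Parse(f[2])));
    }

    [TestCaseSource(nameof(CasesFromSpreadsheet))]
    public void Discount_matches_the_spreadsheet(decimal orderTotal, int itemCount, decimal expected)
    {
        var calculator = new DiscountCalculator();         // hypothetical class under test
        Assert.AreEqual(expected, calculator.For(orderTotal, itemCount));
    }
}
```

When users report a bug, you update the spreadsheet, re-export it, and the failing case appears automatically, which is exactly the workflow described above.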
{ "source": [ "https://softwareengineering.stackexchange.com/questions/41773", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/7083/" ] }
41,883
It seems like most of the common web browsers (Firefox, Chrome, Safari) are developed using C++. Why is that so?
Another way to ask the question is what kind of support does a browser need? The short list is: Support for parsing (needed to make sense of [X]HTML, CSS, and [ECMA/Java]Script) Tree walking/interpreting features (part of parsing and building UI) Support for accelerated graphics Fast networking For the more advanced browsers: control over processes and isolating memory between pages Must work on all supported platforms Most languages have some sort of parsing support. You have parser generators for C, C++, C#, Java, etc. However, C and C++ have quite a few years head start on the rest of the alternatives so the algorithms and implementations are more mature. Accessing accelerated graphics in Java is a no go, unless you have some native extensions to make it work. WPF on C# provides access to accelerated graphics, but it is too new to have a serious browser built with the technology. Networking is actually the least of the reasons to choose C++ over Java or C#. The reason is that communication is many times slower than the rest of the processing that goes on to display the page. The raw speed of the wire is the limiting factor. Both Java and C# have non-blocking IO support, as does C++. So there really is no clear winner in this area. Why not Java? Have you ever tried to build a UI with Java? It feels cumbersome and slow compared to anything else out there, because it is. No accelerated graphics is also a big negative here. Java's sandboxing is really good, and can help improve the security of a browser if it is used correctly, but it is a pain to configure and make work. Not to mention the graphics format support lags behind most modern browsers. Why not C#? If your only target is Windows, C# might actually make a good representation. The problem comes when you want to support anything else. Mono hasn't caught up enough to be considered cross platform enough for this task--particularly with accelerated graphics support and WPF. Who knows how long that will take to change. Why not C? There's a C compiler for just about every platform out there (including embedded devices). However, there's a lot that C does not do for you that you will have to be extra vigilant about. You have access to all the lowest levels of the APIs, but the majority of C developers don't do GUIs. Even the C GUI libraries are written in an object oriented manner. As soon as you start talking UI, an object oriented language starts making better sense. Why not Objective C? If your only target is Apple, it makes a lot of sense. However, most developers don't know Objective-C, and the only reason to learn it is to work on NeXT or Apple boxes. Sure you can use any C library with Objective-C, and there are compilers for many platforms, but finding people to work on it will be a touch more difficult. Who knows? Maybe Apple can turn this perceived deficiency around. Why C++? There's a C++ compiler for just about every platform out there. Almost every GUI library has a C++ interface, sometimes it's better and sometimes it's just different. For example, Microsoft's ATL is a lot better than win32 C function calls or even the MFC library. There's C++ wrappers for GTK on Unix, and I'd be surprised if someone didn't have a C++ wrapper around Apple's Objective-C GUI library. Process management is easier within C++ than Java or C# (those details are abstracted away for you). It's perceived speed comes more from hardware acceleration than it does raw performance. 
C++ does take care of more things for you than raw C (such as bounded strings), but still gives you freedom to tweak things. Not to mention a number of libraries needed to render web pages are also written in C or C++. For the time being, C++ does edge out the alternatives.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/41883", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/14491/" ] }
41,941
So you've heard it many times from those who do not truly understand the values of testing. Just to start things out, I'm a follower of Agile and Testing... I recently had a discussion about performing TDD on a product re-write where the current team does not practice unit testing on any level, and probably have never heard of the dependency injection technique or test patterns/design etc (we won't even get on to clean code). Now, I am fully responsible for the rewrite of this product and I'm told that attempting it in the fashion of TDD, will merely make it a maintenance nightmare and impossible for the team maintain. Furthermore, as it's a front-end application (not web-based), adding tests is pointless, as the business drive changes (by changes they mean improvements of course), the tests will become out of date, other developers who come on to the project in the future will not maintain them and become more of a burden for them to fix etc. I can understand that TDD in a team that does not currently hold any testing experience doesn't sound good, but my argument in this case is that I can teach my practice to those around me, but further more, I know that TDD makes BETTER software. Even if I was to produce the software using TDD, and throw all the tests away on handing it over to a maintenance team, it surely would be a better approach than not using TDD at all from the start? I've been shot down as I've mentioned doing TDD on most projects for a team that have never heard of it. The thought of "interfaces" and strange looking DI constructors scares them off... Can anyone please help me in what is normally a very short conversation of trying to sell TDD and my approach to people? I usually have a very short window of argument before falling at the knees to the company/team.
attempting it in the fashion of TDD, will merely make it a maintenance nightmare and impossible for the team maintain. You can't win that argument. They're making this up. Sadly, you have no real facts, either. Any example you provide can be disputed. The only way to make this point is to have code which is lower cost to maintain. Furthermore, as it's a front-end application (not web-based), adding tests is pointless, Everyone says this. It may be partially true, also. If the application is reasonably well designed, the front-end does very little. If the application is poorly designed, however, the front-end does too much and is difficult to test. This is a design problem, not a testing problem. as the business drive changes (by changes they mean improvements of course), the tests will become out of date, other developers who come on to the project in the future will not maintain them and become more of a burden for them to fix etc. This is the same argument as above. You can't win the argument. So don't argue. "I am fully responsible for the rewrite of this product" In that case, add tests anyway. But add tests as you go, incrementally. Don't spend a long time getting tests written first. Convert a little. Test a little. Convert a little more. Test a little more. Use those tests until someone figures out that testing is working and asks why things go so well. I had the same argument on a rewrite (from C++ to Java) and I simply used the tests even though they told me not to. I was developing very quickly. I asked for concrete examples of correct results, which they sent in spreadsheets. I turned the spreadsheets into unittest.TestCase (without telling them) and used these to test. When we were in user acceptance testing -- and mistakes were found -- I just asked for the spreadsheets with the examples to be reviewed, corrected and expanded to cover the problems found during acceptance test. I turned the corrected spreadsheets into unittest.TestCase (without telling them) and used these to test. No one needs to know in detail why you are successful. Just be successful.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/41941", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/15177/" ] }
41,990
When I first started to learn how to program, real programmers could write assembly in their sleep. Any serious schooling in computer science would include a hefty bit of training and practice in programming using assembly. That has since changed, to the point where I see Computer Science degrees with assembly, if included at all, is relegated to one assignment, and one chapter, for a total of two weeks' work out of 4 years' schooling. C/C++ programming seems to have followed a similar path. I'm no longer surprised to interview university graduates who have not spent more than two weeks programming in C++, and have only read of C in a book somewhere. While the most serious CS degrees still seem to include significant time learning and using one or both of the languages, the trend is clearly towards less enforced C/C++ in school. It's clearly possible to make a career producing good work without ever reading or writing a single line of C or C++ code. Given all of that, is learning the two languages worth the effort? Are they at all required to excel? (beyond the obvious, non-language specific advice, such as "a good selection of languages is probably important for a comprehensive education", and "it's probably a good idea to keep trying out and learning new languages throughout a programmers' career, just to stretch the gray cells")
Joel Spolsky (yeah, that Joel) argued a while back that real tough programmers know how to use harder languages (like C, C++ and Lisp) and their constructs (like pointers and functional features), and that higher-level languages were usually not 'hard' enough to demonstrate your competency. I can understand his point that people knowing C and C++ and that are actually good at it know a lot more about what goes on under the hood than people who, say, program in Ruby (and only in Ruby). I'd say it goes like this: if you know a "hard" language, it's probably a good proof that you're able to program while respecting severe constraints or that you master complex ways of thinking. If you're good at a high level language, you might as well be able to program while respecting severe constraints, but there's no proof of it. I don't think learning C or C++ will damage your brain (some people seem to believe this though). Actually, learning it just to appreciate better higher-level languages might be a good idea.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/41990", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/4091/" ] }
42,036
I am working on a C# .NET Application, where I have a Form with lots of controls. I need to perform computations depending on the values of the controls. Therefore, I need to pass the Form values to a function and inside that function, several helper functions will be called depending on the Control element. Now, I can think of two ways to pass all the Form values: i) Save everything in a Dictionary and pass the Dictionary to the function or ii) Have a class with attributes that corresponds to each of the Form element. Which of these two approaches , or any other, is better?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/42036", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/1560/" ] }
42,110
In Git it's possible to set and enforce a good commit template. Can you recommend (preferably with argumentation) a good commit template / guidelines to enforce in the company?
I use [Abc]: Message. With Add, Mod(ify), Ref(actoring), Fix, Rem(ove) and Rea(dability) as the prefixes, it's easy to extract a changelog from the log. Example: Add: New function to rule the world. Mod: Add women factor in Domination.ruleTheWorld(). Ref: Extract empathy stuff to an abstract class. Fix: RUL-42 or #42 Starvation needs to be initialised before Energy to avoid the nullpointer in People. Rem: freeSpeech is not used anymore. Rea: Removed old TODO and extra space in header. If I have more than one line, I sort them with the most important first.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/42110", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/13873/" ] }
42,200
Possible Duplicate: Version control for independent developers? I've heard statements to the effect of: "Well it's just me working on this project so I don't need to put it under source control" as well as, "There is no need to work version controlled on this project, it's so small". It is my opinion that no matter how small the project is, so long as it's adding value to the client (and they are paying for it too) that we, the developer(s), should version control it; especially since it's company policy. Am I insane, or does my standpoint make sense? Question: Should development work always be version controlled?
Oh wow, yes. I use both SVN and Git and I cannot tell you how many times they've saved my ass. More Git than SVN, but let's not start flamewars here. This is on projects I work on by myself, as well as projects I work on with other people. No excuse not to, really. As a human, I'm basically entitled to do stupid shit all the time. By using version control, I can merrily carry on my way doing awesome things and committing at intervals where it makes sense. When I do something incredibly stupid then I can simply rollback my changes to the last point where I committed. Even with Git, I can roll back specific chunks of changes to a file rather than the whole file. My version-control (ab?)using work flow, as a Rails developer (and yes, I know you use C#, same flow applies. Just sub my git commands for your tfs commands), goes like this for a brand new project: Grab a new feature off Pivotal Tracker and figure out what the hell it's supposed to do. Also known as Client-to-English translation. Create a brand new directory for the project and then immediately: git init git add . git commit -m "Initial setup for [project]" git remote add origin git@github.com:radar/project.git git push origin master I now have a Git repository ready for me to commit to, with a master branch. This master branch should always remain "pure". The tests should always be 100%-passing-no-excuses-or-somebody's-going-to-get-fired-slash-very-badly-injured-did-I-mention-no-excuses on this branch. If the feature is adequately complex enough (taking me longer than an hour or two or if the change is going to be more than a single, sensible commit) I will create my own branch using git checkout -b [feature-name], if not I will work on master. In this branch, I can do whatever the hell I like. master's still going to be "pure" and I can effectively trash the place and then git checkout . to get it all back. It's in this branch that I develop the new feature, making incremental, sensible commits along the way. Made a page where a user can fill in a form and then something to handle that form? That's a commit. Added a new function to a class and tested it? New commit. I may be inclined to push this branch up somewhere so other people can work with me on it, in which case I would git push origin [feature-name] and then they could clone the repository and git checkout origin/[feature-name] -b [feature-name] to get my changes and we could work together on it. When I'm done with the feature, I run the tests on the [feature-name] branch. Then, I can go back into the master branch, make sure everything's still "pure" by running the tests, and then git merge [feature-name] to merge the branch into master. Then I run the tests again to make sure it's still "pure" (remember, no excuses) and finally I push my changes to the master branch on GitHub. Rinse, repeat. Without version control, I would be utterly lost. I would do stupid shit and then spend quite a lot of time manually rolling it back and not being sure if I've got it all or not. Version control is a great tool to prevent stupidity (as is testing, but that's a tangential topic) and I really, really strongly encourage it. No excuses.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/42200", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/320188/" ] }
42,269
OK, I feel stupid asking this - but in Jeff's article: Getting the Interview Phone Screen Right and originally stated in the 5 essential phone screen questions: They shouldn't stare blankly at you when you ask what 2^16 is. It's a special number. They should know it. I've been a developer\software engineer\code monkey\whatever for a little while now, and I don't think I've ever come across this. I mean, I can certainly count binary values, do basic operations on them, etc, etc. But I don't see what is "special" about this value.
(2^16 - 1) or 65535 or 0xFFFF or "64k" is the maximum value of 2 bytes. For a long time CPUs used 16-bit architecture and OSes were likewise based on 16-bit operations and "words". There were 16-bit commands and 16-bit memory addresses. A lot of systems/compilers still use 16 bits for integers. So, (2^16 - 1) is special because it is the largest number that a 16-bit (unsigned) integer can hold and the largest memory address that a 16-bit architecture can access.
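A quick way to see the relationship, sketched here in C# (any language would do):

```csharp
using System;

class PowersOfTwo
{
    static void Main()
    {
        Console.WriteLine(1 << 16);          // 65536  (2^16)
        Console.WriteLine((1 << 16) - 1);    // 65535  (2^16 - 1)
        Console.WriteLine(0xFFFF);           // 65535  (the same value written in hex)
        Console.WriteLine(ushort.MaxValue);  // 65535  (largest unsigned 16-bit integer)
    }
}
```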
{ "source": [ "https://softwareengineering.stackexchange.com/questions/42269", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/5916/" ] }
42,620
This question is extremely subjective and open-ended. It might even sound like something I should just research for myself and make my own decision. But I'd like to put it out there and get some thoughts from others. Long story short - I burned out with the rat race and am on a self-funded sabbatical this year. Much of it is to take a break from the corporate grind and travel around, but I also want to play around with new technologies and do some self-learning projects, to stay up to speed on programming, and well - I just love tinkering with programming, when there's no pressure! Here's the thing: I am a lifetime C/C++/Java programmer. I'm a bit of a squiggly bracket snob since I've been working with this family of languages for my entire programming career. So I'd like to learn a language which isn't so closely syntactically related to this group. What I'm basically looking for is a language which is relatively general purpose, fun to learn, has some new concepts that are different from C++/Java, and has a good community. A secondary consideration is that it has good web development frameworks. A tertiary consideration is that it's not totally academic (read: there are real world jobs out there using it). I've narrowed it down to Ruby or Python. My impression of Ruby is that it is extremely web oriented - that the only real application of it is as a server side scripting language for doing web stuff (mainly Ruby on Rails). I don't have much of an impression of Python at all, except that it seems to have a passionate fan base and appears to be a fairly versatile language. TL;DR and to put it as succinctly as possible: which of these would be better for a C++/Java guy to learn to get some new perspectives on programming? And which is more open and general purpose and applicable to a wider set of applications? I'm leaning towards Ruby at the moment, but I worry to an extent that it looks like it's used as nothing but a server side web language.
Don't let the fact the Ruby rose into the common parlance largely because of Rails (the web application framework) fool you. It is a general-purpose programming language, and you can use it for anything that you can use any other language for. Play around with Ruby and see if you fall in love with it. You either will or you won't. It's kind of like the Grateful Dead's music; you either love it or you can't stand it. Ruby will stretch your brain. In many respects, it is as far from C++/Java as you can get. I come from a C and C# background, and I found Ruby's dynamicness and meta-programming power to be quite intoxicating. That being said, Python is an absolutely outstanding language, and it'll bust you out of your curlybracketness. Why not learn both? I use both on a regular basis: Ruby for programming with Rails and Python for working with Google AppEngine.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/42620", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/5064/" ] }
42,628
What is the recommended workflow to learn HTML5? What tools should I install? What SDK? Where to start? How to test? How to debug? What do I read? I understand that what is often labelled as "HTML5 development" is in fact a mixture of HTML, CSS, JS and more, however I don't believe that bigger projects are developed in Notepad. That is why I am asking you to reveal your tips and tricks about your workflow.
HTML5 is not a single integrated thing. It's a collection of extensions to HTML, some of which are widely-implemented and can be used safely, some of which no-one implements yet, and a whole lot in-between. If you try to treat HTML5 as a coherent single development platform and ‘learn it all’ you will have a really difficult time. Instead what you need to learn is the web as a whole: basic HTML, CSS, JavaScript, the Core DOM, the HTML DOM, the basic Browser Object Model. Then you can add features of the New Web as and where you need them, and browser support allows: HTML5 extensions, CSS3 properties, canvas drawing, websockets, the other DOM and BOM extensions spun off from the HTML5 work... The feature set of the web is constantly evolving and there is not one single point of reference. W3Schools (which is nothing to do with W3C incidentally) tries, but it's chock-full of errors. Don't trust what it says as gospel. You may need to refer to the definitive HTML4, CSS2, DOM Core and DOM HTML specs to make sure. You will also probably want to look at MDC's DOM reference and MSDN's DOM reference for what Firefox and IE support. The HTML5 spec contains a lot of more up-to-date DOM stuff too, as well as the new HTML extensions, but it is a long and unwieldy document, quite hard to use even by the standards of standards documents. Although not nearly as bad as the impenetrable ECMAScript spec. (Thankfully you will probably be familiar with a lot of that already if you're used to working with ActionScript.) You don't need an SDK or IDE to develop HTML/CSS/JS. You can use an IDE if you like, but I'm quite happy doing everything in my favourite text editor. There are no build/compile steps to worry about, you just save your file and hit reload, job done. Most modern web browsers have a debugger and other development tools either built in (eg IE8) or readily available as extensions (eg Firebug).
{ "source": [ "https://softwareengineering.stackexchange.com/questions/42628", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/15331/" ] }
42,767
I am not a native English speaker. Is there some trick in this name? What does it mean or imply? PS. I don't know whether this is the right SE site to post this question, but if it is not, could you point me to the right one?
From transmogrify - "transform, especially in a surprising or magical manner"
{ "source": [ "https://softwareengineering.stackexchange.com/questions/42767", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/3708/" ] }
42,817
Are there any disadvantages in tying my application to Spring framework? I'm not talking about bugs or issues like that, if any. I'm talking about strategic, architectural things that will influence my application lifecycle. Should I prefer Spring over Java EE core features supported by EE container? What are the advantages?
Since other posts here mention the upside, I'll mention the negative aspects of Spring. Even with these negatives, Spring is omnipresent in its niche, reliable, and works as advertised. So, on to the negatives: Enormous: I wouldn't like to put jars with 3000 classes into my little hobby project. There are some descriptions of it as slower than some other DI frameworks like Guice or Pico. This is anecdotal, and probably not too important. Dealing with some parts of Spring can be frustrating when they're not well-documented parts of the core, and the documentation you do find diverges across the multiple major versions. As a side effect of its size, be prepared to spend joyous hours digging through piles of classes that, while logically named, start to merge together into a gigantic pile of nouns when you're tired ("sure, you just connect your TransactionAwareConnectionFactoryProxy to your UserCredentialsConnectionFactoryAdapter to your... zzzzzzzz"). To be fair, they really are logically named, and it's a reasonable response to the size of the framework, but still. Possibly as a result of this adoption of new pieces, Spring can be slow, at least for me. Not so much with core Spring, but there are connectors for just about everything, and they're not all as well documented as they could be, and that's when you start wading through noun soup. Again, it's really all just in response to its size.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/42817", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/15368/" ] }
42,863
In C#, am I encouraged to use the all-purpose var keyword for every variable declaration? If yes, do I have to mention those special characters for literal values within the variable declaration like the M for decimal in the following statement: var myDecimal = 14.5M; If it makes a difference, I'm trying to do some web development with C#.
There has been a lot of dispute over the use of var. My general rules are the following. When the type is obvious such as when the right hand of the assignment is a constructor, use var. When the type is complex to write, such as a LINQ query (the reason for var in the first place) use var. For ambivalent types (your Decimal being an example) where you want to make sure that your variable is correctly typed, spell it out. Anonymous types have to use var. In all other cases spell out the type. Basically, the goal is to make it easier to read the code. If you feel that var suffices because the assignment is obvious, use var. Use the full type name as a hint for the reader when you feel it's necessary.
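A short sketch of those rules in code (the Customer type and sample data are made up for illustration):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Customer
{
    public string Name;
    public int Age;
}

class VarExamples
{
    static void Main()
    {
        // 1. The type is obvious from the constructor on the right-hand side, so var reads fine.
        var customers = new List<Customer>
        {
            new Customer { Name = "Ada", Age = 36 },
            new Customer { Name = "Tim", Age = 15 }
        };

        // 2. LINQ queries, especially with anonymous result types, are the reason var exists;
        //    here the type could not be written out even if you wanted to.
        var adults = customers.Where(c => c.Age >= 18)
                              .Select(c => new { c.Name, c.Age });

        // 3. Ambiguous literals: spell out the type (and keep the suffix) so the intent is clear.
        decimal price = 14.5M;              // rather than: var price = 14.5M;

        // 4. Everywhere else, the explicit type serves as documentation for the reader.
        IEnumerable<string> names = customers.Select(c => c.Name);

        Console.WriteLine(string.Join(", ", names) + " - " + adults.Count() + " adult(s) - " + price);
    }
}
```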
{ "source": [ "https://softwareengineering.stackexchange.com/questions/42863", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/6651/" ] }
42,895
It seems that off-by-one errors are one of the most (if not the most) common programming errors (see https://softwareengineering.stackexchange.com/questions/109/what-are-common-mistakes-in-coding , and conventional wisdom). What is the reason these are so common, is it something to do with how the human brain works? What can we do to prevent falling prey to the off by one errors?
It sort of is something to do with how the human brain works. We're wired to be "good enough" for tasks that don't usually require engineering-grade precision. There's a reason why the cases we have the most trouble dealing with are called "edge" cases. Probably the best way to avoid off-by-one errors is encapsulation. For example, instead of using a for loop that iterates a collection by index (from 0 to count - 1), use a for-each style loop with all the logic of where to stop built into the enumerator. That way you only have to get the bounds right once, when writing the enumerator, instead of every time you loop over the collection.
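For example (in C#, though the same applies in most languages), compare the two loops below: the first repeats the boundary logic at the call site, the second pushes it into the enumerator.

```csharp
using System;
using System.Collections.Generic;

class OffByOneDemo
{
    static void Main()
    {
        var names = new List<string> { "Ada", "Grace", "Edsger" };

        // Index-based loop: the stopping condition is written out at every call site,
        // so "<= names.Count" or "names.Count - 1" slips are easy to make.
        for (int i = 0; i < names.Count; i++)
        {
            Console.WriteLine(names[i]);
        }

        // foreach: the enumerator decides where to stop, so there is no boundary
        // expression left at the call site to get wrong.
        foreach (var name in names)
        {
            Console.WriteLine(name);
        }
    }
}
```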
{ "source": [ "https://softwareengineering.stackexchange.com/questions/42895", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/3542/" ] }
42,938
You ever try to implement something simple but for some strange reason it doesn't work. So you try a possible solution but then something else doesn't work. You keep trying different workarounds but every time something different isn't working. Every time you get one step closer you also get one (or more) steps farther from solving this problem, and it's now been 3 hours when this should have taken you 10 minutes. And it still isn't solved. There is no one in your company who can help, and you are about to put your fist through your screen. At this point you are so frustrated you can no longer think about the problem clearly. What should you do at this point? Or what can you do to avoid reaching this point?
Although this is a real problem, it isn't specific to programming. However, it is IMHO so important that it deserves a place on this forum. My suggestions: have a break . Go for a walk, meditate, sleep, do physical activity* - do something completely different to allow your brain to relax and get out of the mental rut , while letting your subconscious work on the problem in peace. Usually it delivers results surprisingly fast - it just needs to let you know about it. But while your conscious mind is desperately repeating the same cycles of thoughts over and over again, it won't be able to listen to anything else. what can you do to avoid reaching this point? Relaxation and mindfulness techniques are a key to get over the stress reactions and allow your mind to focus clearly. And practicing these really pays off. When someone is experienced in these, (s)he can already notice the stress level rising before the frustration could take over . Then one can interrupt the cycle of thoughts e.g. by taking a few deep breaths, or doing a couple of minutes relaxation practice. This may be all what is needed at that point. * kiss your partner, caress your pet - suggestions from my wife :-)
{ "source": [ "https://softwareengineering.stackexchange.com/questions/42938", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/1785/" ] }
43,067
Disclaimer : I know perfectly well the semantics of prefix and postfix increment. So please don't explain to me how they work. Reading questions on stack overflow, I cannot help but notice that programmers get confused by the postfix increment operator over and over and over again. From this the following question arises: is there any use case where postfix increment provides a real benefit in terms of code quality? Let me clarify my question with an example. Here is a super-terse implementation of strcpy : while (*dst++ = *src++); But that's not exactly the most self-documenting code in my book (and it produces two annoying warnings on sane compilers). So what's wrong with the following alternative? while (*dst = *src) { ++src; ++dst; } We can then get rid of the confusing assignment in the condition and get completely warning-free code: while (*src != '\0') { *dst = *src; ++src; ++dst; } *dst = '\0'; (Yes I know, src and dst will have different ending values in these alternative solutions, but since strcpy immediately returns after the loop, it does not matter in this case.) It seems the purpose of postfix increment is to make code as terse as possible. I simply fail to see how this is something we should strive for. If this was originally about performance, is it still relevant today?
While it did once have some performance implications, I think the real reason is for expressing your intent cleanly. The real question is whether something while (*d++=*s++); expresses intent clearly or not. IMO, it does, and I find the alternatives you offer less clear -- but that may (easily) be a result of having spent decades becoming accustomed to how things are done. Having learned C from K&R (because there were almost no other books on C at the time) probably helps too. To an extent, it's true that terseness was valued to a much greater degree in older code. Personally, I think this was largely a good thing -- understanding a few lines of code is usually fairly trivial; what's difficult is understanding large chunks of code. Tests and studies have shown repeatedly, that fitting all the code on screen at once is a major factor in understanding the code. As screens expand, this seems to remain true, so keeping code (reasonably) terse remains valuable. Of course it's possible to go overboard, but I don't think this is. Specifically, I think it's going overboard when understanding a single line of code becomes extremely difficult or time consuming -- specifically, when understanding fewer lines of code consumes more effort than understanding more lines. That's frequent in Lisp and APL, but doesn't seem (at least to me) to be the case here. I'm less concerned about compiler warnings -- it's my experience that many compilers emit utterly ridiculous warnings on a fairly regular basis. While I certainly think people should understand their code (and any warnings it might produce), decent code that happens to trigger a warning in some compiler is not necessarily wrong. Admittedly, beginners don't always know what they can safely ignore, but we don't stay beginners forever, and don't need to code like we are either.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/43067", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/3684/" ] }
43,081
I know basic stuff like, what are beans, jsp, servlet, jsf and how this stuff should work together. I know how to make basic jsp page with database query for example. Now I need to know what is the best path to learn all this stuff. My plan is to learn in this order: jsp (including persistance and JSTL) servlets + beans jsf The jump to frameworks (hibernate, struts, spring, etc) Also I'm not exactly sure about JSF, is it a must to make great pages or is it just a convenience to know?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/43081", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/11532/" ] }
43,151
Should you sacrifice code readability with how efficient code is? e.g. 3 lines of code into 1 line. I read in Code Craft by Pete Goodliffe that readability is key. Your thoughts?
"Fewer lines" isn't always the same thing as "more efficient". I assume you mean "Should a program be made shorter at the expense of readability". Programs must be written for people to read, and only incidentally for machines to execute. -Abelson & Sussman, Structure and Interpretation of Computer Programs In general, I think it's more important that a program be easily understood than for it to be short. I should note though, that making a program shorter often also makes it more readable (there's the obvious threshold you get to when your code starts looking like line noise, but up to that point, expressing something more succinctly seems to make it clearer). There are specific exceptions (like your personal shell scripts, or one-of data munging code) that no one will ever need to maintain, and only you will ever need to read. In that situation, it's probably ok to sacrifice some readability for expedience as long as you can still understand it.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/43151", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/14954/" ] }
43,329
So it's obvious that a string of things is a sequence of things, and so a sequence of characters/bytes/etc. might as well be called a string. But who first called them strings? And when? And in what context such that it stuck around? I've always wondered about this.
Can we get a real citation before Hugo's 1963 reference? Yes. John McCarthy used the word "string" in Recursive functions of symbolic expressions and their computation by machine, Part I , from April 1960. For atomic symbols, we shall use strings of capital Latin letters and digits… but more usefully for this question, a reference to a string as a datatype: Any string of admitted characters [is] an L-expression. That's not a great example. By saying "a string of characters", McCarthy is not using "string" in the specialised sense programmers mean it today. You can easily say "a string" to a Java programmer and they'll know that the "of characters" part is implicit: McCarthy's use doesn't demonstrate this feature. Let's try to push back to the 1950s, to see whether McCarthy was playing it safe or whether the term genuinely didn't exist then. LISP probably isn't going to help much here, as it's based on a mathematical calculus so McCarthy's string functions may have been the first application of the idea to string processing. An important string-processing system of the 1960s was 1964's SNOBOL, A String Manipulation Language . This cites McCarthy's paper above, but also discusses COMIT and SCL. The work they cite on SCL is unpublished: an unfortunate dead end. COMIT is easier to track down. The Art of Computer Programming (Volume 1, third edition, p.461) tells us that V. H. Yngve wrote a 1963 CACM article on it. But I'm looking for the earliest use of "string", so I'll do an author search for earlier publications. The first I can find is A Programming Language for Mechanical Translation , from July 1958. This only contains one use of the word "string": Each continuous string of letters between punctuation marks or spaces is looked up in the dictionary. Again, this use is like McCarthy's: this is not evidence for "string" being used in its present day sense. Looking at the paper in detail, we see that the data structure is a "line" on a card (allowing for continuations for longer "lines"). OK, we'll move forward in COMIT's history and see what we can get. The first useful reference is The COMIT system for mechanical translation , from the proceedings of a June, 1959 conference. If we want to replace D SIN(F) by COS(F) D (F), where F is unrestricted and may be any arbitrary sequence of constituents, we use the notation $ to stand for this string. This seems more akin to the way we use it today: "string" stands alone and as a bonus has a recognisable special symbol: the dollar sign is still used in some BASIC flavours to signify a string variable. From around this time, the word "string" also appears many times in A command language for handling strings of symbols by Perlis and Smith from the ACM '58 Proceedings, and once in The Share 709 System: Machine Implementation of Symbolic Programming by Boehm and Steel. Searching the ACM digital library for 'string' in the early 1960s yields 62 results, including titles like "String handling in ALGOL", "String Manipulation in the New Language" and "A list-type storage technique for alphanumeric information". It seems that the idea has become entrenched by then. I would argue that "string" in its computer science jargon sense as an ordered list of characters became common over a couple of years around 1960. Before that, authors like Yngwe and McCarthy could say "string of characters" and be sure that they were understood, but could not use "string" as a bare word in the sense it's used today. 
The shorthand was probably introduced to the computing mainstream by the Perlis and Smith paper. It hasn't been widely cited, but one important citation is Syntactic and semantic augments to ALGOL by Joseph W. Smith in April 1960 (in the same issue of CACM as McCarthy's description of LISP). From that paper: The purpose of this paper is to propose a set of syntactic and semantic augments to ALGOL. The proposed extensions are designed to facilitate the description of "string" manipulation in that language; they do not constitute a comprehensive language for symbol manipulation. To me, this constitutes evidence of "string" meaning a datatype for symbolic computation being affirmed in the academic lexicon, and importantly introduced to the tools used for commercial computation. Incidentally, Programming Languages: History and Future by Jean Sammet (1972) suggests that COMIT and SNOBOL were the progenitors of string manipulation, so I'm fairly confident that there won't be earlier examples.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/43329", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/15510/" ] }
43,347
I have a class that will read from Excel (C# and .Net 4), and in that class I have a background worker that will load the data from Excel while the UI remains responsive. My question is as follows: Is it bad design to have a background worker in a class? Should I create my class without it and use a background worker to operate on that class? I can't really see any issues with creating my class this way, but then again I am a newbie, so I figured I would make sure before I continue on. I hope that this question is relevant here; I don't think it should be on Stack Overflow since my code works, and this is just a design issue.
Should I create my class without it and use a background worker to operate on that class? Yes, you should. And I will tell you why: you are violating the Single Responsibility Principle. By tightly coupling the class that accesses the Excel doc with how it accesses the Excel doc, you eliminate the ability for the "controller" code (any code that uses this) to do it a different way. How different, you may ask? What if the controller code has two operations that take a long time but wants them to run sequentially? If you allow the controller to handle the threading, it can do both long-running tasks together in one thread. What if you want to access the Excel doc from a non-UI context and don't need it to be threaded? By moving the responsibility for threading out to the caller, you allow more flexibility in your code, making it more reusable.
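A minimal sketch of what that separation might look like. The class and member names here are hypothetical, and the Excel-reading body is stubbed out rather than calling a real interop/OleDb API:

```csharp
using System.Collections.Generic;
using System.ComponentModel;

// The reader knows nothing about threads or the UI; it only reads data.
public class ExcelReader
{
    public IList<string[]> ReadRows(string path)
    {
        // Placeholder for the real Excel-reading logic (interop, OleDb, etc.).
        return new List<string[]> { new[] { "cell A1", "cell B1" } };
    }
}

// The caller decides whether (and how) to run the read in the background.
public class ImportController
{
    private readonly BackgroundWorker _worker = new BackgroundWorker();

    public ImportController()
    {
        _worker.DoWork += (s, e) => e.Result = new ExcelReader().ReadRows((string)e.Argument);
        _worker.RunWorkerCompleted += (s, e) =>
        {
            var rows = (IList<string[]>)e.Result;
            // Hand the rows to the UI here, e.g. bind them to a grid.
            System.Console.WriteLine(rows.Count + " rows loaded.");
        };
    }

    public void ImportAsync(string path)
    {
        _worker.RunWorkerAsync(path);
    }
}
```

The same ExcelReader can now also be called synchronously from a console app or a unit test, which is exactly the flexibility the answer describes.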
{ "source": [ "https://softwareengineering.stackexchange.com/questions/43347", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/7427/" ] }
43,356
DeMarco and Lister (Peopleware) suggest you create a "cult of quality" within your programming team. Frustratingly, they don't suggest how you go about doing that! Anyone got any thoughts on how to accomplish this?
My experience is that development teams (but in general, any team) consist of 3 types of people: those with a built-in drive for quality, those who are only in it for the money (beer / girls / whatever) and couldn't care less however you try to motivate them, the "mediocre" ones (for lack of a better word). The last group is the largest, and they tend to follow the ruling party. If there are enough quality people in the team, they can pull the majority with themselves, creating a strong upward spiral in team spirit and motivation. However, if there are too many slackers, they can easily create the opposite effect, a spiral of death. So the foremost task for the manager is to choose and keep the right people and get rid of the bad ones asap . Not the "mediocre" ones though - they may be influenced to start improving, to lend support for others' good ideas, and some of them eventually might even become positive trend setters on their own right. [Update2] reflecting on Alb's answer : IMO there's no need for the quality developers to be in clear majority within the team (although it doesn't hurt :-). There is a "trend setting threshold" , above which the views and behaviour of a subgroup can quickly become the "mainstream" within a community , so other people take notice and start to follow. You can see this in work in the larger society all the time (e.g. (non)smoking habits, health & diets, pop fads, organic food). My very rough estimation is that it can be somewhere around 25-30%, but it depends on a lot of factors. This is where the bad people can hurt a lot. Even a couple of bad people within your team can raise that threshold significantly. [/Update2] Of course it is not always possible to hire enough of the top guys. So when the first faction is not strong enough to drive things on their own, management needs to help them. A couple of thoughts on this: I think that Scrum has a good idea for this with product demos. Demonstrating the feature you implemented in front of an audience consisting not only of your teammates but possibly developers from other teams, management, even users of the app can be a huge source of pride and also a strong factor to help the team jell. Another thing is for management to listen in earnest to the dev team regarding quality. DeMarco and Lister even mention that there are companies/departments where dev teams have a veto over what can go to production. If they feel the app is not yet ready for prime time, they can postpone the release regardless of what management would like. Now that is tough for management, but I can imagine that it builds team spirit and strongly communicates the message that quality is really important here, not just on the level of words. This leads to the next point: to create a "cult of quality", management must thoroughly understand what most experienced developers already know: that quality is not an afterthought - it has to be built into the product from the start. So people should be encouraged to (and rewarded for!) thinking about long term maintainability, striving for the good solutions, instead of the quick ones. Update @Machado in his comment gave a new twist to the question (to me at least): What I, as a team-member, not a manager, can do to improve my team's code quality ? A few thoughts: Keep learning and spread the knowledge around to anyone who listens. Learn and use the best practices within your areas of expertise. Take pride in your work . 
These two will almost naturally let you become a positive role model for others - especially newcomers and juniors -; be conscious about this, and leverage your role for the benefit of the whole team. The best way to influence others is by positive example. Look not only at the code, but at the whole process of developing software; keep asking questions and providing feedback to optimize the development process . And last but not least: find a place where you can be a "top guy" . If you are in the "mediocre" group right now, strive to develop yourself - hopefully the above ideas help in that. But if you happen to be in the "lower strata" in your current team, I recommend you to analyse the reasons. What is it that demotivates you? Bad working conditions? Teammates? Management? Type of work? And what is it that excites and interests you? You may need to talk to your coworkers and/or boss about it. Or you may need to look for a better job - or even a new profession - where you can start shining. It is really not worth spending a significant part of one's life with unsatisfying or depressing activity. It may also be that you are forced to continue your current, suboptimal job due to external factors (lack of better job opportunities, need to pay the bills etc.) - it happens every now and then. Even in this case, try to make the best out of it. Producing quality work (as much as circumstances allow it) is a reward in itself, which helps keep your self esteem up, and keep yourself sane and open in the long run. Thus when an opportunity for something better appears, you are better prepared to take it.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/43356", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/8400/" ] }
43,409
My friend is a project manager for a software company. The most frustrating thing for him is that his engineers frequently leave their jobs. The company works hard to recruit new engineers, transfer projects, and keep a stable quality product. When people leave, it drives my friend crazy. These engineers are quite young and ambitious, and they want higher salaries and better positions. The big boss only thinks about it in financial terms, and his theory is that “three newbies are always better than one veteran” (which, as an experienced engineer, I know is wrong). My friend hates that theory. Any advice for him?
When an organization has higher-than-usual turnover, there's ALWAYS a reason and it is ALWAYS management. If the only way an engineer can get a raise is to change jobs, he'll do it. If the only way an engineer can get better working conditions is to change jobs, he'll do it. If the only way an engineer can see his wife and kids occasionally is to change jobs, he'll do it. Tell your friend to LOOK IN THE MIRROR. The answers he seeks will be found there.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/43409", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/12697/" ] }
43,460
Managing other programmers while you are yourself a part of the programming work force. It's a very common scheme, at least in the companies I worked for. Can you be a good programmer or a good manager if you do both at the same time? I'm questioning the effectiveness of an individual that has to be in two very different roles, requiring very different skills, environment, concentration, organization, etc. UPDATE : my question includes management of the company (which is my case), not specifically team management. But I'm interested in both of course.
It depends on the amount and type of programming you are required to do and the amount and type of managerial duties you have to perform. Being a manager means lots of interruptions, changes of tack and things like meetings etc. If your programming is "limited" to small pieces of non urgent work then you can fit these in around your managerial duties. If you need to spend significant amounts of "quality" time on a programming task then you aren't going to get that time due to your managerial responsibilities. If your team is large and/or complex then you are going to need to spend more time managing than you would if it were a small team dedicated to one one or two products/projects. You'll find that you just don't have the time to do any meaningful programming - even on small tasks. In a previous job I had this role and it worked for me because I kept my programming tasks small. It actually worked to our advantage. Firstly, I could assess all the requests that came in and if they were small add them to my queue (which was always short) or get back to the client (in this case another manager) with a more accurate timescale for when the work would be done. Secondly it meant that the developers on the team weren't getting constantly pulled off their current work to fix minor bugs or do small enhancements. Thirdly, the clients were happy as their urgent problems were fixed fairly rapidly. It kept me in touch with the code base so I could have meaningful conversations with my team about problems and with my managers and clients about timescales without having to get the team involved all the time.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/43460", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/-1/" ] }
43,528
I'm graduating with a Computer Science degree but I see websites like Stack Overflow and search engines like Google and don't know where I'd even begin to write something like that. During one summer I did have the opportunity to work as an iPhone developer, but I felt like I was mostly gluing together libraries that other people had written, with little understanding of the mechanics happening beneath the hood. I'm trying to improve my knowledge by studying algorithms, but it is a long and painful process. I find algorithms difficult, and at the rate I am learning, a decade will have passed before I master the material in the book. Given my current situation, I've spent a month looking for work, but my skills (C, Python, Objective-C) are relatively shallow and are not so desirable in the local market, where C#, Java, and web development are much higher in demand. That is not to say that C and Python opportunities do not exist, but they tend to demand 3+ years of experience I do not have. My GPA is OK (3.0) but it's not high enough to apply to the large companies like IBM or return for graduate studies. Basically I'm graduating with a Computer Science degree but I don't feel like I've learned how to program. I thought that joining a company and programming full-time would give me a chance to develop my skills and learn from those more experienced than myself, but I'm struggling to find work and am starting to get really frustrated. I am going to cast my net wider and look beyond the city I've grown up in, but what have other people in a similar situation tried to do? I've worked hard but don't have the confidence to go out on my own and write my own app. (That is, become an indie developer in the iPhone app market.) If nothing turns up I will need to consider upgrading and learning more popular skills or try something marginally related like IT, but given all the effort I've put in, that feels like copping out.
The best way to learn to program is to write programs. Two suggestions:
- develop a game
- develop a web site

Algorithms, while useful and worth understanding, actually play second fiddle to software design. TDD / Design Patterns / Architecture / Refactoring / Unit Testing / the process of putting code together / etc. tend to be far more important skills. Also, it's far better to do this in your own time. Don't wait to work this stuff out on the job. I find the people who tend to do better are the ones who early in their careers put the effort in to develop their skills in their own time, usually because they are genuinely passionate about software development. One more thing is to "read books and samples" and don't be ashamed to ask. If you want to learn you should ask :)
{ "source": [ "https://softwareengineering.stackexchange.com/questions/43528", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/-1/" ] }
43,729
When learning new languages, is it better to commit yourself to one language 100% and wait until you've "mastered" it to move on to another, or to work on learning different languages at the same time? If it matters, I'm currently learning C++ and I'd like to know Java and Python as well.
Not only do I not see a problem with learning multiple languages at one time, I think it can directly benefit you, in that something in one language may clear up a problem area you have in another language. For example, my main language is C#, and when dealing with LINQ I used the SQL-style query syntax (which is fine, I'm not knocking it at all, but it just didn't seem like the "cool way" to do it). I stayed away from Lambdas because, to be honest, I didn't understand them and the C# documentation that I saw didn't provide a clear definition (for me). Then I started to pick up some books on F# and started learning that, which gave me the "ah ha!" moment of understanding what Lambdas are. I have found that as I have gained more and more experience with other languages (and frameworks), I have become better at C# (and ASP.Net). That is why I believe that learning multiple languages at one time isn't a bad thing at all!
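To make the comparison concrete, here is a small, hypothetical example of the two LINQ styles the answer is talking about (the data and variable names are made up):

```csharp
using System;
using System.Linq;

class Example
{
    static void Main()
    {
        var words = new[] { "alpha", "beta", "gamma", "delta" };

        // SQL-style query syntax.
        var longWordsQuery = from w in words
                             where w.Length > 4
                             orderby w
                             select w.ToUpper();

        // Equivalent method syntax using lambdas.
        var longWordsLambda = words
            .Where(w => w.Length > 4)
            .OrderBy(w => w)
            .Select(w => w.ToUpper());

        Console.WriteLine(string.Join(", ", longWordsQuery));
        Console.WriteLine(string.Join(", ", longWordsLambda));
    }
}
```

Both versions compile down to the same method calls; the query syntax is just sugar over the lambda-based one, which is the "ah ha!" connection the answer is describing.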
{ "source": [ "https://softwareengineering.stackexchange.com/questions/43729", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/8810/" ] }
43,948
This is a question that I often ask myself when working with developers. I've worked at four companies so far and I've become aware of a lack of attention to keeping code clean and dealing with technical debt that hinders future progress in a software app. For example, the first company I worked for had written a database from scratch rather than use something like MySQL and that created hell for the team when refactoring or extending the application. I've always tried to be honest and clear with my manager when he discusses projections, but management doesn't seem interested in fixing what's already there and it's horrible to see the impact it has on team morale. What are your thoughts on the best way to tackle this problem? What I've seen is people packing up and leaving. The company then becomes a revolving door with developers coming in and out and making the code worse. How do you communicate this to management to get them interested in sorting out technical debt ?
When I met with my boss to discuss this, he said I should include refactoring in all my estimates. He said it's not a problem he wants to think about. Instead, I should handle it. This isn't a problem that management in general wants to think about. They aren't the engineers, you are. Just make this an unspoken part of all of your estimates, and you'll find that the technical debt decreases. It will never be perfect though. Technical debt, like credit card debt, is an investment in getting customers faster and gaining market share over your competitors faster. Like credit, if managed properly, it can make you quite successful.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/43948", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/15455/" ] }
44,145
I'd like to throw this question out there to see where the happy medium is. I'm going to admit that in my last 12 months, I picked up TDD and a lot of the Agile values in software development. I was so overwhelmed with how much better my development of software became that I would never drop them out of principle. Until... I was offered a contracting role that doubled my take-home pay for the year. The company I joined didn't follow any specific methodology, the team hadn't heard of anything like code smells, SOLID, etc., and I certainly wasn't going to get away with spending time doing TDD if the team had never even seen unit testing in practice. Am I a sell-out? No, not completely... Code will always be written "cleanly" (as per Uncle Bob's teachings) and the principles of SOLID will always be applied to the code that I write as they are needed. Testing was dropped for me though; the company couldn't afford to have such an unknown handed to the team who, quite frankly, even if I did create test frameworks, would never use/maintain the test framework correctly. Using that as an example, at what point would you say a developer should never drop his craftsmanship principles for the sake of money/other benefits to them personally? I understand that this can be a very personal opinion on how concerned one is with their own needs, business needs, and the sake of craftsmanship etc. But one can consider that, for example, testing can be dropped if the company decided they would rather have a test team than understand unit testing in programming; would that be something you could forgive yourself for, like I did? So given that there is something you would drop, there usually should be an equal cost in the business that makes up for what you drop - hopefully, unless of course you are pretty much out for lining your own pockets and not community/social collaborating ;). Double your money, go back to RAD? Or walk on, and look for someone doing Agile, and never look back...
Since I got addicted to unit tests more than 10 years ago, in the majority of my workplaces I was the first who had ever heard about them. Nevertheless I kept writing my little unit tests whenever I could, and included the cost of unit testing in my task estimates. Whenever someone asked about my coding habits, I told them what I was doing and why it worked for me. Usually at least some of the people were interested, and eventually I got to give presentations on the topic and mentored people to write their first unit tests. You don't need to start convincing people about the agile way the first day at your new workplace. Just follow the principles in your own work as much as you can. If you do it well, you will deliver better code. If your coworkers and/or management notice it, they will ask how you do it. Then you can tell them.

Update: Most of the seasoned developers (and managers) have seen trends and fads come and go, so they do not get excited by the latest buzzwords. However, if you can demonstrate that a certain approach (tool, way of thinking) really works in practice, in the actual project, the ones who care about their craft will almost surely sit up and listen. But if you have no such people in your team, maybe it is time to look for a better place...
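For readers who have never seen one, the kind of "little unit test" the answer mentions might look something like this. It is an illustrative NUnit-style sketch; the class under test is invented for the example:

```csharp
using NUnit.Framework;

// A made-up class under test, just to show the shape of a small unit test.
public class PriceCalculator
{
    public decimal ApplyDiscount(decimal price, decimal percent)
    {
        return price - (price * percent / 100m);
    }
}

[TestFixture]
public class PriceCalculatorTests
{
    [Test]
    public void ApplyDiscount_TenPercent_ReducesPriceByTenth()
    {
        var calculator = new PriceCalculator();

        var result = calculator.ApplyDiscount(200m, 10m);

        Assert.AreEqual(180m, result);
    }
}
```

A test this small costs minutes to write, which is why it is easy to fold into task estimates the way the answer suggests.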
{ "source": [ "https://softwareengineering.stackexchange.com/questions/44145", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/15177/" ] }
44,160
I recently started as a junior developer. As well as being one of the least experienced people on the team, I'm also a woman, which comes with all sorts of its own challenges working in a male-dominated environment. I've been having problems lately because I feel like I am getting too much unwarranted pedantic criticism on my work. Let me give you an example of what happened recently. Team lead was too busy to push in some branches I made, so he didn't get to them until the weekend. I checked my mail, not really meaning to do any work, and found that my two branches had been rejected on the basis of variable names, making error messages more descriptive, and moving some values to the config file. I don't feel that rejecting my branch on this basis is useful. Lots of people were working over the weekend, and I had never said that I would be working. Effectively, some people were probably blocked because I didn't have time to make the changes and resubmit. We are working on a project that is very time-sensitive, and it seems to me that it's not helpful to outright reject code based on things that are transparent to the client. I may be wrong, but it seems like these kinds of things should be handled in patch type commits when I have time. Now, I can see that in some environments, this would be the norm. However, the criticism doesn't seem equally distributed, which is what leads to my next problem. The basis of most of these problems was due to the fact that I was in a codebase that someone else had written and was trying to be minimally invasive. I was mimicking the variable names used elsewhere in the file. When I stated this, I was bluntly told, "Don't mimic others, just do what's right." This is perhaps the least useful thing I could have been told. If the code that is already checked in is unacceptable, how am I supposed to tell what is right and what is wrong? If the basis of the confusion was coming from the underlying code, I don't think it's my responsibility to spend hours refactoring a whole file that someone else wrote (and works perfectly well), potentially introducing new bugs etc. I'm feeling really singled out and frustrated in this situation. I've gotten a lot better about following the standards that are expected, and I feel frustrated that, for example, when I refactor a piece of code to ADD error checking that was previously missing, I'm only told that I didn't make the errors verbose enough (and the branch was rejected on this basis). What if I had never added it to begin with? How did it get into the code to begin with if it was so wrong? This is why I feel so singled out: I constantly run into this existing problematic code, that I either mimic or refactor. When I mimic it, it's "wrong", and if I refactor it, I'm chided for not doing enough (and if I go all the way, introducing bugs, etc). Again, if this is such a problem, I don't understand how any code gets into the codebase, and why it becomes my responsibility when it was written by someone else, who apparently didn't have their code reviewed. Anyway, how do I deal with this? Please remember that I said at the top that I'm a woman, and I'm sure these guys don't usually have to worry about decorum when they're reviewing other guys' code, but honestly that doesn't work for me, and it's causing me to be less productive. I'm worried that if I talk to my manager about it, he'll think I can't handled the environment, etc.
There is a chance that you're being singled out as a woman, but it's also possible that you're just a junior developer and new to the job. Error-checking and expressive messages are important. If you're going to add something to the code, make sure it's right and up to the team's standards. Similarly, if you're modifying someone else's code, try to improve it where possible -- don't go off rewriting the whole thing, but do try to leave it a little cleaner than you found it. Is there a written version of the coding standards that your team follows? If not, it might be a good idea to write it all down. You can spearhead the effort by writing down the mistakes you make and forming them into a checklist that you can refer to before submitting your changes for review. As a side effect, you can use that written standard to appeal future rejections if they contradict it. It sounds like there may be some lack of understanding between you and the team lead. It might be helpful for you to ask for a one-on-one meeting with him and discuss what you can do to improve. You can lead in with something like "I feel like I'm still missing a lot of nuances of what I should be doing. As a junior developer I want to grow and improve. Can you help me get there?" and see what happens.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/44160", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/15859/" ] }
44,177
Looking back at my career and life as a programmer, there were plenty of different ways I improved my programming skills - reading code, writing code, reading books, listening to podcasts, watching screencasts and more. My question is: What is the most effective thing you have done that improved your programming skills? What would you recommend to others that want to improve? I do expect varied answers here and no single "one size fits all" answer - I would like to know what worked for different people.
In no specific order...
- Working with people far smarter than myself
- Always listening to what others have to say, regardless of whether they're junior, intermediate, senior or guru. Job title doesn't mean anything.
- Learning other frameworks/languages, seeing how they do things, and comparing that to stuff that I already know
- Reading about patterns, best practices, and then examining my old stuff and applying those patterns where necessary
- Pair programming
- Disagreeing with everything Joel says. ;)
{ "source": [ "https://softwareengineering.stackexchange.com/questions/44177", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/4767/" ] }
44,666
There are a lot of people who have been working at the same company for more than 10 years. What is the motivation that makes you stick with your current job?
I've been with the same company for eight of the past ten years. My reasons for staying are (in no particular order):
- Location - The office is only a couple miles away from my home. This makes the commute time relatively short. When the weather is good, I can bike there in a short amount of time.
- Pay - The pay is decent.
- Variety - I get to work on, debug, and develop for a variety of areas in the flagship product. As such, I get to learn something new.
- Telecommuting - Though the office is not far away, my team and manager are. I can work (quite effectively) from home without going in to the office. This helps me spend more time with my family.
- Specialization - I've invested around ten years specializing in my field. There are not that many local players in it.
- Passion - I love my job and working in my field.
- Co-workers - My co-workers are great! Knowledgeable, friendly, able and fun. My manager is thrilled with my work, and lets me know. Recognition goes a long way.
- Hours - Flex time.
- Risk averseness - I would be dishonest if I did not include this. All family responsibilities have been on my shoulders alone for years now--income, kids, caring for the sick, .... These demand vast amounts of my time and resources. To continue to provide for my family without interruption requires a job with the right amount of flexibility (which my present one does).

Incidentally, being awesome at my job helps create the conditions that allow that flexibility.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/44666", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/6873/" ] }
44,731
Just went over some slides and noticed that the L1 cache (at least on Intel CPUs) distinguishes between a data cache and an instruction cache. I would like to know why this is.
There are actually several reasons. First and probably foremost, the data that's stored in the instruction cache is generally somewhat different than what's stored in the data cache -- along with the instructions themselves, there are annotations for things like where the next instruction starts, to help out the decoders. Some processors (E.g., Netburst, some SPARCs) use a "trace cache", which stores the result of decoding an instruction rather than storing the original instruction in its encoded form. Second, it simplifies circuitry a bit -- the data cache has to deal with reads and writes, but the instruction cache only deals with reads. (This is part of why self-modifying code is so expensive -- instead of directly overwriting the data in the instruction cache, the write goes through the data cache to the L2 cache, and then the line in the instruction cache is invalidated and re-loaded from L2). Third, it increases bandwidth: most modern processors can read data from the instruction cache and the data cache simultaneously. Most also have queues at the "entrance" to the cache, so they can actually do two reads and one write in any given cycle. Fourth, it can save power. While you need to maintain power to the memory cells themselves to maintain their contents, some processors can/do power down some of the associated circuitry (decoders and such) when they're not being used. With separate caches, they can power up these circuits separately for instructions and data, increasing the chances of a circuit remaining un-powered during any given cycle (I'm not sure any x86 processors do this -- AFAIK, it's more of an ARM thing).
{ "source": [ "https://softwareengineering.stackexchange.com/questions/44731", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/8281/" ] }
44,810
I'm really unclear on the difference between C#, C#.NET and the same for ASP and other '.NET' languages. From what I understand, .NET is a library/framework of... things. I think they're essentially access to Windows data such as form elements etc, but that doesn't seem to apply for ASP.NET. In addition, I see people calling themselves '.NET' developers. Does this mean they're fluent in C#, ASP and other languages? Finally, I never see C# without .NET attached. Is C# tied that closely to .NET as to be unusable without it? In summary: what exactly does .NET provide? How does it relate to C# and ASP etc? What does 'a .NET developer' mean? And finally, why do you never see C# without .NET? [As an aside, I realise these are multiple questions, but I think they are very inter-related (or at least that is the impression that browsing Programmers / SO etc has given me)].
I understand your confusion; believe me, I have the same perspective when it comes to the Java world! Anyway, I'll attempt to break your questions down and tackle them one by one, as well as add some other points that will hopefully help clarify what's going on:

- C# and C#.NET are the same thing.
- .NET is, as you say, a library of code that .NET languages can talk to.
- .NET languages come in different flavours such as: C#.NET, VB.NET, Managed C++, F#.
- .NET languages compile to CIL (Common Intermediate Language), which means they all start "talking" the same language and can therefore interoperate.
- ASP.NET is the portion of the .NET library used for making web sites. There are other subsections of ASP.NET, like WebForms (the old way of making web pages) or the rapidly maturing MVC library, that are worth looking at too.
- Forms (old tech) or the new WPF (Windows Presentation Foundation) are the technologies you'd typically use in .NET to create what you know as traditional desktop applications.

One final thing I'd like to finish on is the difference between a library and a framework. In recent years these two terms have been used as though they were synonymous; however, that is not the case. The easiest way I can think of to differentiate the two is:

- A library contains many pieces of functionality that you may pick and choose from, i.e. using one piece of technology doesn't mean you're locked into the rest. This means freedom; however, you will have more work cut out for you.
- A framework, however, very much sets out how you will be working. It provides a workflow that, for better or worse, is hard to change. This means rapid development/prototyping, but if significant changes are made in the future it may be impossible (or very time consuming) to implement them.

Which of the two you want will depend on the project you're working on.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/44810", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/12850/" ] }
44,888
I've been programming in both Unix and Windows environments. Mostly I've worked in Unix, where I've learned the Unix Philosophy, which can be summarized as:
- Write programs that do one thing and do it well.
- Write programs to work together.
- Write programs to handle text streams, because that is a universal interface.

There seems to be a clear difference in programming cultures between the Unix and Windows worlds, for example:
- GUI vs CLI
- Registry vs config files
- Lots of tools specializing in any given need vs a group of generic orthogonal tools which can be combined

Is there an equivalent of the "Unix philosophy" in the Windows world? What can a Unix programmer learn from Windows, or what should they be aware of when moving to programming in Windows? I would like answers to focus on the best practices of Windows programming (and not a fight between Windows and Unix).
There is actually something like "Windows philosophy". Mostly it is about the composition concept and the user interface part - design programs for users not for other programmers. That means: Simple and intuitive user interfaces Natural workflow Should work out of the box No technical knowledge required there where it is not required Here is a good read: Biculturalism With the proliferation of Windows the hacker approach to coding started to become unfavored. First it was writing C/C++ programs in the most complex and obfuscated manner, so that only the hardest brains could understand them, as a sort of a rite of passage. Under Windows things started to change and that "code style" is now highly unfavored. Not sure if its direct Windows influence or rather the new level of understanding of the code quality, but at least timely they coincide.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/44888", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/14873/" ] }
44,915
As a side project I am developing some sort of DSL where I describe a data model, and generate desired code files from it. I believe this is called Model Driven Architecture . My partial existing implementation uses C#, CodeDOM, XML and XSLT to do this manually. I discovered there already exist better environments to do this in. The one which fascinated me the most is called MPS , which follows the Language Oriented Programming paradigm. This article , written by a cofounder of JetBrains was a real eye opener for me. I truly believe LOP has a very good chance of becoming the next big programming paradigm once it has broader support. From my short experience with MPS, I noticed it is still mainly Java-oriented. My question is, how feasible is it to generate code files for other (multiple) languages instead of just Java. I don't need full language support from the start, so preferably, I need to be able to implement a language in a agile way. E.g. first support only one type, add access modifiers, ... Perhaps some other (free) environment already provides this out of the box. P.S.: I find it important to have a lot of control over the naming conventions and such of the generated code. This is one of the reasons why I started my own implementation. UPDATE: Judging from the answers it seems like people think I'm only interested in .NET solutions. This is not the case, any other suggestions are highly welcomed!
There is actually something like "Windows philosophy". Mostly it is about the composition concept and the user interface part - design programs for users not for other programmers. That means: Simple and intuitive user interfaces Natural workflow Should work out of the box No technical knowledge required there where it is not required Here is a good read: Biculturalism With the proliferation of Windows the hacker approach to coding started to become unfavored. First it was writing C/C++ programs in the most complex and obfuscated manner, so that only the hardest brains could understand them, as a sort of a rite of passage. Under Windows things started to change and that "code style" is now highly unfavored. Not sure if its direct Windows influence or rather the new level of understanding of the code quality, but at least timely they coincide.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/44915", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/15464/" ] }
44,959
A local college is teaching C++ to first year college students (16 years old) with no prior programming experience. As first programming language, is C++ suitable?
Emphatically No . For any goal you have in mind for students, another language or sequence of languages would be faster and better. Examples. "Students need to understand low-level concepts." "Low-level" coding does not consist of getting objects from new , feeding them back to delete , and occasionally having a pointer pointing somewhere it shouldn't. Functions, classes, and templates are not low-level. RAII, 12 ways to use const , std::ostream::operator<< , protected and new are not low-level concepts. Those things have low-level implications and you're skipping those for months or until a future class and teaching mountains of C++ semantics instead. I suggest assembler with a good environment and teaching material like MIPS or MMIX. If you're in a hurry, teach C with detours to at least look at the assembler output. This will give you all the low-level understanding C++ does, and then some, faster. "Students need to understand object-orientation." The object-orientation built into C++ is way overcomplicated for teaching OO concepts, or almost any other high-level concept. See The C++ FAQ for a nice, long list of potential reasons why. You either have to hit all of that stuff, which will take a very, very long time with new programmers; or else you have to skip lots of it, leaving the new programmers in the dark--effectively, not really knowing C++! I suggest learning a simpler, high-level language with objects first (Python, Ruby, Squeak, Common Lisp, Racket), if you must teach C++ at all. Beyond that, learn polymorphism as a concept that is separate from OO by visiting a functional language. "Students need to understand templates and template metaprogramming." No one really asks this, but I wish they would. C++ has nice templates and STL is cool, but they just shouldn't be a high enough priority to teach C++ first. Teaching the OCaml or Haskell type system and then retrofitting those concepts might be faster anyway. "Students need to learn problem solving." Yeah, you get this in any language, and you get more if it in almost any language other than C++ because there's way less baggage. Again, see The C++ FAQ for a list of all the things students will be learning instead of problem-solving skills. "All of the above, and we need to use only one language." or "Employers want it." or "We need a C-style language." or... Teach more than one language. The idea that you save time or energy by teaching or learning just one language is flatly ridiculous. It's based on the idea that learning any given language takes exactly X man months ( HINT! HINT! ) where X is a single number or one number per language. This is nearly identical to the idea that you can save time and money by skipping all that 'requirements' and 'testing' garbage. As for multiple syntaxes, you dangerously cripple students if you teach them to expect the C syntax in every language by making them wildly biased against other languages. Almost any path is faster and better than starting with C++. Learning a simple high-level language and then C++ would be faster. Learning assembler and then C++ would be faster. Anything other than C++ will get students there faster and they will know way more to boot. Just don't teach C++ first.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/44959", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/13125/" ] }
45,033
I guess this is actually a legal question, but it relates to software. I'm about to include a JS plugin in a project. The comments include:

* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

1. Is using this in my web site "redistribution"?
2. If I minify this to conserve bandwidth, I assume it will strip all comments. If the answer to #1 is yes, doesn't that imply I'm legally not allowed to minify it? (That would stink, since I was planning to auto-minify all JS as part of the deploy process.)
Most minifying software has some method of leaving a comment in-situ for this exact purpose. For example, from the YUI Compressor documentation: C-style comments starting with /*! are preserved. This is useful with comments containing copyright/license information. For example:

    /*!
     * TERMS OF USE - EASING EQUATIONS
     * Open source under the BSD License.
     * Copyright 2001 Robert Penner All rights reserved.
     */

becomes:

    /*
     * TERMS OF USE - EASING EQUATIONS
     * Open source under the BSD License.
     * Copyright 2001 Robert Penner All rights reserved.
     */

Google Closure Compiler will preserve any JavaDoc block that has either the @license or the @preserve tag in it. With @license:

    /*
     * TERMS OF USE - EASING EQUATIONS
     * @license Open source under the BSD License.
     * Copyright 2001 Robert Penner All rights reserved.
     */

becomes:

    /*
     TERMS OF USE - EASING EQUATIONS
     Open source under the BSD License.
     Copyright 2001 Robert Penner All rights reserved.
    */

With @preserve:

    /* @preserve
     * TERMS OF USE - EASING EQUATIONS
     * Open source under the BSD License.
     * Copyright 2001 Robert Penner All rights reserved.
     */

becomes:

    /*
     TERMS OF USE - EASING EQUATIONS
     Open source under the BSD License.
     Copyright 2001 Robert Penner All rights reserved.
    */
{ "source": [ "https://softwareengineering.stackexchange.com/questions/45033", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/7217/" ] }
45,138
After reading this famous rant by Linus Torvalds , I wondered what actually are all the pitfalls for programmers in C++. I'm explicitly not referring to typos or bad program flow as treated in this question and its answers , but to more high-level errors which are not detected by the compiler and do not result in obvious bugs at first run, complete design errors, things which are improbable in C but are likely to be done in C++ by newcomers who don't understand the full implications of their code. I also welcome answers pointing out a huge performance decrease where it would not usually be expected. An example of what one of my professors once told me about an LR(1) parser generator I wrote: You have used somewhat too many instances of unneeded inheritance and virtuality. Inheritance makes a design much more complicated (and inefficient because of the RTTI (run-time type inference) subsystem), and it should therefore only be used where it makes sense, e.g. for the actions in the parse table. Because you make intensive use of templates, you practically don't need inheritance."
First of all, his rant is really nothing BUT a rant. There's very little actual content here. The only reason it's really famous or even mildly respected is that it was made by the Linux God. His main argument is that C++ is crap and he likes to piss C++ people off. There's of course no reason at all to respond to that, and anyone who considers it a reasonable argument is beyond conversation anyway. As to what might be gleaned as his most objective points:

- STL and Boost are utter crap <- Whatever.
- STL and Boost cause infinite amounts of pain <- He's purposefully over-exaggerating, but then what is his real statement here? I don't know. There are some more-than-trivially difficult issues to figure out when you cause compiler vomit in Spirit or something, but it's no more or less difficult than debugging UB caused by misuse of C constructs like void*.
- Abstract models encouraged by C++ are inefficient. <- Like what? He never expands, never provides any examples of what he means, he just says it. Since I can't tell what he's referring to, there's little point trying to "rebut" the statement. It is a common mantra, but that doesn't make it any more understandable or intelligible.
- Correct use of C++ means you limit yourself to the C aspects. <- Actually the WORST C++ code out there does this, so I still don't know what he's talking about.

There's no intelligible argument being made about anything. To expect a serious rebuttal of such nonsense is just plain silly. I'm getting told to "expand" on a rebuttal of something that I'd be expected to expand upon if it were I who said it. If you really, honestly look at what Torvalds said, you'd see that he didn't actually say anything. Just because God says it doesn't mean it makes any sense or should be taken any more seriously than if some random bozo said it. Truth be told, God is just another random bozo.

Responding to the actual question: Probably the worst, and most common, bad C++ practice is to treat it like C. Continued use of C API functions like printf, gets (also considered bad in C), strtok, etc... not only fails to leverage the power provided by the tighter type system, it inevitably leads to further complications when trying to interact with "real" C++ code. So basically, do exactly the opposite of what Torvalds is advising. Learn to leverage the STL and Boost to gain further compile-time detection of bugs and to make your life easier in other, general ways (the Boost tokenizer, for example, is both type-safe AND a better interface). It is true that you'll have to learn how to read template errors, which is daunting at first, but (in my experience anyway) it's frankly much easier than trying to debug something that generates undefined behavior at runtime, which the C API makes quite easy to do.

Not to say that C is not as good. I of course like C++ better; C programmers like C better. There are trade-offs and subjective likes at play. There's also a lot of misinformation and FUD floating around. I would say that there is more misinformation floating around about C++, but I'm biased in this regard. For example, the "bloat" and "performance" problems C++ supposedly has aren't actually major issues most of the time and are certainly blown out of proportion.

As to the issues your professor is referring to, these are not unique to C++. In OOP (and in generic programming) you want to prefer composition over inheritance. Inheritance is the strongest possible coupling relationship that exists in all OO languages.
C++ adds one more that is stronger: friendship. Polymorphic inheritance should be used to represent abstractions and "is-a" relationships; it should never be used for reuse. This is the second largest mistake you can make in C++, and it's a pretty big one, but it's far from unique to the language. You can create overly complex inheritance relationships in C# or Java too, and they'll have exactly the same problems.
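A small, hypothetical C# illustration of the composition-over-inheritance point the answer makes (the classes are invented for the example, not taken from the parser generator it discusses):

```csharp
// Inheritance used only for an "is-a" abstraction.
public interface IReportFormatter
{
    string Format(string body);
}

public class HtmlFormatter : IReportFormatter
{
    public string Format(string body)
    {
        return "<html><body>" + body + "</body></html>";
    }
}

// Composition used for reuse: ReportPrinter *has a* formatter;
// it does not inherit from one just to borrow its code.
public class ReportPrinter
{
    private readonly IReportFormatter _formatter;

    public ReportPrinter(IReportFormatter formatter)
    {
        _formatter = formatter;
    }

    public void Print(string body)
    {
        System.Console.WriteLine(_formatter.Format(body));
    }
}
```

Swapping in a different formatter is a constructor argument away, without the tight coupling that subclassing HtmlFormatter purely for reuse would have created.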
{ "source": [ "https://softwareengineering.stackexchange.com/questions/45138", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/10199/" ] }
45,150
It seems to me that the programming industry is in a race to the bottom. If we take the practices of:
- Not taking time to implement best practices
- Using other people's code as much as possible (custom code as a liability)
- Using increasingly higher-level languages to improve productivity
- GUI-based development "tools" that greatly simplify "programming" and do not require people to understand the plumbing behind the code

These things imply to me that we are in a race to become like any other office worker. It is in the employer's interest for things to not require skill (easier to replace) and for things to be prebuilt (less project time). My point here is: a) is there a misalignment between skill and the economic interests of the employer? and b) if there is, how do you mitigate it to enforce professional standards?
To the trends you mention I would add one more, which IMHO explains them: there are vastly more programmers (needed) than ever. The number of tasks which require or include programming is ever increasing, and at an even higher rate than the number of programmers. Nowadays there are several microchips in an average car. In 5 years there may be a chip in your fridge and your toaster. In 10 years, your underwear?... And someone needs to produce all that software to make these work. So every possible effort is made to automate whatever is automatable, and to improve "productivity" (however it is defined). And more and more fresh brains are recruited. This implies that the majority of today's active programmers are inexperienced and/or ill prepared for their job. It takes several years to get to an adequate level of experience, and it takes constant learning to keep yourself there. The bottom line is, more and more of the programming jobs are becoming less and less challenging. But there are still enough challenges for anyone who is looking for them.

Let me play the devil's advocate against your points above:

- Not taking time to implement best practices: A lot of people don't, a lot of people do. Ten-ish years ago, when I first discovered unit testing and the agile approach, none of my colleagues had the slightest idea what it was. Nowadays it is almost standard material at universities, so many fresh graduates already understand it.
- Using other people's code as much as possible (custom code as a liability): As opposed to what? Reinventing the wheel? Or using other people's code to avoid that? I think it is important to note that we are paid (mostly) to solve problems, and writing code is not the end, only the means to that. If a problem can be solved without writing a single line of code, it still makes the client happy. Especially if this way we manage to produce a more reliable solution faster and cheaper. I don't see any problem with that.
- Using increasingly higher level languages to improve productivity: As opposed to coding everything in assembly? ;-)
- GUI based development "tools" that greatly simplify "programming" and do not require people to understand the plumbing behind the code: IMHO any tool can be misused. Which is not to say that GUI builders were necessarily perfect or even good - most (or at least some) of them are usable within their limits. But if someone doesn't know those limits, is it a problem of the tool or its user?

In general, I believe (although I have no evidence to prove it) that back in the punch card and machine code days, roughly the same proportion of existing code was horrible as now; it's just that both the overall amount of code and the chances of outsiders ever seeing such code were much, much less. Now, with the Internet and the Daily WTF, we get exposed to the worst examples day by day. It's a bit like watching all the news about terrorism and earthquakes and divorcing celebs, and crying out about how dangerous and immoral this world has become.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/45150", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/5057/" ] }
45,181
I just phone interviewed with a company for a graduate software developer position and was asked the following questions. I should add that the company concerned is not a database vendor.

- How does a query optimiser work?
- If a database was performing badly, how would you use the performance logs to find out the problem?

I have asked whether they ask such questions of all candidate software developers (graduate or experienced) in a first phone interview. They replied that they like to test their candidates' knowledge of database development. I want to write to the company to say that these questions are unreasonable to ask at a software developer interview and to request that my interview be done over. I would like to check the reasonableness of the following assumptions:

a) Those questions cannot be fairly classified as database development questions.
b) I think the questions are appropriate for a DBA interview but wholly unreasonable for a software developer interview (experienced or not).
c) The first question is only relevant to a database vendor.
d) The second question is not fair because software developers typically don't deal with database performance logs, as that is the job of the DBA.

Perhaps some of you will be kind enough to comment on my assumptions or offer any other suggestions, before I write to the company.
If I were an interviewer (which I sometimes am) and received a letter from a candidate complaining that the questions were unfair and they wanted a do-over, I'd thank my lucky stars that we dodged that bullet and immediately move the application to the "reject" pile. Acting like this only shows you to be a complainer who lacks the "can do" attitude that one looks for.

a. The questions were reasonable to ask for the topic of database development.
b. False. Anything to do with software development is fair game to be asked. Keep in mind that getting a wrong answer does not automatically disqualify you for the position (or other positions in the company); it may just help to classify you as someone who wouldn't be the best fit for a database-oriented job.
c. False.
d. False. First of all, there may be no dedicated DBA; second, a software developer must be aware of a broad range of issues which could affect performance (and accuracy), and have at least a high-level understanding of database management.

Take this as a lesson that there are things that you don't yet know. Now you know what to study for next time.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/45181", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/-1/" ] }
45,195
The process of doing custom error handling in ASP.NET MVC (3 in this case) seems to be incredibly neglected. I've read through the various questions and answers here, on the web, and the help pages for various tools (like Elmah), but I feel like I've gone in a complete circle and still don't have the best solution. With your help, perhaps we can set a new standard approach for error handling. I'd like to keep things simple and not over-engineer this. Here are my goals:

For server errors/exceptions:
- Display debugging information in dev
- Display friendly error page in production
- Log errors and email them to administrator in production
- Return 500 HTTP Status Code

For 404 Not Found errors:
- Display friendly error page
- Log errors and email them to administrator in production
- Return 404 HTTP Status Code

Is there a way to meet these goals with ASP.NET MVC?
I'll share the way I ended up doing this, that was part of the original question. First, the problems I encountered: With customErrors on (i.e. in production) the global HandleError attribute swallows exceptions and renders your error view, but then you can't log it with an add-on tool like Elmah, since Elmah never sees it. You could log it in your view I suppose, but it's a view, that seems wrong. The global HandleError attribute appears to be new in the MVC 3 RTM Visual Studio project template. customErrors with urls for MVC endpoints returns 302 status codes. There is the redirectMode property, but you cannot match MVC urls in customErrors and use the ResponseRewrite mode. ( https://stackoverflow.com/questions/781861/customerrors-does-not-work-when-setting-redirectmode-responserewrite/3770265#3770265 ) Avoiding customErrors completely and handling everything custom in your app leads to a lot of complexity, IMO. (I loved this: https://stackoverflow.com/questions/619895/how-can-i-properly-handle-404s-in-asp-net-mvc/2577095#2577095 , but it wasn't right for our project) My solution: I've taken MVC out of the equation completely. I've removed the HandleErrorAttribute global filter in global.asax and focused entirely on the customErrors configuration, shifting it to use WebForm redirects and changing the redirectMode to ResponseRewrite in order to avoid the 302 HTTP response codes. <customErrors mode="On" defaultRedirect="/Error.aspx" redirectMode="ResponseRewrite"> <error statusCode="404" redirect="/NotFound.aspx" /> </customErrors> Then, in the NotFound.aspx Page_Load event, set the Response.StatusCode to 404, and in Error.aspx set the code to 500. Results: The goals for both have been achieved with the Elmah logs, the friendly error page, and the status code with one line of code in the code-behinds. We're not doing it the "MVC Way" as the earlier solution does, but I'm OK with that if it's two lines of code.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/45195", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/16331/" ] }
45,231
As a Java/C#/C++ programmer I hear a lot of talk about functional languages, but have never found a need to learn one. I've also heard that the higher level of thinking introduced in functional languages makes you a better OOP/procedural language programmer. Can anyone confirm this? In what ways does it improve your programming skills? What is a good choice of language to learn with the goal of improving skills in a less sophisticated language?
I basically agree with FrustratedWithFormsDesign's answer, but you also asked how learning the new paradigm helps develop one's skills. I can give a couple of examples from my own experience. Since learning functional programming, I'm much more conscious of which concepts I work with are more naturally considered as "objects" (generally where mutation makes sense) and which are more naturally considered as immutable "values" (I think there's an important distinction, touching on where OO makes sense vs. when FP makes sense, but that's just my opinion). I notice where my code includes side effects, and I'm more careful to isolate those places, making more of my functions "pure" functions. This greatly improves the testability of my OO code. I'm more conscious about cycles in my data representation. (For example, I don't think you can write a function to convert a linked-list into a doubly-linked list in Haskell, so you do notice cycles quite a bit more in that language.) Avoiding cycles reduces the amount of synchronization you need to perform for your data structures to be internally consistent, easing the burden in sharing these structures between threads. I'm more likely to rely on recursion (Scheme's recursive looping constructs are things of beauty). Dijkstra touched on the importance of this in Notes on Structured Programming - recursive algorithms map very directly to mathematical induction, which he suggests is the only means to intellectually prove our loops correct. (I don't suggest that we must prove our code correct, but that the easier we make it for ourselves to do so, the more likely it is that our code is correct.) I'm more likely to use higher-order functions. John Hughes' paper, Why Functional Programming Matters, emphasizes the composability you get from employing functional programming techniques, with higher-order functions playing a major role. Also, as touched on in Jetti's answer, you'll find that a lot of FP ideas are being incorporated into newer OO languages. Ruby and Python both provide many higher-order functions; I've heard LINQ described as an attempt to bring support for monadic comprehensions into C#, and even C++ now has lambda expressions.
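To make the point about isolating side effects concrete, here is a minimal Java sketch (the class, method and value names are all invented for the illustration): the calculation becomes a pure function that a unit test can call directly with plain values, while the one side effect - printing - is pushed out to a thin shell.

    import java.io.PrintStream;
    import java.util.Arrays;
    import java.util.List;

    public class PureFunctionSketch {

        // Pure function: no I/O, no shared mutable state, same input always gives the same output.
        // This is the part a unit test can exercise directly, with no mocks or fixtures.
        static double discountedTotal(List<Double> prices, double discountRate) {
            double total = 0.0;
            for (double price : prices) {
                total += price;
            }
            return total * (1.0 - discountRate);
        }

        // Thin impure shell: the only side effect (printing) lives here.
        static void printDiscountedTotal(List<Double> prices, double discountRate, PrintStream out) {
            out.println(discountedTotal(prices, discountRate));
        }

        public static void main(String[] args) {
            printDiscountedTotal(Arrays.asList(10.0, 20.0, 30.0), 0.10, System.out); // prints 54.0
        }
    }

Testing discountedTotal needs nothing but input values and an assertion, which is exactly the kind of testability benefit the functional habits described above tend to produce.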
{ "source": [ "https://softwareengineering.stackexchange.com/questions/45231", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/16147/" ] }
45,378
Practically every text on code quality I've read agrees that commented out code is a bad thing. The usual example is that someone changed a line of code and left the old line there as a comment, apparently to confuse people who read the code later on. Of course, that's a bad thing. But I often find myself leaving commented out code in another situation: I write a computational-geometry or image processing algorithm. To understand this kind of code, and to find potential bugs in it, it's often very helpful to display intermediate results (e.g. draw a set of points to the screen or save a bitmap file). Looking at these values in the debugger usually means looking at a wall of numbers (coordinates, raw pixel values). Not very helpful. Writing a debugger visualizer every time would be overkill. I don't want to leave the visualization code in the final product (it hurts performance, and usually just confuses the end user), but I don't want to lose it, either. In C++, I can use #ifdef to conditionally compile that code, but I don't see much difference between this: /* // Debug Visualization: draw set of found interest points for (int i=0; i<count; i++) DrawBox(pts[i].X, pts[i].Y, 5,5); */ and this: #ifdef DEBUG_VISUALIZATION_DRAW_INTEREST_POINTS for (int i=0; i<count; i++) DrawBox(pts[i].X, pts[i].Y, 5,5); #endif So, most of the time, I just leave the visualization code commented out, with a comment saying what is being visualized. When I read the code a year later, I'm usually happy I can just uncomment the visualization code and literally "see what's going on". Should I feel bad about that? Why? Is there a superior solution? Update: S. Lott asks in a comment: Are you somehow "over-generalizing" all commented code to include debugging as well as senseless, obsolete code? Why are you making that overly-generalized conclusion? I recently read Robert Martin's "Clean Code", which says: Few practices are as odious as commenting-out code. Don't do this! I've looked at the paragraph in the book again (p. 68); there's no qualification, no distinction made between different reasons for commenting out code. So I wondered if this rule is over-generalizing (or if I misunderstood the book) or if what I do is bad practice, for some reason I didn't know.
The benefit of #ifdefs, as opposed to commenting the code out, is that (on large projects) you can have the defines listed in a make or config file - and so you don't have to manually go and uncomment things, build, and then re-comment them if it's in many places. The downside to this is that changing the project's defines will usually mean rebuilding the whole thing, not just changed files. Though... I think the "commented out code is a bad thing" really refers to dead code that people just didn't want to delete for whatever reason (fear of throwing away something they've spent time on perhaps?). It's not really about the situation you have going for you.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/45378", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/14237/" ] }
45,425
I stumbled upon this quote from Bill Gates: "You should definitely send me a resume if you can read the whole thing." He was talking about The Art of Computer Programming books. So I was pretty curious and want to read it all. But honestly, I don't understand it. I'm really not that intellectual. That is probably the reason why I can't understand it, but I am eager to learn. I'm currently reading Volume 1 about fundamental algorithms. Are there any books out there that are friendly for novices/slow people like me, which would help to build up my knowledge so that I can read Knuth's books with ease in the future?
I too think Knuth's books are a bit advanced and difficult to understand. Those books are definitely for research-level algorithmists, IMHO. So are there any books out there that are friendly for novices/slow people like me? Introduction to Algorithms by CLRS is much simpler. EDIT: Still, if you want to read Knuth's books, you should first go through Concrete Mathematics. Knuth wants his students to be aware of the basic mathematical portion of algorithm analysis.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/45425", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/16381/" ] }
45,479
My team uses Team Foundation Server for source control, and today I fixed some bugs and smoke-tested the application before I checked it in, but I forgot to comment out some code. (This code made the UI a little bit strange.) I want to know what good practices there are before checking code in - I don't want to make this kind of mistake again.
One thing I have gotten in the habit of doing is always looking at the diffs of every file I'm about to check in, right before I check them in.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/45479", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/12791/" ] }
45,643
I was recently hired by a big company (thousands of people, to give an idea of the size). They said they hired me because of my rigor and because I was, despite my youth (I'm 25), experienced as a C/C++ programmer. Now that I'm in, I can see that the whole system is old and often uses obsolete technologies. There is no naming convention (files, functions, variables, ...), they don't use version control, don't use exceptions or polymorphism and it seems like almost everybody has lost his passion (some of them are only 30 years old). I'd like to suggest some changes but I don't want to be "the new guy that wants to change everything just because he doesn't want to fit in". I tried to "fit in", but actually, it takes me one week to do what I would do in one afternoon, just because of the poor tools we're forced to use. A lot of my colleagues never look at the new "things" and techniques that people use nowadays. It's like they've just given up. The situation is really frustrating. Have you ever been in a similar situation and, if so, what advice would you give me? Is there a subtle way of changing things without becoming the black sheep here? Or should I just give up my passion and energy as well? Thank you. Updates: Following your precious advice I was able to suggest changes and am now in charge of the team that must create and deploy Subversion :D Thanks to all of you! 6 months later: I quit and found a much more interesting environment, with much better pay, and more interesting challenges. I wouldn't go back for anything.
I was in a similar situation at my previous company, where I worked for 5 years. When I joined in 2004, they were: still using Microsoft Access for their databases (even business-critical ones); using Visual Basic 6 or Access/Excel VBA for development; using a lot of third parties instead of development resource in-house (business managers led their own development projects and 90% of the time put contracts out to tender without IT's knowledge); and - gasp - there was no version control. When I left last year, the company were: using .NET and C# exclusively; had banished all Access development; were using SVN for version control; had 2 beefy SQL Server boxes and were migrating existing Access databases to SQL; and all development came through the in-house teams and only went out to tender if resource was limited. At the time I had not long turned 21, and the next youngest in the development team was 30. I didn't do it all myself. The IT manager had joined the company at the same time, and wanted to bring all development through IT. SVN was my first achievement. I had a meeting with my line manager, highlighted a couple of situations where code had been put live or changed in ways that had caused problems, and pointed out the fact that there was no accountability - he couldn't blame anyone, basically - and after this he started to listen. I then put a presentation together for the team and explained the concept of version control, and demo'd a couple of situations where SVN could help us developers out. The younger ones took to it like ducks to water, the older ones not so much, but they tried and didn't complain about those that did use it. Another major achievement was bringing a complete system in-house - I spear-headed a project that saved the company £120k a year in licensing. I spent about 2 months of my spare time writing a new system, presented it to the IT manager, and explained the cost saving. He then allowed me to present it to the business, and I explained how we could implement whatever they liked into the system - no more being restricted to "off-the-shelf" systems. 4 weeks later my system was in pilot in 10 locations, and 6 months later it went live. A year later they cancelled the third-party contract, removed all traces of it from the network, and came to us for a large enhancement requirement to our in-house system. My advice to you: if you care about the company, stick it out. If others dislike your approach, let them take it up with you - it's all about compromise. Tailor suggestions to the person you're talking to - managers like to hear about how they can a) save money, b) accurately blame people when things go wrong, while developers like to hear how they can a) save time, b) stick up for themselves. If you're passionate about change (which it sounds like you are) then show people your enthusiasm and don't get disheartened when they're less than enthusiastic. Don't talk about making changes. Make them. When you start churning out fantastic work in less time than the more experienced guys, people will start asking "why?"
{ "source": [ "https://softwareengineering.stackexchange.com/questions/45643", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/13384/" ] }
45,776
We have to admit that programming is much more difficult than creating documentation or even creating Gantt charts and asking programmers for progress. So for those of us who are naive, knowing that programming is generally more difficult, why do business analysts and project managers get higher salaries than programmers? What is it that makes their job a high-paying one when, most of the time, programmers are the ones who go home late? UPDATE: Excuse my ignorance; from some of the responses it seems that the reason BAs and PMs get higher salaries is that they are the ones usually held responsible for the mess programmers make. But at the end of the day, it is programmers who get their hands dirty to fix the mess and work harder. So it still does not make sense.
Whether project managers get higher salaries than programmers, and whether business analysts exist as a class at all, depends squarely on the software world you live in. A simple answer to this question would be "because in our societies, we still think the salary is bound to the position in the hierarchy." But this answer, whilst reflecting the fact that people are paid based on their perceived value, doesn't explain why PM and BA are on top of the hierarchy in many software organisations and why management goes for hierarchy in the first place as the structure of choice for a software project team. These are the two questions that seem to be really worth asking. Broadly speaking, there are two categories of software-making organisations. I will call them Widget Factories and Film Crews. Widget Factories are born out of a management school of thought revolving around the motivation Theory X proposed by McGregor: rank-and-file employees are lazy and require constant control and supervision, jobs are held in the name of a pay check, and managers are always able to do their subordinates' jobs to a higher or, at least, the same standard. This thinking leads to the natural idea that the entire team can easily be replaced with and represented by the manager alone - after all, everyone else on the team is either easily replaceable or there just to enhance the manager's ability to complete tasks. Hence the hierarchy as a structure and rather horizontal job roles. Widget Factory management operates on the assumption that software can be manufactured out of a specification prepared by a business analyst through a clearly defined process run under the close supervision of a project manager. The manufacturing is taken care of by staffing the project with enough qualified yet interchangeable programming and testing resources. Work is driven by a prearranged budget based on the initial business case prepared by PM and BA. Management that runs a Widget Factory is easy to spot just by paying attention to the way these people talk. They are likely to be on about resources (including when referring to team members), processes, operating efficiency, uniformity, repeatability, strict control over use of resources, clear-cut job roles and defined process inputs and outputs. They'd casually mention the actual factory metaphor when trying to convey the image of the ideal software development operation as they see it. Then there are Film Crews. They are based on the notion that people are intelligent, self-motivated, work really hard and enjoy their jobs as much as kids enjoy playing. Film Crews recognise that, due to specialisation, individual contributors' abilities may by far surpass the abilities of the people organising, co-ordinating and directing the work. Since the manager can no longer substitute for everyone, the hierarchical structure just doesn't work that well - people have to co-operate within a much flatter and more complex formation to get things done. Job roles themselves tend to be much more vertical - start to finish - and involve a broader variety of skills. This management thinking is underpinned by McGregor's Theory Y. A director of a Film Crew knows that her vision for a piece of software can only come true if she is able to assemble a great crew, capture their imaginations and help the team to gel and work together. Her role is to inspire, guard the vision, provide direction and focus the efforts.
Every single person matters because the "director" believes that software results from a combination of the worldviews and abilities of all participants and the unique way the group carries out the work together. Everyone recognises from the outset the importance of getting stars to join the crew – star performers increase every chance of success. Vision drives budget and attracts funding. When it comes to compensation, Widget Factories deem that the most value is derived from the work done by the project manager and business analyst, who reside at the top of the hierarchy and have to be compensated accordingly; the rest of the team doesn't matter that much as long as they've got the right qualifications to convert requirements into working code. PM and BA work hard to maintain their position on top of the pack by restricting free access to the sources of project information to the rest of the team. Without formal access to the primary info sources, the team struggles to make any value judgements or come up with good solutions; programmers are relegated to taking orders from above and working on the problem as defined by the PM and BA. This situation further reinforces the Widget Factory notion that programmers are akin to factory shop-floor workers, only capable of mechanically carrying out tasks that, though technically complicated, are nonetheless standard. In stark contrast, a Film Crew acts as a more egalitarian formation; members are given unrestricted access to primary information, encouraged to form value judgements and are free to select a course of action to fulfil and contribute to the vision. The leadership structure is based on ability rather than a specific role within the team. Compensation reflects how desirable it is to get a specific person to take part in the project; it is often tied to the perception of how much more valuable the end result will become if that person can be convinced to devote their energy to creating that piece of software. In this environment the role of a project manager becomes less prominent as he is unlikely to be the creative leader; the role comes down mostly to administrative support and external relations. The business analyst's duties are partly replaced by the role of the visionary (I called her earlier "a director") and partly absorbed by other team members. Now, it won't come as a surprise that most in-house software development teams and some consultancies are run as Widget Factories relying on a process to produce consistently boring software; it is in these environments that project managers and business analysts are routinely paid more than programmers, based on the assumption that they bring the most value, with the environment structured accordingly, making it difficult for programmers to prove the management wrong. Successful software companies tend to adopt the Film Crew viewpoint; any other philosophy would hinder their ability to attract the great people they rely on so much to produce great software. It's unlikely you'd ever see a business analyst role in that setting, and project managers are less prominent and routinely get paid less than great programmers.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/45776", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/9298/" ] }
45,856
I'm a student and in my spare time I'm working for a big enterprise as a Java developer. The job is good, but the problem is, my boss writes very strange code. I don't want to complain, but some issues are in my opinion really strange. For example: he doesn't seem to know about booleans - all boolean conditions are Strings called "YesOrNo", and then in the condition he uses if (YesOrNo == "Yes"); there are a lot of very strange characters in method names and variables, like é õ ô or è; and all loops are infinite loops in the style of for(;;), where at the end of the loop the condition is tested and, if the condition is fulfilled, break; is called. I don't know if I should tell him that I think this isn't a good practice, since he is my boss and decides how and what to do. On the other hand some of his examples are really very weird. Any hints on how to cope with this? And am I the only one who thinks that's bad style?
Ask him to explain his code to you. Tell him you've never seen X programmed that way before, and ask him why he codes it that way. Show him the way you code it, and tell him why you do it that way (best practices, better performance, less chance of errors, easier for other programmers to read/maintain, etc). Be sure to prepare all your arguments in advance, and focus on why your method is best instead of why his method is worse. Afterwards, see if he still supports his method over yours. If he is open to improvement, he will likely change his way of coding. If he still prefers to use his style of coding over yours, you are not likely to change his opinion.
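If it gets as far as showing him your way, a small side-by-side sketch in Java can make the "less chance of errors" argument concrete (the helper names below are invented for the illustration): == on Strings compares references rather than contents, so the String-as-boolean pattern is one refactoring away from a subtle bug, while a boolean loop condition says exactly what it means.

    public class LoopStyleComparison {

        private static int remaining = 3;

        // Invented stand-ins for whatever the real loop body does.
        private static void processNextRecord() { remaining--; }
        private static boolean allRecordsProcessed() { return remaining <= 0; }
        private static String allRecordsProcessedYesOrNo() { return allRecordsProcessed() ? "Yes" : "No"; }

        public static void main(String[] args) {
            // The style described in the question: String-as-boolean, == comparison, for(;;) with break.
            String done = "No";
            for (;;) {
                processNextRecord();
                done = allRecordsProcessedYesOrNo();
                if (done == "Yes") { // compares references; this only works because both sides are the same interned literal
                    break;
                }
            }

            remaining = 3;

            // Equivalent logic with a real boolean and an explicit loop condition.
            boolean finished = false;
            while (!finished) {
                processNextRecord();
                finished = allRecordsProcessed();
            }

            System.out.println("Both loops terminate, but the second one says what it means.");
        }
    }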
{ "source": [ "https://softwareengineering.stackexchange.com/questions/45856", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/13137/" ] }
45,957
I have recently been doing a bunch of web side projects through word of mouth recommendations only. Although I am much more of a programmer than a designer, my design skills are not terrible, and I do not hate dealing with UI like many programmers. As a result, I find myself lured into a bunch of side projects where, aside from a minimal back end for content administration, most of the programming is on front end interfaces (read javascript/css). By far the biggest frustration I have had is convincing clients that they do not want Flash. Aside from the fact that I really do not enjoy Flash "development", there are many practical reasons why Flash is not desirable (lack of compatibility across devices, decreased client accessibility, plug-in requirements, increased development time, etc.). Instead of just flat out telling the clients "I will not build you a Flash website", I would much rather use tactics to convince/explain to them that this is not what they actually want, i.e. that Flash will not meet their requirements any better than standard html/css/js and will distract users from their content. What kind of first hand experience do others have with this? How do you explain to someone that javascript/css/AJAX is usually a better option for most websites? Why do people want to use Flash so badly to begin with? This question pertains to clients who do not have any technical reasons for wanting Flash, but just want it because they think it makes pretty websites.
Tell 'em Flash websites are "empty" to the search engines. If the businessman wants customers to google and discover his business through the web, he has to forget Flash. Technobabble aside, the businessman will understand the cost of losing customers. Tell 'em Flash websites are known to slow down old computers, and users nowadays are getting increasingly annoyed by Flash websites, closing the page if it doesn't load in a blink of an eye. Tell 'em Flash has become sort of an anti-business-card these days, marking an antiquated company out of touch with the present. Tell 'em users will wrinkle their noses and the competitors will laugh. A true story: a while ago I relocated to another town, and soon after I felt the need to visit a hair stylist. Being who I am, I came up with no better idea than to google for a hairdresser shop. I landed on a rating page that listed about 5-6 top places. I went for their websites and saw... what do you think? Freaking Flash! One site never loaded 100%, even though I tried several times. The others had navigation so complex that I was never able to comprehend it and get to the information I needed. In the end I landed on the last page, which was just basic HTML and CSS. I got the necessary information in a few seconds, made an appointment and have been their client ever since. I guess the other shops will have to have a word with their web designer, since normal users just don't get through.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/45957", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/11099/" ] }
46,084
I am at secondary school right now and I'm the only one in my class who is experienced with programming. Because of that, people are constantly distracting me while I'm writing code to ask me to solve a problem. Usually I reply with something like 'I don't know, I never use that' but I don't want to lie to people. Another problem is that I became so well known for this that even students from other classes are asking me questions. I find this damn annoying. Thirdly, if I solve a problem for them they don't learn anything from it. How can I stop people from asking me programming-related questions in a kind way?
Wear headphones. Common trick used by undergraduate TAs who needed to use the same computer labs as their students at my school. They don't even need to be plugged into anything. This won't discourage everyone, but should cut down on the numbers quite a bit. Post a sign on your textbooks / notebook, and put it in your email signature that you don't have time to answer questions due to your own intense studies. Start a tutoring business, and explain that you charge X dollars an hour and schedule meetings ahead of time. This won't end the problem entirely, but it will help people value your time and will give you some spending money.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/46084", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/-1/" ] }
46,252
If you just consider the second part of my question, "Why a developer should not be interrupted while neck-deep in coding", that has been discussed a number of times by smart people. Heck, even the co-founder of SO, Joel Spolsky, wrote a blog post about "getting in the zone" and "being knocked out of the zone" and why it takes an average of 15 minutes to achieve productivity when participating in complex, software development related tasks. So I think the why has been established. What I'm interested in is how to explain all that to somebody who doesn't know beans about Beans (khmm I mean software development). How to tell the wife, or the funny guy from accounting at the workplace, or the long time friend who pings you on Skype every 30 minutes with a "Wazzzzzzup?!", that all the interruptions have a much deeper impact on your work than the obvious 30 seconds they took from your time. Obviously you can't explain it by sentences like "I have to juggle a lot of variable names in my short term memory" unless you want to be the target of blank stares or friendly abuse. I'd like to be able to explain all that to non-developers in a way that will make them clearly understand - without being offensive, elitist or too technical. EDIT: Thanks to everyone for their great insights. I've accepted EpsilonVector's answer as his analogy was the closest one to my original needs. The "falling asleep" explanation is neither offensive nor technical, almost anyone can relate to it, and the consequences of getting disturbed while falling asleep or while being in the zone are very similar: you experience frustration and you "lose" 15-20 minutes of time.
Try the following analogies: First one: "How long does it take for you to fall asleep?" "X minutes" "Now imagine that when you are close to falling asleep, someone walks in and interrupts you, how long will it take you to fall asleep now? Those few seconds you had left, or will you have to start again to 'sink back' to where you were?" "I'll have to start again" "Great. Same thing. Just like falling asleep, it takes me a while to 'sink' into focus mode, and it takes me a while to get back to it once I'm interrupted, except that I also forget half of what I was doing." Second one: "You know how when you're reading a book you 'sink into it'- after a while you don't even notice the words anymore, and you block out everything around yourself, and are totally immersed in the mental images you see." "Yes." "How long does it take for you to get there?" "About X minutes" "Now imagine that when you are that immersed in the book someone walks in and interrupts you, how long will it take you to get back to that? Will it happen immediately, or will you have to start again to 'sink back' to where you were?" "I'll have to start again" "Great. Same thing. Just like with reading, it takes me a while to 'sink' into focus mode, it's just as annoying when someone breaks me out of it, and it takes me just as long to get back to it once I'm interrupted, except that I also forget half of what I read."
{ "source": [ "https://softwareengineering.stackexchange.com/questions/46252", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/16709/" ] }
46,366
I'm currently a senior research software engineer at a large company and am being offered a "senior staff engineer" position somewhere else. I am not sure if the new position's title conveys a sideways move or an advancement. So, all other things being roughly equal (salary, domain of expertise, etc.), what is the external difference between these software engineer titles (in general and regardless of any particular company, if possible): senior engineer senior research engineer senior staff engineer member of technical staff principal engineer Edit: Let me elaborate on "member of technical staff" since it's kind of uncommon. I think it's a high title, commonly associated with research. I know that Oracle, VMWare, and the old Bell Labs have these titles. See: Member of Technical Staff . I know what it means, but I don't know how it stacks up against the other titles, which is why I asked.
"So all things being equal" They're not. These titles are not equivalent. I would rank them like this, highest to lowest: Principal Engineer Senior Staff Engineer Staff Engineer Senior Engineer / Senior Research Engineer In general, "senior" implies depth of experience and maturity to work independently with less direct guidance in day to day activities. An engineer can expect to receive assignments or tasks and external prioritization. A Senior Engineer should expect to identify and prioritize such tasks for themselves. A Senior Engineer is typically someone with deep knowledge of a technology or product line and experience with multiple release cycles. A Senior Research Engineer sounds like someone who is not as involved in production cycles but is more focused on algorithms or long term strategic work. "Member of the Technical Staff" does not imply any seniority or programming experience. A receptionist can be a Member of the Technical Staff. A Staff Engineer typically has deep experience with and contributes to multiple technologies and product lines across a company. A Senior Staff Engineer does all the staff engineer stuff, plus works more in a leadership role across multiple product lines or technologies. Senior staff should also be thinking ahead for strategic planning and execution. A Principal Engineer is often the top of the technical ladder in many companies, or just short of "Technical Fellow" or "Chief Scientist". Principals are also called architects in various fashions. Principal Engineers are responsible for macro scale architecture of a software technology or product line, and providing guidance and oversight to multiple development teams working on different products or technologies to ensure that the technologies interoperate or connect to each other appropriately. These are my opinions not as an HR manager but as an engineer who as worked in (and helped define) all of these roles.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/46366", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/8402/" ] }
46,434
Why does it seem so easy to pirate today? It just seems a little hard to believe that with all of our technological advances and the billions of dollars spent on engineering the most unbelievable and mind-blowing software, we still have no other means of protecting against piracy than a "serial number/activation key". I'm sure a ton of money, maybe even billions, went into creating Windows 7 or Office and even Snow Leopard, yet I can get it for free in less than 20 minutes. Same for all of Adobe's products, which are probably the easiest. Can there exist a fool-proof and hack-proof method of protecting your software against piracy? If not realistically, how about theoretically possible? Or no matter what mechanisms these companies deploy, can hackers always find a way around it?
Code is data. When the code is runnable, a copy of that data is unprotected code. Unprotected code can be copied. Peppering the code with anti-piracy checks makes it slightly harder, but hackers will just use a debugger and remove them. Inserting no-ops instead of calls to "check_license" is pretty easy. Hard-to-hack programs do progressively more annoying things. But vendors have to sell customers software they are prepared to use. Not everyone allows computers to phone home. Some people working on sensitive stuff refuse to connect machines to the internet. Programs I sell at my current employer (aerospace tools) don't phone home, ever. The customers wouldn't tolerate phoning home for "activation" every time the program starts. Worst case, the program runs in a VM with no networking, where it's always a fixed date. So it might have been legitimately installed once, but no efforts on the part of the developers can have it tell that it's not how it was. Attempts to add hardware "copy prevention" to general purpose computers are doomed to failure. Whatever company sells hardware without copy prevention ends up selling all the hardware. Vendors like Dell and Intel progressively try to introduce spy-hardware like Palladium, but they are strongly resisted. When the computer is doing something scientific or real-time, any interruptions to "check for pirated content" will cause failures. If all computers had hardware DRM, the special scientific/realtime ones would have to not have it. Everyone would then "accidentally" buy the special scientific/realtime ones. Hardware DRM checks will have false positives on some kinds of content. Simplest case: resolution. I record Quad HD video from my camera array (sitting on my desk right now). Windows DRM gets between me and the data because it's Quad HD. Signature analysis: the hardware DRM is small and has a relatively fixed data set. It also has to use the same data bus as the CPU, so it slows things down intermittently. This ruins anything realtime. So, to make the hardware DRM smarter, during a false positive your computer will eventually get interrupted to go and check using a web service. Now my science data processor either fails because it isn't networked, or stops streaming data.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/46434", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/7631/" ] }
46,592
Reportedly, Alan Kay is the inventor of the term "object oriented". And he is often quoted as having said that what we call OO today is not what he meant. For example, I just found this on Google: I made up the term 'object-oriented', and I can tell you I didn't have C++ in mind -- Alan Kay, OOPSLA '97 I vaguely remember hearing something pretty insightful about what he did mean. Something along the lines of "message passing". Do you know what he meant? Can you fill in more details of what he meant and how it differs from today's common OO? Please share some references if you have any.
TL;DR - Alan Kay wrote in 2003 that: OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things. It can be done in Smalltalk and in LISP. There are possibly other systems in which this is possible, but I'm not aware of them. http://www.purl.org/stefan_ram/pub/doc_kay_oop_en ------- Full source, for context: Date: Wed, 23 Jul 2003 09:33:31 -0800 To: Stefan Ram [removed for privacy] From: Alan Kay [removed for privacy] Subject: Re: Clarification of "object-oriented" Hi Stefan -- Sorry for the delay but I was on vacation. At 6:27 PM +0200 7/17/03, Stefan Ram wrote: Dear Dr. Kay, I would like to have some authoritative word on the term "object-oriented programming" for my tutorial page on the subject. The only two sources I consider to be "authoritative" are the International Standards Organization, which defines "object-oriented" in "ISO/IEC 2382-15", and you, because, as they say, you have coined that term. I'm pretty sure I did. Unfortunately, it is difficult to find a web page or source with your definition or description of that term. There are several reports about what you might have said in this regard (like "inheritance, polymorphism and encapsulation"), but these are not first-hand sources. I am also aware that later you put more emphasis on "messaging" - but I still would like to know about "object oriented". For the records, my tutorial page, and further distribution and publication could you please explain: When and where was the term "object-oriented" used first? At Utah sometime after Nov 66 when, influenced by Sketchpad, Simula, the design for the ARPAnet, the Burroughs B5000, and my background in Biology and Mathematics, I thought of an architecture for programming. It was probably in 1967 when someone asked me what I was doing, and I said: "It's object-oriented programming". The original conception of it had the following parts. I thought of objects being like biological cells and/or individual computers on a network, only able to communicate with messages (so messaging came at the very beginning -- it took a while to see how to do messaging in a programming language efficiently enough to be useful). I wanted to get rid of data. The B5000 almost did this via its almost unbelievable HW architecture. I realized that the cell/whole-computer metaphor would get rid of data, and that "<-" would be just another message token (it took me quite a while to think this out because I really thought of all these symbols as names for functions and procedures. My math background made me realize that each object could have several algebras associated with it, and there could be families of these, and that these would be very very useful. The term "polymorphism" was imposed much later (I think by Peter Wegner) and it isn't quite valid, since it really comes from the nomenclature of functions, and I wanted quite a bit more than functions. I made up a term "genericity" for dealing with generic behaviors in a quasi-algebraic form. I didn't like the way Simula I or Simula 67 did inheritance (though I thought Nygaard and Dahl were just tremendous thinkers and designers). So I decided to leave out inheritance as a built-in feature until I understood it better. My original experiments with this architecture were done using a model I adapted from van Wijngaarten's and Wirth's "Generalization of Algol" and Wirth's Euler. Both of these were rather LISP-like but with a more conventional readable syntax. 
I didn't understand the monster LISP idea of tangible metalanguage then, but got kind of close with ideas about extensible languages draw from various sources, including Irons' IMP. The second phase of this was to finally understand LISP and then using this understanding to make much nicer and smaller and more powerful and more late bound understructures. Dave Fisher's thesis was done in "McCarthy" style and his ideas about extensible control structures were very helpful. Another big influence at this time was Carl Hewitt's PLANNER (which has never gotten the recognition it deserves, given how well and how earlier it was able to anticipate Prolog). The original Smalltalk at Xerox PARC came out of the above. The subsequent Smalltalk's are complained about in the end of the History chapter: they backslid towards Simula and did not replace the extension mechanisms with safer ones that were anywhere near as useful. What does "object-oriented [programming]" mean to you? (No tutorial-like introduction is needed, just a short explanation [like "programming with inheritance, polymorphism and encapsulation"] in terms of other concepts for a reader familiar with them, if possible. Also, it is not neccessary to explain "object", because I already have sources with your explanation of "object" from "Early History of Smalltalk".) (I'm not against types, but I don't know of any type systems that aren't a complete pain, so I still like dynamic typing.) OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things. It can be done in Smalltalk and in LISP. There are possibly other systems in which this is possible, but I'm not aware of them. [Also,] One of the things I should have mentioned is that there were two main paths that were catalysed by Simula. The early one (just by accident) was the bio/net non-data-procedure route that I took. The other one, which came a little later as an object of study was abstract data types, and this got much more play. If we look at the whole history, we see that the proto-OOP stuff started with ADT, had a little fork towards what I called "objects" -- that led to Smalltalk, etc.,-- but after the little fork, the CS establishment pretty much did ADT and wanted to stick with the data-procedure paradigm. Historically, it's worth looking at the USAF Burroughs 220 file system (that I described in the Smalltalk history), the early work of Doug Ross at MIT (AED and earlier) in which he advocated embedding procedure pointers in data structures, Sketchpad (which had full polymorphism -- where e.g. the same offset in its data structure meant "display" and there would be a pointer to the appropriate routine for the type of object that structure represented, etc., and the Burroughs B5000, whose program reference tables were true "big objects" and contained pointers to both "data" and "procedures" but could often do the right thing if it was trying to go after data and found a procedure pointer. And the very first problems I solved with my early Utah stuff was the "disappearing of data" using only methods and objects. At the end of the 60s (I think) Bob Balzer wrote a pretty nifty paper called "Dataless Programming", and shortly thereafter John Reynolds wrote an equally nifty paper "Gedanken" (in 1970 I think) in which he showed that using the lamda expressions the right way would allow data to be abstracted by procedures. 
The people who liked objects as non-data were smaller in number, and included myself, Carl Hewitt, Dave Reed and a few others -- pretty much all of this group were from the ARPA community and were involved in one way or another with the design of ARPAnet → Internet in which the basic unit of computation was a whole computer. But just to show how stubbornly an idea can hang on, all through the seventies and eighties, there were many people who tried to get by with "Remote Procedure Call" instead of thinking about objects and messages. Sic transit gloria mundi. Cheers, Alan Kay
{ "source": [ "https://softwareengineering.stackexchange.com/questions/46592", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/12260/" ] }
46,637
I have to extend an existing module of a project. I don't like the way it has been done (lots of anti-pattern involved, like copy/pasted code). I don't want to perform a complete refactor for many reasons. Should I: create new methods using existing convention, even if I feel it wrong, to avoid confusion for the next maintainer and being consistent with the code base? or try to use what I feel better even if it is introducing another pattern in the code ? Precison edited after first answers: The existing code is not a mess. It is easy to follow and understand. BUT it is introducing lots of boilerplate code that can be avoided with good design (resulting code might become harder to follow then). In my current case it's a good old JDBC (spring template inboard) DAO module, but I have already encounter this dilemma and I'm seeking for other dev feedback. I don't want to refactor because I don't have time. And even with time it will be hard to justify that a whole perfectly working module needs refactoring. Refactoring cost will be heavier than its benefits. Remember: code is not messy or over-complex. I can not extract few methods there and introduce an abstract class here. It is more a flaw in the design (result of extreme 'Keep It Stupid Simple' I think) So the question can also be asked like that: You, as developer, do you prefer to maintain easy stupid boring code OR to have some helpers that will do the stupid boring code at your place ? Downside of the last possibility being that you'll have to learn some stuff and maybe you will have to maintain the easy stupid boring code too until a full refactoring is done)
Refactoring is best done in small steps, and preferably only if you have unit tests to cover the code. (So if you don't have tests yet, strive to write them first, and until then, stick to the simplest, most foolproof, preferably automated refactorings. A great help in this is Working Effectively with Legacy Code by Michael Feathers.) In general, aim to improve the code a little whenever you touch it. Follow the Boy Scout Rule (coined by Robert C. Martin) by leaving the code cleaner than you found it. When you add new code, try to keep it separated from the existing bad code. E.g. don't bury it in the middle of a long method; instead add a call to a separate method and put your new code in there. This way, you gradually grow bigger islands of clean(er) code within the existing codebase. Update: "The refactoring cost will be heavier than its benefits. [...] You, as a developer, do you prefer to maintain easy, stupid, boring code OR to have some helpers that will do the stupid boring code in your place?" I emphasized what I believe is the key point here. It is always worth assessing the costs and benefits of refactoring before we jump into it. As in your case, most of us have limited resources for refactoring, so we must use them wisely. Spend that precious little time on refactoring where it brings the most benefits with the least effort. As a creative mind, of course I would prefer producing perfect, beautiful and elegant code, and rewriting everything which does not resemble my ideals :-) In reality though, I am paid to produce software which solves real problems for its users, so I should think about producing the most value for their money over the long term. The benefit of refactoring only appears if there are sufficient savings in time and effort to understand, maintain, fix and extend the code in the long term. So if a piece of code - however ugly it is - is rarely or never touched, there are no known bugs in it and I don't know of any upcoming features in the foreseeable future which would require me to touch it, I prefer leaving it in peace.
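As a small Java illustration of keeping the new code separated from the existing bad code (the class and method names are invented for the example): the legacy method gains only a single call, and the new behaviour lives in its own small, clean, testable method - one of those growing islands of cleaner code.

    public class InvoiceProcessor {

        // Stand-in for the existing long, messy method; it is left alone apart from one added call.
        void processInvoice(Invoice invoice) {
            // ... many lines of legacy code ...
            applyLateFee(invoice); // the only change made to the legacy method
            // ... more legacy code ...
        }

        // The new island of clean code: small, named after what it does, easy to unit test on its own.
        void applyLateFee(Invoice invoice) {
            if (invoice.isOverdue()) {
                invoice.addFee(25.00);
            }
        }
    }

    // Minimal supporting type so the sketch stands alone.
    class Invoice {
        private boolean overdue;
        private double fees;

        boolean isOverdue() { return overdue; }
        void addFee(double amount) { fees += amount; }
    }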
{ "source": [ "https://softwareengineering.stackexchange.com/questions/46637", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/4989/" ] }
46,707
I'm quite new to programming design patterns and life cycles and I was wondering: what should come first, code review or testing, given that those are done by separate people? On the one hand, why bother reviewing code if nobody checked whether it even works? On the other hand, some errors can be found earlier if you do the review before testing. Which approach is recommended and why?
Developer unit testing first, then code review, then QA testing is how I do it. Sometimes the code review happens before the unit testing but usually only when the code reviewer is really swamped and that's the only time he or she can do it.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/46707", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/13218/" ] }
46,821
I have tried to search for this answer for quite some time and I have gone through all the various FAQs and documentation regarding the three licenses; but none of them have been able to answer a question that I have. So I've been working on an idea for a website for some time now and recently I found open source software that has many components that are similar. It is licensed under the mpl/gpl/lgpl licenses. I think for the most part I understand the ramifications, due to the searches and reading, of what is required if I modify/use and want to distribute the software. But what if I want to modify and not distribute, but use it on a public website that I generate ad revenue from? Is this illegal? It doesn't seem like it is, judging from other open source systems, like Drupal, where they allow you to use the software but it's not considered "distribution" if people just go to the website. I know this site may not be the best resource and I've tried some other sites, but I haven't received any clear replies back. If you know some other resource that I could contact, please let me know. Links for those who don't know: MPL - Wikipedia, Legalese GPL - Wikipedia, Legalese LGPL - Wikipedia, Legalese
If you only run the software on your own server and let visitors use the resulting website, you are not "distributing" it in the sense that the GPL, LGPL and MPL care about, so the copyleft obligations (releasing your changes, licensing your own code under the GPL, and so on) are not triggered - and generating ad revenue from the site does not change that. This is the so-called "ASP loophole", and it is the reason the AGPL exists: if the software were AGPL-licensed, network use would count. It is also why projects like Drupal can say that simply serving pages from the software is not distribution. The moment you start shipping the software (or your modified version of it) to other people - for example, selling or giving away the combined package - the normal distribution terms do kick in. For anything with real money at stake, have a lawyer look at the actual license texts.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/46821", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/16900/" ] }
46,894
Is there nowadays any case for brevity over clarity with method names? Tonight I came across the Python method repr(), which seems like a bad name for a method to me. It's not an English word. It apparently is an abbreviation of 'representation' and, even if you can deduce that, it still doesn't tell you what the method does. A good method name is subjective to a certain degree, but I had assumed that modern best practices agreed that names should be at least full words and descriptive enough to reveal what the method does, so that you would easily find one when looking for it. Method names made from words help your code read like English. repr() seems to have no advantages as a name other than being short, and IDE auto-complete makes this a non-issue. An additional reason given in an answer is that Python names are brief so that you can do many things on one line. Surely the better way is to just extract the many things to their own function, and repeat until lines are not too long. Are these just a hangover from the Unix way of doing things? Commands with names like ls, rm, ps and du (if you can call those names) were hard to find and hard to remember. I know that the everyday usage of commands such as these is different from methods in code, so whether those are bad names is a separate matter.
I heard a great quote on this once, something along the lines of: "Code is written to be read by humans, not computers." If computers were all we cared about, we would still be writing assembler, or 1s and 0s for that matter. We have to consider the people who will be using our code, as an API for example, or the person who comes after us and maintains our code. So, unless the language we are using prohibits it, meaningful, real-word method and variable names should be considered best practice.
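As a tiny Java example of the difference (both method names are invented): the call site with the descriptive name needs no comment, while the abbreviated one sends the reader off to look up what it means.

    public class NamingExample {

        // Abbreviated: the name gives the reader almost nothing to go on.
        static double calcTtlPx(double unitPx, int qty) {
            return unitPx * qty;
        }

        // Descriptive: the call site below reads almost like English.
        static double calculateTotalPrice(double unitPrice, int quantity) {
            return unitPrice * quantity;
        }

        public static void main(String[] args) {
            double a = calcTtlPx(9.99, 3);               // what is being calculated here?
            double total = calculateTotalPrice(9.99, 3); // no comment needed
            System.out.println(a == total);              // prints true - same code, very different readability
        }
    }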
{ "source": [ "https://softwareengineering.stackexchange.com/questions/46894", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/13825/" ] }
46,981
My main area: web development. Of course, I don't expect anybody to give away their 'gold mine' or whatever, but I am struggling to see where I should be advertising my services. I have one other developer I work with and we have a lot of happy clients - on freelance websites. Thing is, freelance websites just seem to suck the life out of you when you're being outbid at ridiculous rates. I want to attract customers who are more concerned about quality and accountability than price. Any suggestions at all? I'm so lost with this. EDIT: Added bounty of 200 - all of my 'reputation'. EDIT: Added second bounty of 50. I did hear of a novel idea: do work for an open-source project and get featured in their 'trusted developers' section, if they have one. Input?
The web is great. We wouldn't be here if it wasn't. However, if you are trying to break into the business of selling services, you might want to consider going lo-fi. Face to face. Boots on the ground. Present yourself/your firm to neighborhood businesses, members of regional chambers of commerce. Become a member of your local chamber. Primarily smaller but well-established firms are what you would look to at first to build a portfolio and reputation. Local medium to larger firms might be an option after you have proven yourself. Many of these firms have done business the old way for a long time, and probably think they are fine without IT or a web presence (because they don't use it, and email is, at the least, a challenge). They will be reluctant to invest in services or infrastructure. However, that doesn't mean the door is closed. Other potential clients may have a very dated or limited presence and might be due for a 'remodel' or a 'makeover' or a 'tune up' -- something the client can relate to in their line of business. Remind them that in the age of cellphones and the web, Google Maps and a web presence have largely replaced the Yellow Pages. If the negotiation of fees is a potential roadblock, consider offering services as a deferred investment -- a very nominal charge for putting the product in the field and supporting it (you need to eat and pay rent just like they do -- demonstrate to them shared values). Consider accepting your investment with them as a limited term of profit or revenue sharing from business generated by your work, or by a derivative of your work, to prove out its potential and value. When the term of investment expires, prepare to transition to a more traditional fee-for-services model (at a rate closer to the market norm). Keep in mind that, depending on the structure of the client's business and the formality of the investment, this may require special tax rates, collection, and reporting. Both you and the client would have to weigh this consideration, so do your homework before presenting this option. Most importantly, aim to keep the terms of your arrangement clear and simple.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/46981", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/9682/" ] }
47,028
I did not find the lawyers' SE site, so I thought it best to post here.

/*
 * ...subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in all
 * copies or substantial portions of the Software.
 *
 * The Software shall be used for Good, not Evil.
 *
 * THE SOFTWARE IS PROVIDED "AS IS"...
 */

This is the 'non-free', Crockford, No-Evil, MIT-style license. This license is considered non-free because of this phrase: "The Software shall be used for Good, not Evil." How could we rewrite this to become a 'free' license, while retaining the original spirit of the sentence?
It's impossible. A requirement of "free" (going by the official open source definition) is to never restrict usage based on endeavor. If you say "you can't use this software to do X", then it's non-free, no matter how evil X is; you're still restricting based on endeavor. Even if you say "You can't use this software to kill a human", it will still be non-free. But in reality, it doesn't matter much. Someone who intends to do evil is not likely to abide by your license anyway (especially if it's a government). See #6 in http://www.opensource.org/osd.html:

"6. No Discrimination Against Fields of Endeavor. The license must not restrict anyone from making use of the program in a specific field of endeavor. For example, it may not restrict the program from being used in a business, or from being used for genetic research. Rationale: The major intention of this clause is to prohibit license traps that prevent open source from being used commercially. We want commercial users to join our community, not feel excluded from it."

The only way I can think of is to add a sentence that's not legally part of the license: "Please don't use this software for $EVIL_PURPOSE".
{ "source": [ "https://softwareengineering.stackexchange.com/questions/47028", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/10204/" ] }
47,032
I have 3 questions about the GPL here: (1) If I use GPL software in my application, but don't modify or distribute it, do I have to release my application under the GPL? (2) What if I modify some software that my application uses? Do I have to release my application under the GPL, or can I just supply the modified software under the GPL's terms? (3) And what if I use GPL software, but don't modify it, can I distribute it with my application? My case in point: I have a PHP framework in which I use the GeSHi library to highlight some output. Because GeSHi is GPL, does my framework have to be GPL? Can I modify GeSHi for particular use cases of my application if I supply the modifications back to the GeSHi maintainers? Can I redistribute my framework with GeSHi?
If I use GPL software in my application, but don't modify or distribute it, do I have to release my application under the GPL?

ANSWER: Your question is a little ambiguous. Two cases: (a) If you do not distribute YOUR APPLICATION, then the answer is No, because you did not distribute your application. For example, if it was for internal use only in your company, then you have no obligation to do anything. (b) If you do distribute YOUR APPLICATION, and you used something GPL as part of your application (even if only linking at run-time to a library) - and even if you do not charge money - and even if you do not change that GPL s/w in any way - then you MUST make the source of YOUR APPLICATION available. Making source available does not mean download. It might be that you must get a written request and you send a photocopy of a listing (see comments: you can't actually send a listing; this was exaggeration to make a point). You are allowed to charge a "reasonable" handling/copying charge. But you cannot escape the obligation to make your own source code available.

What if I modify some software that my application uses? Do I have to release my application under the GPL, or can I just supply the modified software under the GPL's terms?

ANSWER: See above. If you used GPL s/w, then you must make your source code available. This includes the modified GPL code.

And what if I use GPL software, but don't modify it, can I distribute it with my application?

ANSWER: See above. You can distribute it (the GPL code), provided you make your source available.

Because GeSHi is GPL, does my framework have to be GPL?

ANSWER: If you distribute your framework, then YES.

Can I modify GeSHi for particular use cases of my application if I supply the modifications back to the GeSHi maintainers?

ANSWER: You can if you want to. You don't have to. You could modify it, but when you distribute your application you are obliged to make your source available, and also the source for the modifications you made to the library.

Can I redistribute my framework with GeSHi?

ANSWER: You can if you want to. If your application is not distributed with the GPL code and you make users download it separately to make use of it, then your case is a little bit more special and might provoke some argument, but the same principle will most likely ultimately apply: you must make your source available. If you want to avoid these problems, then you need to use things with a different license, or at the very least the LGPL, which will allow run-time calling of libraries without the viral spread of the GPL conditions back to your code. When in doubt you need legal advice. Any advice you get here (from me or anyone else) should be treated fairly carefully. Only a lawyer can give you proper legal advice.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/47032", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/3139/" ] }
47,076
I'm still a student, without much real-life experience in programming. I've never written anything bigger than ~5k lines of code. I've written code in both Flash and Java, and I just can't understand why people are writing web applications like video players (YouTube etc.) in Flash, and not as Java applets. So I want to ask you, programmers with hands-on experience, for some wise words on the subject. I see no real benefit of Flash over applets, while on the other hand, at least to me, it seems buggy as hell. I understand it is faster to make something in Flash, and I can see why it would be a good thing for fast prototyping, but in general, is it worth it? Every time a YouTube video goes berserk, I curse the developers for writing it in Flash. And if you are using Linux with Opera, this happens all the time. So, basically, why are people using Flash, and not applets?
Flash provides a more seamless experience for the user. Java applets are pretty slow, since the Java VM needs to be fired up before they can run. As a website visitor, I hate it when things freeze for a few moments while the Java VM figures itself out. If I'm not mistaken, it also doesn't unload once I navigate away from the website that used it, leaving it hanging around when I didn't really want it to run in the first place. My (admittedly limited) experience with Flash and applet development also tells me that developing an animation in Flash is easier. And then there's history. Microsoft didn't do Java applets any favours by developing their own JVM and making it behave differently from Sun's. As a result, the same applet could work in one browser and not another, which made creating Java applets less viable. Java does have free tools that can be used as opposed to proprietary Flash editors required to make Flash videos, but ultimately its heavy-handed approach makes it inferior.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/47076", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/16488/" ] }
47,176
In my experience interviewing developers, I feel like candidates who've achieved a Masters in Comp Sci tend to be worse programmers on average than those who don't have a Masters. Is that just me, or have others noticed this phenomenon? If so, why would that be the case? UPDATE: I appreciate the thoughtful comments. I think I should have been clearer in the comparison I'm making. Given two candidates who graduated from college around the same time, someone who went on to gain a Masters seems on average to be a worse programmer than someone who spent all their time in industry.
First of all, people with a Master's come in different varieties:

1. A fresh graduate from a Master's program
2. A Ph.D. student who quit the program and left with the Master's
3. Someone who got a Master's years ago, and who has had lots of experience since then
4. Someone who has worked for years and then went back to school to get a Master's
5. Someone who got into a Master's program to get into the country

1) is definitely no worse than a fresh college graduate, and probably better. He may lack real-world experience of working in a team, code management, etc., but he is likely to have a solid foundation. 2) could be problematic. Academia is not about building working systems, it is about getting publications. That is a very different mindset, with a much greater emphasis on algorithms, and much less emphasis on implementation, efficiency, and coding practices. This often leads to very sloppy code. Despite that, there are certainly people who are able to maintain their programming skills through their grad school years, and are also able to switch their mindset and do very well in industry. The trick is to be able to tell the difference between "smart" and "smart and gets things done". 3 and 4 are basically the same, as far as hiring is concerned. 5) Could be anything. Need to look at the history, and talk to the person. Naturally this is all a gross oversimplification. There are many other factors, not the least of which is which school the degree is from. In all cases, you have to talk to the person. Edit: Upon reflection, 3 and 4 are not the same. If someone has a Master's from years ago, and lots of experience after that, then you are getting a solid foundation plus experience. If someone went back to get a Master's after years of working in the industry, then you are getting somebody with lots of experience, who is also willing and able to learn new things.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/47176", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/17066/" ] }
47,197
Lately I have been learning of more and more programmers who think that if they were working alone, they would be faster and would deliver more quality. Usually that feeling is attached to a belief that they do the best programming on their team, and at the end of the day the idea is quite plausible. If they ARE doing the best programming and worked alone (and perhaps even more), the final result would be a better piece of software. I know this idea would only work if you were passionate enough to work 24/7, on a deadline, with great discipline. So after considering the idea and trying to learn a little more, I wonder: are there famous one-man-army programmers that have delivered any (useful) software in the past?
Donald Knuth
{ "source": [ "https://softwareengineering.stackexchange.com/questions/47197", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/394/" ] }
47,323
I want to use an LGPL-licensed library in my app for Microsoft's app marketplace. Is that OK?
Here is an encompassing answer: https://web.archive.org/web/20220712000832/http://answers.google.com/answers/threadview/id/439136.html In short: yes you can. But one important thing to take care of is that the GNU LGPL covered library is dynamically linked, not statically mixed with the main application. It should also be possible to exchange that dynamically linked library for an independently compiled build. Otherwise you have likely intertwined the library and main application code.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/47323", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/17060/" ] }
47,370
Background: I've run through a few tutorials and written some small projects. All is going well enough using Google and StackOverflow . Several times in the last few days I've found myself wondering "what am I missing?" - I feel that I'm still thinking in Java as I write in Python. This question over at StackOverflow is full of tips about what resources to read up on for learning Python, but I still feel that I'm a Java developer with a dictionary (no pun intended) to translate into Python. What I really want to do is refactor my head to be able to write Pythonic Python instead of Java disguised as Python, without losing my Java skills either. So, the crux of my question is: what concepts does a Java dev really need to learn to think Pythonic? This includes anything that needs to be unlearned. Note: I am asking about language concepts, not about language syntax.
A few points in addition to what was already said:

- Python is dynamic. Creation of a class is an executable statement, as is import of a module; it can be made conditional. A class can be altered after creation; this allows for easy metaprogramming and AOP.
- There are no interfaces; duck typing rules. If you desperately need them, there are 'abstract base classes (ABCs)', but usually you don't miss interfaces, since there's no static type checking anyway.
- Though everything is an object, functions come before objects. Having just functions (and no classes) in a module is perfectly fine.
- Everything is a first-class entity. Passing functions as parameters, returning them and assigning to variables is the norm. Ditto for classes. Methods are just functions; you can handle an instance method as if it were a regular function, pass it around, etc.
- Use built-in dicts, sets, lists, and tuples. Lists and dicts are mutable, tuples aren't. All of them are very efficient and syntactically succinct. Get used to returning several values from a function using a tuple (you don't even need parentheses). Get used to replacing complex hierarchies of very simple objects with contraptions made of plain lists, tuples, and dicts ('hashtables'); it simplifies life.
- Python has a fair bit of FP support; learn list comprehensions and then iterators and generators. These help a lot.
- Any operators can be overloaded by defining proper methods, so addition or comparison can return whatever you want. Remember this when working with things like SQLAlchemy.
- There's no null, only None, a full-fledged object. You can print None just fine, etc. Passing None where another instance is expected usually results in an AttributeError, not an NPE, sometimes farther down the execution pipeline.
- Due to the fully dynamic nature of Python, you have nearly no static checks. You can refer to a name that never exists in your program (e.g. a typo), or only gets defined in a particular execution path, and nothing will remind you of it until execution actually hits this reference and a NameError is raised. Be careful with the scope of your variables, and write more unit tests.
- Due to the fully dynamic nature of Python, objects are nearly always malleable. Usually you can add fields and methods even to an instance and thus inadvertently delete or overwrite its state or method set. Be careful assigning attributes. This allows for interesting possibilities, too :)
- There are no symbolic constants, only variables. Check that you don't inadvertently overwrite a 'constant'. If you want to be positively sure that you can't overwrite a constant, use a function or a property (which is a function in disguise).
- Python's threads are good for I/O-bound processing, but not for CPU-bound. Don't try to speed up a computational task by running it in parallel threads.
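A few of these idioms in code form, as a minimal, hypothetical Python sketch (the names and values are made up purely for illustration):

# Tuples for returning several values (no parentheses needed on the return).
def min_max(values):
    return min(values), max(values)

lowest, highest = min_max([3, 1, 4, 1, 5])

# List comprehension instead of an explicit loop.
squares_of_evens = [n * n for n in range(10) if n % 2 == 0]

# A plain dict instead of a small class hierarchy.
person = {"name": "Ada", "languages": ["Python", "C"]}

# Functions are first-class: pass them around like any other object.
def shout(text):
    return text.upper() + "!"

def apply_twice(func, value):
    return func(func(value))

print(lowest, highest)               # 1 5
print(squares_of_evens)              # [0, 4, 16, 36, 64]
print(apply_twice(shout, "hello"))   # HELLO!!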
{ "source": [ "https://softwareengineering.stackexchange.com/questions/47370", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/13293/" ] }
47,416
Almost every developer has to answer questions from the business side like: "Why is it going to take 2 days to add this simple contact form?" When a developer estimates this task, they may divide it into steps:

- make some changes to the Database
- optimize DB changes for speed
- add front end HTML
- write server side code
- add validation
- add client side javascript
- use unit tests
- make sure SEO set-up is working
- implement email confirmation
- refactor and optimize the code for speed
- ...

These may be hard to explain to a non-technical person, who basically sees the whole task as just putting together some HTML and creating a table to store the data. To them it could be 2 hours MAX. So is there a better way to explain why the estimate is high to a non-developer?
You've just done it in your question. Split the task into the individual steps and give estimates for each one. This will show that you've considered all the options and (hopefully) covered all eventualities. If the timescales are too great, you can then discuss which parts (e.g. e-mail confirmation) aren't needed in this case with concrete data, rather than just trying to cram a quart into a pint pot. Do this often enough and you'll hopefully teach them that there's usually more to a development than meets the eye at first glance.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/47416", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/15950/" ] }
47,515
I understand the importance of well-documented code. But I also understand the importance of self-documenting code. The easier it is to visually read a particular function, the faster we can move on during software maintenance. With that said, I like to separate big functions into other smaller ones. But I do so to a point where a class can have upwards of five of them just to serve one public method. Now multiply five private methods by five public ones, and you get around twenty-five hidden methods that are probably going to be called only once by those public ones. Sure, it's now easier to read those public methods, but I can't help but think that having too many functions is bad practice. [Edit] People have been asking me why I think having too many functions is bad practice. The simple answer: it's a gut feeling. My belief is not, for one bit, backed by any hours of software engineering experience. It's just an uncertainty that gave me a "writer's block", but for a programmer. In the past, I have only been programming personal projects. It's just recently that I moved on to team-based projects. Now, I want to ensure that others can read and understand my code. I wasn't sure what would improve legibility. On the one hand, I was thinking of separating one big function into other smaller ones with intelligible names. But there was another side of me saying that it's just redundant. So, I'm asking this to enlighten myself in order to pick the correct path. [Edit] Below, I included two versions of how I could solve my problem. The first one solves it by not separating big chunks of code. The second one does separate things.

First version:

public static void Main()
{
    // Displays the menu.
    Console.WriteLine("Pick your option");
    Console.WriteLine("[1] Input and display a polynomial");
    Console.WriteLine("[2] Add two polynomials");
    Console.WriteLine("[3] Subtract two polynomials");
    Console.WriteLine("[4] Differentiate two polynomials");
    Console.WriteLine("[0] Quit");
}

Second version:

public static void Main()
{
    DisplayMenu();
}

private static void DisplayMenu()
{
    Console.WriteLine("Pick your option");
    Console.WriteLine("[1] Input and display a polynomial");
    Console.WriteLine("[2] Add two polynomials");
    Console.WriteLine("[3] Subtract two polynomials");
    Console.WriteLine("[4] Differentiate two polynomials");
    Console.WriteLine("[0] Quit");
}

In the above examples, the latter calls a function that will only be used once throughout the program's entire runtime. Note: the code above is generalized, but it's of the same nature as my problem. Now, here's my question: which one? Do I pick the first one, or the second one?
"Now multiply five private methods by five public ones, and you get around twenty-five hidden methods that are probably going to be called only once by those public ones."

This is what is known as Encapsulation, which creates a Control Abstraction at the higher level. This is a good thing. This means that anyone reading your code, when they get to the startTheEngine() method in your code, can ignore all of the lower-level details such as openIgnitionModule(), turnDistributorMotor(), sendSparksToSparkPlugs(), injectFuelIntoCylinders(), activateStarterSolenoid(), and all of the other complex, small functions that must be run in order to facilitate the much larger, more abstract function of startTheEngine(). Unless the problem you are dealing with in your code deals directly with one of those components, code maintainers can move on, ignoring the sandboxed, encapsulated functionality. This also has the added advantage of making your code easier to test. For instance, I can write a test case for turnAlternatorMotor(int revolutionRate) and test its functionality completely independently of the other systems. If there is a problem with that function and the output isn't what I expect, then I know what the problem is. Code that isn't broken down into components is much harder to test. Suddenly, your maintainers are only looking at the would-be abstraction instead of being able to dig down into measurable components. My advice is to keep doing what you're doing, as your code will scale, be easy to maintain, and can be used and updated for years to come.
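As a rough sketch of the same decomposition in Python (the engine functions are hypothetical, echoing the made-up names above, and each just returns a string so the example stays runnable):

def open_ignition_module():
    return "ignition module open"

def inject_fuel_into_cylinders():
    return "fuel injected"

def send_sparks_to_spark_plugs():
    return "sparks sent"

def start_the_engine():
    # The high-level function reads as a summary; a maintainer only needs
    # to open the helper that concerns their specific problem.
    return [
        open_ignition_module(),
        inject_fuel_into_cylinders(),
        send_sparks_to_spark_plugs(),
    ]

# Each small function can be exercised in isolation, which is the testing win.
assert inject_fuel_into_cylinders() == "fuel injected"
assert len(start_the_engine()) == 3

The point is not the car metaphor; it is that every helper has a single, nameable job that can be read, tested, and ignored independently.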
{ "source": [ "https://softwareengineering.stackexchange.com/questions/47515", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/17183/" ] }
47,558
As a student studying Computer Science in college, I often hear from friends working on various humanitarian projects, and I want to do something myself. But it seems that programmers don't have as many obvious avenues to help out as, say, doctors or teachers. What are some ways in which programmers can put their talent to use for people in poverty?
Use your talent to earn lots of money , and donate a good part of it. As programmers, we are in the lucky situation to be able to earn more money than we need for our personal needs.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/47558", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2710/" ] }
47,678
The new keyword in languages like Java, Javascript, and C# creates a new instance of a class. This syntax seems to have been inherited from C++, where new is used specifically to allocate a new instance of a class on the heap, and return a pointer to the new instance. In C++, this is not the only way to construct an object. You can also construct an object on the stack, without using new - and in fact, this way of constructing objects is much more common in C++. So, coming from a C++ background, the new keyword in languages like Java, Javascript, and C# seemed natural and obvious to me. Then I started to learn Python, which doesn't have the new keyword. In Python, an instance is constructed simply by calling the constructor, like: f = Foo() At first, this seemed a bit off to me, until it occurred to me that there's no reason for Python to have new , because everything is an object so there's no need to disambiguate between various constructor syntaxes. But then I thought - what's really the point of new in Java? Why should we say Object o = new Object(); ? Why not just Object o = Object(); ? In C++ there's definitely a need for new , since we need to distinguish between allocating on the heap and allocating on the stack, but in Java all objects are constructed on the heap, so why even have the new keyword? The same question could be asked for Javascript. In C#, which I'm much less familiar with, I think new may have some purpose in terms of distinguishing between object types and value types, but I'm not sure. Regardless, it seems to me that many languages which came after C++ simply "inherited" the new keyword - without really needing it. It's almost like a vestigial keyword . We don't seem to need it for any reason, and yet it's there. Question: Am I correct about this? Or is there some compelling reason that new needs to be in C++-inspired memory-managed languages like Java, Javascript and C# but not Python?
Your observations are correct. C++ is a complicated beast, and the new keyword was used to distinguish between something that needed delete later and something that would be automatically reclaimed. In Java and C#, they dropped the delete keyword because the garbage collector would take care of it for you. The problem then is: why did they keep the new keyword? Without talking to the people who wrote the language, it's kind of difficult to answer. My best guesses are listed below:

- It was semantically correct. If you were familiar with C++, you knew that the new keyword creates an object on the heap. So, why change expected behavior?
- It calls attention to the fact that you are instantiating an object rather than calling a method. With Microsoft code style recommendations, method names start with capital letters, so there can be confusion.

Ruby is somewhere in between Python and Java/C# in its use of new. Basically you instantiate an object like this:

f = Foo.new()

It's not a keyword, it's a static method for the class. What that means is that if you want a singleton, you can override the default implementation of new() to return the same instance every time. It's not necessarily recommended, but it's possible.
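Python illustrates the same idea taken to its conclusion: the class itself is the callable factory, and construction can be customized by overriding __new__, much like overriding Foo.new in the Ruby example. A minimal sketch (Python 3, names made up for illustration):

class Foo:
    def __init__(self, value):
        self.value = value

f = Foo(42)   # no 'new' keyword; calling the class creates the instance

class Singleton:
    _instance = None

    def __new__(cls):
        # Construction itself is overridable, so a class can hand back
        # the same instance every time it is "instantiated".
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

a = Singleton()
b = Singleton()
assert a is b   # both names refer to one shared instance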
{ "source": [ "https://softwareengineering.stackexchange.com/questions/47678", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/17252/" ] }
47,695
ASP.NET MVC and Rails have a similar area of use, are built around the same architecture, and both frameworks are relatively new and open source. So as a Rails programmer I'd like to know: what can ASP.NET MVC do that Ruby on Rails can't, and vice versa?
I have developed real applications with both Rails and ASP.NET MVC, but this answer comes with a significant caveat: I learned and developed with pre-version 2 Rails, so it is entirely possible that I am vastly out-of-date with my Rails knowledge. That being said, I don't think that there is anything that can be done with one but not the other. Given any set of requirements for a web application, you should be able to build that app -- probably equally efficiently -- with either Rails or ASP.NET MVC. There are a couple of neat things that -- to the best of my knowledge -- are available in ASP.NET MVC mainly because of aspects of C#/.NET. For example: when I have a page that contains a form that is submitted, I would have an Action that checks to see if it is dealing with a GET or a POST to decide what to do:

def edit
  @item = Item.find(params[:id])
  if request.post?
    @item.update_attributes(params[:item])
    redirect_to :action => 'edit', :id => @item.id
  end
end

This is a trivial example of it, but the if request.post? pattern is an extremely common one in Rails. For non-trivial cases, the Action code can get big and messy, and often I'd wish that I could refactor it into separate methods cleanly. In ASP.NET MVC I can do that:

public ActionResult Edit()
{
    // Render my page that has the Edit form
    ...
}

[HttpPost]
public ActionResult Edit(Foothing foo)
{
    // Save my Foothing data
    ...
}

I think that being able to cleanly separate the handling of GET and POST requests is neat. Your mileage may vary. The other thing that ASP.NET MVC does that is super cool (again, in my opinion) is also related to handling form POSTs. In Rails, I have to query the params hash for all of my form variables. Let's say that I have a form with the fields 'status', 'gonkulated', 'invert' and 'disposition':

def edit
  @item = Item.find(params[:id])
  if params[:status] == "new"
    ...
  else
    ...
  end
  if params[:gonkulated] == "true"
    ...
  else
    ...
  end
  if params[:invert] == "true"
    ...
  else
    ...
  end
  # Rest omitted for brevity
end

But ASP.NET MVC neatly allows me to get all of my form values as parameters to my Action method:

[HttpPost]
public ActionResult Edit(int id, string status, bool gonkulated, bool invert, int disposition)
{
    ...
}

Those are the two things that I really loved about ASP.NET MVC over Rails. They are not enough of a reason for any sane or competent developer to choose one framework over the other.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/47695", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/15369/" ] }
47,778
First, some background on me. I have a PhD in CS and have had jobs both as a software engineer and as an R&D research scientist, both at Very Large Corporations You Know Very Well. I recently changed jobs and interviewed for both types of positions (as I have done in the past). My observation: SW engineer job interviews are way, way disproportionately more difficult than CS researcher job interviews, but the researcher job is higher paying, more competitive, more rewarding, more interesting, and has a higher upside.

Here's a typical interview loop for a researcher:

1. Phone interview to see if my research is in alignment with the lab's research
2. In-person: give a presentation on my recent research for one hour (which represents maybe 9 months' worth of work) and answer questions from the audience
3. In-person one-on-one interviews with about 5 researchers, where they ask me very reasonable questions on my work/publications/patents, including: technical questions, where my work fits into related work, and how I can extend my work to new areas

Here's a typical interview loop for a SW engineer:

1. Phone interview where I'm asked algorithm questions and maybe do some coding. Pretty standard.
2. In-person interviews at the whiteboard where they drill the F*** out of you on esoteric C++ minutiae (e.g. how does a polymorphic virtual function call work), algorithms (make an all-pairs-shortest-path algorithm work for 1B vertices), system design (design a database load balancer), etc. This goes on for six or seven interviews.

Ridiculous. Why would anyone be willing to put up with this? What is the point of asking about C++ trivia or writing code to prove yourself? Why not make the SE interview more like the researcher interview, where you give a talk about what you've done? How are technical job interviews for other fields, like physics, chemistry, civil engineering, mechanical engineering?
It is relatively easy to establish if you are technically competent enough to do the research -- you've got publications the hiring managers can read and those publications probably hint at other folks they can talk with to check you out. Software engineering, on the other hand, is a discipline so packed with incompetent wastes of space one needs to do plenty of due diligence making sure that the guy you are hiring can in fact write the code you are planning to hire him to write.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/47778", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/8402/" ] }
47,789
I was cruising around the programming blogosphere when I happened upon this post about GOTOs: http://giuliozambon.blogspot.com/2010/12/programmers-tabu.html Here the writer talks about how "one must come to the conclusion that there are situations where GOTOs make for more readable and more maintainable code" and then goes on to show an example similar to this:

if (Check#1) {
    CodeBlock#1
    if (Check#2) {
        CodeBlock#2
        if (Check#3) {
            CodeBlock#3
            if (Check#4) {
                CodeBlock#4
                if (Check#5) {
                    CodeBlock#5
                    if (Check#6) {
                        CodeBlock#6
                        if (Check#7) {
                            CodeBlock#7
                        } else {
                            rest - of - the - program
                        }
                    }
                }
            }
        }
    }
}

The writer then proposes that using GOTOs would make this code much easier to read and maintain. I personally can think of at least 3 different ways to flatten it out and make this code more readable without resorting to flow-breaking GOTOs. Here are my two favorites.

1 - Nested Small Functions. Take each if and its code block and turn it into a function. If the boolean check fails, just return. If it passes, then call the next function in the chain. (Boy, that sounds a lot like recursion; could you do it in a single loop with function pointers?)

2 - Sentinel Variable. To me this is the easiest. Just use a blnContinueProcessing variable and check to see if it is still true in your if check. Then if the check fails, set the variable to false.

How many different ways can this type of coding problem be refactored to reduce nesting and increase maintainability?
It is really hard to tell without knowing how the different checks interact. Rigorous refactoring might be in order. Creating a topology of objects that execute the correct block depending on their type could help; a strategy pattern or state pattern might also do the trick. Without knowing what to do best, I would consider two possible simple refactorings that could be further refactored by extracting more methods. The first one I don't really like, since I always prefer as few exit points in a method as possible (preferably one):

if (!Check#1) { return; }
CodeBlock#1

if (!Check#2) { return; }
CodeBlock#2

...

The second one removes the multiple returns but also adds a lot of noise (it basically only removes the nesting):

bool stillValid = Check#1
if (stillValid) { CodeBlock#1 }

stillValid = stillValid && Check#2
if (stillValid) { CodeBlock#2 }

stillValid = stillValid && Check#3
if (stillValid) { CodeBlock#3 }

...

This last one can be refactored nicely into functions, and when you give them good names the result might be reasonable:

bool stillValid = DoCheck1AndCodeBlock1()
stillValid = stillValid && DoCheck2AndCodeBlock2()
stillValid = stillValid && DoCheck3AndCodeBlock3()

public bool DoCheck1AndCodeBlock1()
{
    bool returnValid = Check#1
    if (returnValid) { CodeBlock#1 }
    return returnValid
}

All in all, there are most likely way better options.
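For what it is worth, the early-return variant reads naturally as guard clauses in most languages. A small, hypothetical Python sketch of that shape (the check/step functions stand in for the Check#/CodeBlock# pairs and are made up here):

def check_1(data): return "a" in data
def check_2(data): return "b" in data
def check_3(data): return "c" in data

def step_1(data): data["log"].append("step 1")
def step_2(data): data["log"].append("step 2")
def step_3(data): data["log"].append("step 3")

def process(data):
    # Each guard bails out early instead of adding a level of nesting.
    if not check_1(data):
        return
    step_1(data)
    if not check_2(data):
        return
    step_2(data)
    if not check_3(data):
        return
    step_3(data)

record = {"a": 1, "b": 2, "log": []}
process(record)
print(record["log"])   # ['step 1', 'step 2'] -- processing stops at the failed third check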
{ "source": [ "https://softwareengineering.stackexchange.com/questions/47789", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/16284/" ] }
47,806
I'm looking for good examples of finite state machines; language isn't particularly important, just good examples. Code implementations are useful (generalized pseudo-code), but it's also very useful to gather the various uses of FSM's. Examples don't necessarily need to be computer based, for example Mike Dunlavey's Railroad networks example, is very useful.
A Safe (event triggered)

States: Multiple "locked" states, one "unlocked" state.
Transitions: Correct combinations/keys move you from the initial locked state to locked states closer to unlocked, until you finally get to unlocked. Incorrect combinations/keys land you back in the initial locked state (sometimes known as idle).

Traffic Light (time triggered | sensor [event] triggered)

States: RED, YELLOW, GREEN (simplest example).
Transitions: After a timer, change RED to GREEN, GREEN to YELLOW, and YELLOW to RED. Could also be triggered on sensing cars in various (more complicated) states.

Vending Machine (event triggered, a variation of the safe)

States: IDLE, 5_CENTS, 10_CENTS, 15_CENTS, 20_CENTS, 25_CENTS, etc., VEND, CHANGE.
Transitions: State changes upon insertion of coins or bills, transition to VEND upon the correct amount of purchase (or more), then transition to CHANGE or IDLE (depending on how ethical your Vending Machine is).
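A minimal Python sketch of the traffic-light machine, driven by a plain transition table (the durations are invented for illustration):

# state -> (next_state, seconds_to_stay); timings are arbitrary.
TRANSITIONS = {
    "RED":    ("GREEN", 30),
    "GREEN":  ("YELLOW", 25),
    "YELLOW": ("RED", 5),
}

def run_traffic_light(start="RED", steps=6):
    state = start
    for _ in range(steps):
        next_state, duration = TRANSITIONS[state]
        print(f"{state} for {duration}s, then {next_state}")
        state = next_state

run_traffic_light()

The same table-driven shape works for the safe and the vending machine: the keys simply become (state, event) pairs instead of just the current state.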
{ "source": [ "https://softwareengineering.stackexchange.com/questions/47806", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/3787/" ] }
47,813
This is a question that was put to me many years ago as a graduate in a job interview, and it's nagged at my brain now and again and I've never really found a good answer that satisfied me. The interviewer in question was looking for a black-and-white answer; there was no middle ground. I never got the chance to ask about the rationale behind the question, but I'm curious why that question would be put to a developer and what you would learn from a yes or no answer. From my own point of view, I can read Java, Python, Delphi, etc., but if my manager comes up to me and asks me how far along in a project I am and I say "The code is 80% complete" (and before you start shooting me down, I've heard this uttered in a couple of offices by developers), how exactly is that self-documenting? Apologies if this question seems strange, but I'd rather ask and get some opinions on it to gain a better understanding of why it would be put to someone in an interview.
Partially. Code that uses Big English Words can be partially self-documenting in that the names for all the functions and variables can tell you what it is doing. But it probably won't tell you why. Compare:

a->b;                       # what is b doing? what is the a object?
carEngine->startIgnition;   # much more clear what objects are involved

But you still don't know why a car is being started. Hence, only partially. It's kind of terrible that your interviewer was expecting a black and white answer, unless his view of black and white included a very strong maybe.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/47813", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/15455/" ] }
47,838
Why do we keep using CSV? I recently made a shift to working the health domain and despite the wonderful work in data transfer standards, all data transfer is in CSV , both for reporting to external organisations, and for data migrations when implementing new systems. Unfortunately the use of CSV is the cause of the endless repetition of the same stupid errors, with the same waste of developer time. (bad escaping, failing to handle null fields etc.) I know we can do better, and anything between JSON and XML (depending on the instance) would be fine. (Most of the time this is data going from one MS SQLserver 2005 to another!) I feel as if each time I see this happening I am literally watching one developer waste anothers time. So why do we keep shafting each other? When will we stop?
Let me throw out a few points in favor of CSV:

- CSV is simple(r than any alternative suggested in OP) to implement and parse
- CSV is understood by almost every piece of software on the planet (past and present)
- CSV forces a fairly flat, simple schema (there is a single flat list of fields)
- CSV is more human-readable than XML, JSON, or (UGH!) HL7 (V2.x, pre-xml)
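One practical note on the "same stupid errors": most of the escaping and null-field pain comes from hand-rolled readers and writers rather than from CSV itself. A small Python illustration using the standard csv module (the file name and field names are made up):

import csv

rows = [
    {"name": 'Smith, John "JJ"', "diagnosis": "none", "notes": ""},
    {"name": "O'Brien, Mary", "diagnosis": "flu", "notes": "follow-up\nnext week"},
]

with open("patients.csv", "w", newline="") as handle:
    writer = csv.DictWriter(handle, fieldnames=["name", "diagnosis", "notes"])
    writer.writeheader()
    writer.writerows(rows)   # commas, quotes and embedded newlines are quoted for us

with open("patients.csv", newline="") as handle:
    for row in csv.DictReader(handle):
        print(row["name"], "|", row["diagnosis"], "|", repr(row["notes"]))

Letting a library do the quoting does not fix CSV's flat schema, but it removes the bad-escaping class of bugs the question complains about.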
{ "source": [ "https://softwareengineering.stackexchange.com/questions/47838", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/17289/" ] }