source_id | question | response | metadata
---|---|---|---|
1,371 | Having worked on a failed project is one of the few things that most programmers have in common, regardless of language used, industry or experience. These projects can be great learning experiences, soul-crushing disasters (or both!), and can occur for a multitude of reasons: an upper-management change of heart; an under-skilled / under-resourced team; the emergence of a superior competitor during the dev cycle; over/under management. Once you've worked on a couple of such projects, is it possible to recognise at an early stage exactly when a project is doomed to fail? For me, a big sign is having a hard & fast external deadline combined with feature creep. I've seen projects which were well planned out and proceeding right on schedule go horribly off the rails once the late feature requests started to roll in and get added to the final "deliverable". The proposers of these requests earned the nickname of Columbo, due to rarely leaving the room without asking for "just one more thing". What are the warning signs you look out for that set off the alarm bells of impending doom in your head? | Heroic Coding: Coding late into the night, working long hours, and clocking lots of overtime are a sure sign that something went wrong. Further, my experience is that if you see someone working late at any point in the project, it only ever gets worse. He might be doing it just to get his one feature back on schedule, and he might succeed; however, cowboy coding like that is almost always the result of a planning failure that will inevitably cause more of it soon. So, the earlier in the project you see it, the worse it will eventually become. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/1371",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/141/"
]
} |
1,376 | I have a tester who, while testing, will have an error occur (OK so far), but then he frequently reports it right away. We (the developers) then later find that the tester has not tried to reproduce the issue and (when asked) cannot find a way to make it happen again. Now these are still bugs, I don't want to ignore them. But without repro steps I am kind of stuck. Sometimes there is a stack trace (though frequently it is not useful because this is the Compact Framework and there are no line numbers). But when there is one I can take the stack trace and crack open the code and start guessing, but that does not lead to testable "fixes". What do you do in scenarios like this? | A bug without context is not a bug, it's a fluke. The problem could be your code, it could be a third-party library, it could be the hardware, or it could be solar radiation causing a single bit to flip on its own. If you can't reproduce it with at least some regularity (even if only "it happens once every 10 or 20 times I do X"), it's not much better than your tester telling you "Something somewhere went wrong somehow - fix it". You may have to explain to your tester that his job is not to just generate input until something breaks. If it were, you could replace him with a random number generator. Part of his job is to identify bugs, which entails identifying how to produce them. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/1376",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/71/"
]
} |
1,380 | We are starting a push for code coverage here at my work, and it has got me thinking... How much code coverage is enough? When do you get to the point of diminishing returns on code coverage? What is the sweet spot between good coverage and not enough? Does it vary by the type of project you are making (i.e. WPF, WCF, Mobile, ASP.NET)? (These are C# classes we are writing.) | I'm of the opinion that code coverage alone is a poor metric. It's easy to produce tons of useless tests that cover the code, but don't adequately check the output, or don't test edge cases, for example. Covering code just means it doesn't throw an exception, not that it's right. You need quality tests; the quantity isn't that important. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/1380",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/71/"
]
} |
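To illustrate the answer above about coverage being a poor metric on its own, here is a minimal C# sketch (xUnit assumed; all names are invented for illustration): both tests execute the same method and therefore count toward coverage, but only the second one actually verifies behaviour.

```csharp
using Xunit;

public static class PriceCalculator
{
    // Applies a percentage discount; discounts over 100% are clamped to a zero price.
    public static decimal Apply(decimal price, decimal discountPercent)
    {
        var factor = 1m - discountPercent / 100m;
        return factor < 0 ? 0m : price * factor;
    }
}

public class PriceCalculatorTests
{
    [Fact]
    public void Covers_the_code_but_checks_nothing()
    {
        // Counts toward coverage, yet would pass even if Apply were completely wrong.
        PriceCalculator.Apply(100m, 25m);
    }

    [Fact]
    public void Checks_the_output_and_an_edge_case()
    {
        Assert.Equal(75m, PriceCalculator.Apply(100m, 25m));
        Assert.Equal(0m, PriceCalculator.Apply(100m, 150m)); // over-100% discount clamps to zero
    }
}
```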
1,386 | Sometimes I feel like a musician who can't play live shows. Programming is a pretty cool skill, and a very broad world, but a lot of it happens "off camera" - in your head, in your office, away from spectators. You can of course talk about programming with other programmers, and there is peer programming, and you do get to create something that you can show to people, but when it comes to explaining to non-programmers what it is that you do, or how your day at work was, it's sort of tricky. How do you get the non-programmers in your life to understand what it is that you do? NOTE: this is not a repeat of Getting non-programmers to understand the development process, because that question was about managing client expectations. | Three words: dumb it down. Programming is complex. It takes a lot of work to understand it. And the joys of programming are even more subtle. For me to communicate my successes and such to others (e.g. family) I have to communicate on a more common level. Compare programming to normal real-world things (e.g. an object to a car with a dashboard and seats and so on). It is even better if you know something about your audience, because you can use things that they understand that are more complex than normal everyday concepts. For example, my wife was a school teacher, so I can compare some of my software development processes to teaching processes she had to use. It helps immensely. But in the end you've got to simplify, simplify and simplify some more. And even then, it is hard to get someone to understand how cool a well-crafted class with good unit tests is. :) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/1386",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/92/"
]
} |
1,483 | I've heard it said (by coworkers) that everyone "codes in English" regardless of where they're from. I find that difficult to believe; however, I wouldn't be surprised if, for most programming languages, the supported character set is relatively narrow. Have you ever worked in a country where English is not the primary language? If so, what did their code look like? | I'm Italian and always use English, for both names and comments. But many other Italian programmers use the Italian language, or more often a strange English-Italian mix (something like IsUtenteCopy). A real-life code sample: // Trovo la foto collegata al verbale
tblVerbali rsVerbale;
hr = rsVerbale.OpenByID(GetDBConn(), m_idVerbale);
if( FAILED(hr) )
throw CErrorHR(hr);
hr = rsVerbale.MoveFirst();
if( S_OK != hr )
throw CError(_T("Record del verbale non trovato.")); By the way, the Visual Studio MFC wizard creates a skeleton application with localized comments: BOOL CMainFrame::PreCreateWindow(CREATESTRUCT& cs)
{
if( !CMDIFrameWndEx::PreCreateWindow(cs) )
return FALSE;
// TODO: modificare la classe o gli stili Window modificando
// la struttura CREATESTRUCT
return TRUE;
} | {
"source": [
"https://softwareengineering.stackexchange.com/questions/1483",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/91/"
]
} |
1,533 | If you're a developer (Senior or Lead Developer) and you'd rather stay with code/design than pursue a management career, what are the available career paths at your company, or any you've heard of? How far can you go? Is it possible to continue being a geek until you bite the dust or is that too naive? Are people like Uncle Bob, for example, still considered developers, as they claim? | I am going to go out on a limb here and say something that is not likely to be the answer you want to hear, but if you don't like management, your career path is going to be very limited. If what you like to do is code, and if you are really good at it, and you don't want to stop, then your career path is on a single trajectory: software engineer and then senior software engineer. If others recognize how good you are then their inclination will tend towards putting you in a position where you can transmit your experience to others. In other words, they will want you to manage and/or direct. It is hard to take on that added responsibility without taking on some form of management. If you are an architect and responsible for a system's design, and if you want that design to be successfully implemented, you will need to lead and manage others. If you become a founder of a start-up and you become successful, then chances are at some point you will need to hire someone to help, and then you will need to manage them. If you become a CTO then there is no way in hell you will be able to not manage. That being said, I don't think this question can be answered without understanding what it is about management you don't like. Do you not like managing personnel? Do you not like the idea of coding not being your primary responsibility? Do you not like the burden of responsibility for something's success? The answer to that underlying question will help point you in the right direction. Or to put it another way, a way we should all be thinking about the work we do, is this: what does your ideal job look like? Forget about titles for a second, forget about the company you work for, just focus on your day-to-day life, and those things that will make you truly happy and thrive. Then work to create that position at the place you work, or at a company you build yourself. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/1533",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/212/"
]
} |
1,701 | I am currently reading the recently published Being Geek by Michael "Rands" Lopp and I can't get enough of it. Are there any other career guidance books, aimed directly or indirectly at programmers, that are worth reading? | Code Complete; The Pragmatic Programmer | {
"source": [
"https://softwareengineering.stackexchange.com/questions/1701",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/64/"
]
} |
1,745 | To put it another way... What is the most commonly held and frustrating misunderstanding about programming you have encountered? Which widespread and longstanding myths/misconceptions do you find hard for programmers to dispel/correct? Please explain why this is a myth. | That because you're a programmer, you know how to fix [person]'s virus-ridden machine. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/1745",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/389/"
]
} |
1,750 | I work for a company that supports several languages: COBOL, VB6, C#, and Java. I use those languages for my primary work, but I often find myself coding some minor programs (e.g. scripts) in Python because I find it to be the best tool for that type of task. For example: an analyst gives me a complex CSV file to populate some DB tables, so I use Python to parse it and create a DB script. What's the problem? The main problem I see is that a few parts of these quick and dirty scripts are slowly gaining importance and: my company does not support Python; they're not version controlled (I back them up in another way); my coworkers do not know Python. The analysts have even started to reference them in emails ("launch the script that exports..."), so they are needed more often than I initially thought. I should add that these scripts are just utilities that are not part of the main project; they simply help to get trivial tasks done in less time. For my own small tasks they help a lot. In short, if I won the lottery or were in an accident, my coworkers would need to keep the project alive without those scripts; they would spend more time fixing CSV errors by hand, for example. Is this a common scenario? Am I doing something wrong? What should I do? | You need to get the situation formalised, as it shouldn't really have got to this point. However, these things happen, so you need to explain to your boss that you created these scripts for personal use, but they've "escaped" into wider circulation. Admit (if necessary) that you were at fault for not bringing this to his attention sooner. At the very least the scripts should be put under source control "just in case" - then at least if you aren't available (for whatever reason) your co-workers will have access to the scripts. Then you either need to convince your boss that Python is the way to go for these or accept that you are going to have to re-write them in a supported language. If the cost of documenting the scripts and educating your co-workers in Python is lower than that of the re-write you might even win the argument. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/1750",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/74/"
]
} |
1,752 | In fact this question is about cautions to be taken to enhance the quality of the user experience and reduce avoidable support calls. | A lack of proper input validation is one of those things which tends to lead quite quickly to users doing "bad" things with your application, when it should really be handled by the programmer. I've seen legacy apps where users have been trained to: not enter apostrophes in names; not enter any symbol other than a-z0-9; ensure there are no spaces before or after the text they've entered; check that a correctly formatted email address is being entered into the email field, otherwise subsequent mailings to that user will use whatever's in the field and will fail; make sure "http://" is put before web addresses; etc. All of the above issues are ones which should be handled by an application developer. When your input validation is essentially "make sure the user knows what format this field should be in and trust what they've entered is right", then unexpected things are bound to find their way into the app. Aside from the obvious security implications, users make mistakes. As programmers we often produce our best products by bending over backwards to make sure that the user can't get it wrong, no matter how hard they try! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/1752",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/389/"
]
} |
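A small C# sketch of the kind of validation and normalization the answer above describes (class and method names are hypothetical; MailAddress parsing is used as a rough format check, not a full RFC validator):

```csharp
using System;
using System.Net.Mail;

public static class SignupValidator
{
    // Trim stray whitespace instead of asking users to avoid it.
    public static string NormalizeName(string input) => (input ?? string.Empty).Trim();

    // Accept a bare address and add the scheme, rather than rejecting the input.
    public static string NormalizeUrl(string input)
    {
        var url = (input ?? string.Empty).Trim();
        if (url.Length > 0
            && !url.StartsWith("http://", StringComparison.OrdinalIgnoreCase)
            && !url.StartsWith("https://", StringComparison.OrdinalIgnoreCase))
        {
            url = "http://" + url;
        }
        return url;
    }

    // Reject obviously malformed addresses before they reach the mailing list.
    public static bool LooksLikeEmail(string input)
    {
        var trimmed = (input ?? string.Empty).Trim();
        if (trimmed.Length == 0) return false;
        try
        {
            var addr = new MailAddress(trimmed);
            return addr.Address == trimmed;
        }
        catch (FormatException)
        {
            return false;
        }
    }
}
```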
1,785 | Please stay on technical issues; avoid behavioral, cultural, career or political issues. | The bug is in your code, not the compiler or the runtime libraries. If you see a bug that cannot possibly happen, check that you have correctly built and deployed your program. (Especially if you are using a complicated IDE or build framework that tries to hide the messy details from you ... or if your build involves lots of manual steps.) Concurrent / multi-threaded programs are hard to write and harder to properly test. It is best to delegate as much as you can to concurrency libraries and frameworks. Writing the documentation is part of your job as a programmer. Don't leave it for "someone else" to do. EDIT: Yes, my point #1 is overstated. Even the best-engineered application platforms do have their share of bugs, and some of the less well-engineered ones are rife with them. But even so, you should always suspect your code first, and only start blaming compiler / library bugs when you have clear evidence that your code is not at fault. Back in the days when I did C / C++ development, I remember cases where supposed optimizer "bugs" turned out to be due to me / some other programmer having done things that the language spec says have undefined results. This applies even for supposedly safe languages like Java; e.g. take a long hard look at the Java memory model (JLS chapter 17). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/1785",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/389/"
]
} |
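One way to picture the advice above about delegating concurrency to libraries is this C# sketch (purely illustrative): the thread-safe collection and Parallel.ForEach replace hand-rolled locking, which is where subtle bugs usually creep in.

```csharp
using System.Collections.Concurrent;
using System.Threading.Tasks;

public static class WordCounter
{
    // A hand-rolled version would need a lock around every dictionary access;
    // ConcurrentDictionary pushes that responsibility into the library.
    public static ConcurrentDictionary<string, int> Count(string[] words)
    {
        var counts = new ConcurrentDictionary<string, int>();
        Parallel.ForEach(words, word =>
            counts.AddOrUpdate(word, 1, (_, current) => current + 1));
        return counts;
    }
}
```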
1,849 | If you've always loved unit testing, good for you! But for the unfortunate ones who weren't born with a liking for it, how have you managed to make this task more enjoyable ? This is not a "what is the right way to unit test" question. I simply want to know little personal tricks that reduce the boredom (dare I say) of writing unit tests. | Firstly, I agree with you - if you are writing your unit tests on already completed code, or you are manually unit testing your code, I find that extremely boring too. I find there are two ways of unit testing for me that really make it enjoyable: By using Test Driven Development (TDD) - writing the tests first allows me to think about the next piece of functionality or behaviour that I need in my code. I find driving towards my end goal in tiny steps and seeing tangible progress towards that goal every few minutes extremely rewarding and enjoyable. When there are bugs, rather than going straight to the debugger, it's a fun challenge to figure out a way to write a failing unit test that reproduces the bug. It's extremely satisfying to finally figure out the circumstances that make your code fail, then fix it and watch the bar turn green for the new failing test (and stay green for all of your existing tests). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/1849",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/153/"
]
} |
1,890 | What are the best-practice, most commonly accepted naming conventions for private variables in C#? private int myInteger; private int MyInteger; private int mMyInteger; private int _myInteger; private int _MyInteger; Mysterious other option Which do you use and why? (My company is fairly new to C# and I would like to pick the most "industry accepted" method to try and get into our coding standard.) | The MSDN class design guidelines http://msdn.microsoft.com/en-us/library/ta31s3bc.aspx recommend option 1 - myInteger. I have always used this style. I have a personal dislike for the _ character. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/1890",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/71/"
]
} |
1,997 | People make mistakes, even in real life... Which should we, geeky programmers, avoid? | Learn that what constitutes "an acceptable degree of precision" to you is "annoying goddamn nitpicking" to most of the world. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/1997",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/666/"
]
} |
2,051 | See title, but I am asking from a technical perspective, not Take my 40 year old virgin niece on a date or you're fired. | To market Neal Stephenson's sci-fi thriller Snow Crash, I was asked to write a "benign" computer virus. It would "benignly" pretend to take over the user's computer and replace the screen with snow, a.k.a., a "snow crash." After a minute or so of snow, the snow would fade out and be replaced by an advertisement for the book. This would be "benign," you see. The virus would spread through normal means, but nobody would mind because after taking over their computer "you'd just get a fun ad and then be relieved that nothing bad happened to your computer." I was actually told to do this at a major worldwide corporation. I had to write a memo explaining all the laws this would break and all 17 bad things that could happen if they really made me implement this. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/2051",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/719/"
]
} |
2,164 | So I'm sure everyone has run into this person at one point or another: someone catches wind of your project or idea and initially shows some interest. You get to talking about some of your methods, and usually around this time they interject, stating how you should use method X instead, or just use library Y. Not as a friendly suggestion, but bordering on a commandment, often repeating the same advice over and over like an overzealous parrot. Personally, I like to reinvent the wheel when I'm learning, or even just for fun, even if it turns out worse than what's been done before. But this person apparently cannot fathom recreating ANY utility for such purposes, or possibly trying something that doesn't strictly follow traditional OOP practices, and will settle for nothing except their sense of perfection, and thus naturally heave their criticism sludge down my ears full force. To top it off, they eventually start justifying their advice by listing all the incredibly complex things they've coded single-handedly (usually along the lines of "trust me, I've made/used program X for a long time, blah blah blah"). Now, I'm far from being a programming master, I'm probably not even that good, and as such I value advice and critique, but I think advice/critique has a time and place. There is also a big difference between being helpful and being narcissistic. In the past I probably would have used a somewhat stronger George Carlin-style dismissal, but I don't think burning bridges is the best approach anymore. Do you have any advice on how to deal with this kind of verbal flogging? | Don't just let them talk. Get them in front of a keyboard. The phrase "OK, show me" should do it. My experience is most blowhards aren't that great, and when they actually try to do what they say, it doesn't work and things get real quiet. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/2164",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/895/"
]
} |
2,192 | What things tend to slow a developer down? Please try to refrain from posting answers that: are slow now but useful in the future (TDD, refactoring, ...); list a distraction. | Oh, this one's easy: Meetings. More meetings. Meetings about the last meeting. Meetings to prepare for the upcoming meeting. Developing a PowerPoint presentation for a meeting. Developing a PowerPoint presentation for a meeting discussing features that haven't been implemented, shouldn't be implemented, and for whatever reason that guy from sales will jump all over. I can't predict what document you want displayed in the app based upon your current location without an internet connection or access to your hard drive. No really, just give up asking for it too. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/2192",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/666/"
]
} |
2,331 | Please explain why, and list which languages have the (mis)feature implemented, as far as you know. Post what you consider a harmful feature, not what you dislike. | Allow null by default - the "trillion"* dollar mistake. Sorry, Tony Hoare.
Almost every language available on the planet. Tony Hoare explains. *I adjusted the expression coined by Tony Hoare to reflect the actual loss these days :-) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/2331",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/389/"
]
} |
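As one illustration of how newer languages try to walk back null-by-default (a mitigation I'm adding here, not something the quoted answer prescribes), C# 8's nullable reference types make optional values explicit at compile time; the class below is a hypothetical sketch.

```csharp
#nullable enable
using System;

public class Customer
{
    // With nullable reference types enabled, the compiler warns if Name could be left null.
    public string Name { get; }

    // Optional data is declared explicitly, so callers are forced to check it.
    public string? Nickname { get; }

    public Customer(string name, string? nickname = null)
    {
        Name = name ?? throw new ArgumentNullException(nameof(name));
        Nickname = nickname;
    }

    public string Greeting() =>
        Nickname is null ? $"Hello, {Name}" : $"Hello, {Nickname}";
}
```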
2,654 | In response to This Question, I ask: What are the best parts of your coding standard? What are the best practices that help with code quality, reliability, maintainability, readability, etc.? Please include the language, the item from the standard, and the reason it improves your code. | All Languages: Write readable code instead of comments. A comment followed by a block of code can be replaced by a method which states the intent just as well as the comment, and makes the code more modular and reusable as
well. It makes refactoring happen more often. It helps us write simple, readable, clean code. Readable code is a joy to work with. It tends to make methods short and sweet. It avoids comments getting out of sync with the code. It challenges you to rewrite commented code that is hard to understand. Compare this: public void update() {
// Fetch the data from somewhere
lots of lines of;
code;
for;
fetching;
data;
from somewhere;
// Sort the data
more lines of;
code;
which sorts;
stuff;
around;
a bit and then;
// Update the database
lines of code;
which uses;
some lib;
to update;
using iteration;
and logic;
the database;
done;
} With this version where comments are replaced with function calls: public void update() {
data = fetchData();
sorted = sortResults(data);
updateDatabase(sorted);
} | {
"source": [
"https://softwareengineering.stackexchange.com/questions/2654",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/145/"
]
} |
2,699 | This is a "Share the Knowledge" question. I am interested in learning from your successes and/or failures. Information that might be helpful... Background: Context: Language, Application,
Environment, etc. How was the bug identified? Who or what identified the bug? How complex was reproducing the bug? The Hunting. What was your plan? What difficulties did you encounter? How was the offending code finally found? The Killing. How complex was the fix? How did you determine the scope of the fix? How much code was involved in the fix? Postmortem. What was the root cause technically? Buffer overrun, etc. What was the root cause from 30,000 ft? How long did the process ultimately take? Were there any features adversely affected by the fix? What methods, tools, motivations did you find particularly helpful? ...horribly useless? If you could do it all again
? These examples are general, not applicable in every situation and possibly useless. Please season as needed. | It was actually in a 3rd-party image viewer sub-component of our application. We found that 2-3 of the users of our application would frequently have the image viewer component throw an exception and die horribly. However, we had dozens of other users who never saw the issue despite using the application for the same task for most of the work day. Also there was one user in particular who got it a lot more frequently than the rest of them. We tried the usual steps: (1) Had them switch computers with another user who never had the problem to rule out the computer/configuration. - The problem followed them. (2) Had them log into the application and work as a user that never saw the problem. - The problem STILL followed them. (3) Had the user report which image they were viewing and set up a test harness to repeat viewing that image thousands of times in quick succession. The problem did not present itself in the harness. (4) Had a developer sit with the users and watch them all day. They saw the errors, but didn't notice them doing anything out of the ordinary to cause them. We struggled with this for weeks trying to figure out what the "Error Users" had in common that the other users didn't. I have no idea how, but the developer in step (4) had a eureka moment on the drive in to work one day worthy of Encyclopedia Brown. He realized that all the "Error Users" were left-handed, and confirmed this fact. Only left-handed users got the errors, never righties. But how could being left-handed cause a bug? We had him sit down and watch the left-handers again, specifically paying attention to anything they might be doing differently, and that's how we found it. It turned out that the bug only happened if you moved the mouse to the rightmost column of pixels in the image viewer while it was loading a new image (an overflow error because the vendor had an off-by-one calculation in the mouseover event). Apparently, while waiting for the next image to load, the users all naturally moved their hand (and thus the mouse) towards the keyboard. The one user who happened to get the error most frequently was one of those ADD types that compulsively moved her mouse around a lot impatiently while waiting for the next page to load, thus she was moving the mouse to the right much more quickly and hitting the timing just right so she did it when the load event happened. Until we got a fix from the vendor, we told her just to let go of the mouse after clicking (next document) and not touch it until it loaded. It was henceforth known in legend on the dev team as "The Left-Handed Bug" | {
"source": [
"https://softwareengineering.stackexchange.com/questions/2699",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/497/"
]
} |
2,715 | Should curly braces be on their own line or not? What do you think about it? if (you.hasAnswer()) {
you.postAnswer();
} else {
you.doSomething();
} or should it be if (you.hasAnswer())
{
you.postAnswer();
}
else
{
you.doSomething();
} or even if (you.hasAnswer())
you.postAnswer();
else
you.doSomething(); Please be constructive! Explain why, share experiences, back it up with facts and references. | For a long time I argued that they were of equal worth, or so very close to equal that the possible gain by making the right choice was far, far, below the cost of arguing about it. Being consistent is important , though. So I said let's flip a coin and get on to writing code. I've seen programmers resist change like this before. Get over it! I've switched many times in my career. I even use different styles in my C# than in my PowerShell. A few years ago I was working on a team (~20 developers) that decided to ask for input, and then make a decision, and then enforce that across all the code base. We'd have 1 week to decide. Lots of groans & eye-rolling. Lots of "I like my way, because it's better" but no substance. As we were studying the finer points of the question, someone asked how to deal with this issue in brace-on-the-same-line style: void MyFunction(
int parameterOne,
int parameterTwo) {
int localOne,
int localTwo
} Note that it's not immediately obvious where the parameter list ends, and the body begins. Compare to: void MyFunction(
int parameterOne,
int parameterTwo)
{
int localOne,
int localTwo
} We did some reading on how folks around the world had dealt with this problem, and found the pattern of adding a blank line after the open brace: void MyFunction(
int parameterOne,
int parameterTwo) {
int localOne,
int localTwo
} If you're going to make a visual break, you may as well do it with a brace. Then your visual breaks become consistent, too. Edit : Two alternatives to the 'extra blank line' solution when using K&R: 1/ Indent the function arguments differently from the function body 2/ Put the first argument on the same line as the function name and align further arguments on new lines to that first argument Examples: 1/ void MyFunction(
int parameterOne,
int parameterTwo) {
int localOne,
int localTwo
} 2/ void MyFunction(int parameterOne,
int parameterTwo) {
int localOne,
int localTwo
} /Edit I still argue that consistency is more important than other considerations, but if we don't have an established precedent , then brace-on-next-line is the way to go. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/2715",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/666/"
]
} |
2,776 | The Joel Test is a well known test for determining how good your team is. What do you think about the points? Do you disagree with any of them? Is there anything that you would add? | Jeff Atwood has The Programmer's Bill of Rights . From the post: Every programmer shall have two monitors Every programmer shall have a fast PC Every programmer shall have their choice of mouse and keyboard Every programmer shall have a comfortable chair Every programmer shall have a fast internet connection Every programmer shall have quiet working conditions This seems to have some items that I'd like to see on Joel's list. Specifically in the area of hardware (dual monitor, fast PC, mouse/keyboard, comfortable chair, fast connection). The only thing not mentioned is having a comfortable and adjustable desk . This could all be added by changing: Current #9: Do you use the best tools money can buy? to Improved #9: Do you use the best tools and equipment money can buy? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/2776",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/86/"
]
} |
2,777 | I have heard a lot of people mention Code Complete as a book worth reading. Unfortunately, I am so busy that I don't have time to read it, so can anyone tell me what the key points of the book are? | Code Complete is about software craftsmanship; it is an advanced-beginner/intermediate-level book, written for the working programmer, but it would still be very useful to someone who's been programming for at least a year. Thus the key points of Code Complete (2nd ed.) are nicely summarized in its Chapter 34, Themes in Software Craftsmanship. As paraphrased from my notes: Conquer Complexity: reduce the cognitive load on your mind via discipline, conventions, and abstraction. Pick Your Process: be conscious of quality from start (requirements) to finish (deployment) and beyond (maintenance). Write Programs for People First, Computers Second: code readability is hugely important for comprehensibility, review-ability, error-rate, error-correction, modifiability, and the consequent development time and quality. Program into Your Language, Not in It: think of the What? and Why? before the How? Focus Your Attention with the Help of Conventions: conventions manage complexity by providing structure where it's needed, so that the ultimate resource - your attention - can be effectively used. Program in Terms of the Problem Domain: work at the highest level of abstraction possible; top-level code should describe the problem being solved. Distinguish OS level, programming language level, low-level implementation structures, low-level problem domain terms, and finally, high-level problem-domain terms that would make total sense to the (non-coder) user. Watch for Falling Rocks: as programming merges art and science, good judgement is vital, including heeding warning signs. Iterate, Repeatedly, Again and Again: iterate requirements, design, estimates, code, code tuning. Thou Shalt Render Software and Religion Asunder: be eclectic and willing to experiment. Don't be an inflexible zealot; it precludes curiosity and learning. Go beyond having just a hammer in your toolbox. But the most important take-aways are in Chapter 33, Personal Character: once you consciously seek to improve as a coder, you can and will. The fastest way to do so is to take on the attitudes of master coders (humility, curiosity, intellectual honesty, discipline, creativity), while also practicing their habits (many good habits are listed in the book, e.g. choosing good variable/value names). Also, the book makes clear that the gap between average and excellent in software is immense; that fact alone should drive the conscientious coder to better himself. That's the short of it; the long version is in the book. :) I can also send you my not-so-long, not-so-short notes if you want more details. But the book is certainly money and time well spent, even if the writing style is tiresome at times. Beyond Code Complete, I'd highly recommend The Pragmatic Programmer. It's for intermediate-level programmers, nicely written and a great mix of high, medium, and low-level advice. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/2777",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/86/"
]
} |
2,948 | How valuable (or not) do you think daily stand-up meetings are? If you're not familiar with the practice, it refers to a daily meeting used by Scrum adherents (and some other agile methodologies). The idea is that you hold a daily meeting, timeboxed to 15 minutes, in which everyone must stand (to encourage people to be to the point). In the meeting, you go around the room and each say:
- What you did yesterday
- What you plan to do today
- Any blockers or impediments to your progress. Do you think this practice has value? Has anyone worked at a place that's done it, and what did you think? | We had daily standups at my first job. Well, with all the co-ops/interns/temps, it was actually on the long side - usually around 30 minutes. But the idea of a short, timeboxed, daily meeting helped a lot just to know what other people were stuck on - and if it was something I was working on, I could reprioritize my tasks to finish what they needed to continue sooner. It also gave everyone a chance to know what everyone was working on so if someone had an emergency, everyone was at least aware of what was going on - reducing a truck factor is always a good thing. Honestly, every day might be a little extreme in some cases. But the idea of short, regular meetings for everyone to stay on the same page is a valuable addition to any process. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/2948",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6/"
]
} |
3,199 | Are different version naming conventions suited to different projects? What do you use and why? Personally, I prefer a build number in hexadecimal (e.g. 11BCF), which should be incremented very regularly. And then for customers a simple 3-digit version number, i.e. 1.1.3. 1.2.3 (11BCF) <- Build number, should correspond with a revision in source control
^ ^ ^
| | |
| | +--- Minor bugs, spelling mistakes, etc.
| +----- Minor features, major bug fixes, etc.
+------- Major version, UX changes, file format changes, etc. | I tend to follow Jeff Atwood's opinion of the .NET convention of version numbering . (Major version).(Minor version).(Revision number).(Build number) More often than not, for personal projects, I find this to be overkill. The few times where I have worked on substantial projects like search engines in C# I've stuck to this convention and have been able to use it as an internal tracker effectively. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/3199",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/96/"
]
} |
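A small C# sketch of working with the .NET convention mentioned in the answer above (note that the framework's own System.Version names the four parts Major.Minor.Build.Revision, so the last two are ordered differently from the wording quoted above); the hexadecimal build number from the question fits in directly.

```csharp
using System;

class VersionDemo
{
    static void Main()
    {
        // Four components: Major.Minor.Build.Revision; 0x11BCF is the hex
        // build counter from the question, stored here as the last component.
        var current = new Version(1, 2, 3, 0x11BCF);

        Console.WriteLine(current);          // 1.2.3.72655
        Console.WriteLine(current.Major);    // 1
        Console.WriteLine(current.Revision); // 72655

        // Comparisons work out of the box, which is handy for update checks.
        Console.WriteLine(current > new Version(1, 2, 2, 0)); // True
    }
}
```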
3,233 | As an entrepreneur/programmer who makes a good living from writing and selling software, I'm dumbfounded as to why developers write applications and then put them up on the Internet for free. You've found yourself in one of the most lucrative fields in the world. A business with a 99% profit margin, where you have no physical product but can name your price; a business where you can ship a buggy product and the customer will still buy it. Occasionally some of our software will get a free competitor, and I think: this guy is crazy. He could be making a good living off of this but instead chose to make it free. Do you not like giant piles of money? Are you not confident that people would pay for it? Are you afraid of having to support it? It's bad for the business of programming because now customers expect to be able to find a free solution to every problem. (I see tweets like "is there any good FREE software for XYZ? or do I need to pay $20 for that".) It's also bad for customers because the free solutions eventually break (because of a new OS or what have you) and since it's free, the developer has no reason to fix it. Customers end up with free but stale software that no longer works and never gets updated. Customer cries. Developer still working day job cries in their cubicle. What gives? PS: I'm not looking to start an open-source / "software should be free" kind of debate. I'm talking about when developers make a closed-source application and make it free. | Sharing: Most of us make use of software that has been provided to us free of charge. As a result, it makes sense to share our own software free of charge as well. Basically, we are exchanging our software for the other free software but without the overhead of actually going through a transaction. There will be leeches who do not contribute, but since distribution is so cheap, that does not matter. Selling is Hard: Actually trying to sell software makes the process much more difficult as you have to market, collect money, and worry about the legal ramifications of selling to people. For a lone programmer this takes them away from what they really want to be doing. As a result they may release their program simply so that other people can benefit even if they cannot. A New Model: It might be argued that a new model of software development is arriving. The model of selling software is an attempt to take physical-world selling and apply it to software. However, software is not like the physical world. Because distribution is so cheap, a couple of issues arise. Letting someone use your software is basically free for you. Attempting to prevent people who haven't paid for the software from using it is really expensive. Under this view, attempting to charge per copy of the software is a losing game. Thus you should attempt to make money on software-related services, not software itself. Thus you might charge for a support contract, hosting services, etc. rather than the right to use the software itself. Incidentally, this model is used by webcomics, web series, etc. which give the primary product away for free and sell related merchandise. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/3233",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1223/"
]
} |
3,272 | How would you, as someone involved in the hiring process (manager, interviewer, etc.), feel about a candidate that has changed jobs every 1-2 years? Update: Thanks for all the input everybody, some really great responses, and good info in every post. I asked it because I'm currently at my 3rd job in the last 5 years and I'm feeling like my position is going nowhere (like the position should have been contract in the first place, not full-time). My only options here seem to be transitioning to a different team to do something I'm not really interested in or looking for new work, but I'm a little afraid my recent job history is all short stints. | It depends on the context: In a startup culture (like Silicon Valley), one to two years is the lifetime of many companies, and it's expected you'd be switching your place of employment that often. If you're a contract worker, a contract may only be a short, set timespan. Everywhere else, one to two years is an unusually short stay at a company. In any context, employers are generally looking for a person who's going to be in it for the long haul, whatever the long haul is for the company: Startups are looking for someone who will last until the exit: acquisition, IPO, shuttering, etc. Contract hires should be able to successfully complete their contracts to term. Other companies are looking for an employee who will last long enough to make a return on the investment of hiring them: this can take several years. It's a red flag to potential employers if you're constantly leaving your job for personal reasons, even if you have perfectly valid reasons. I'd also note that having experience in one context isn't necessarily going to translate to another. For example, if you're a life-long contract worker, it can look just as unappealing to a company looking to hire full-time employees as someone who went from regular job to regular job. Similarly, a person who stayed at a job for 10 years might be unappealing to a startup that wants people who are constantly looking for the next big thing. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/3272",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/719/"
]
} |
3,277 | Today I found a GPLed project on SourceForge whose executables are spreading a virus. This fact has been pointed out several times in reviews of the project and the infected executable is still available for download. Apparently, older executables are not infected, so the project itself does not seem to be made with malicious purpose in mind.
There is no obvious way to contact the developers, and the forums for the project are dead.
What should I do? | If you can't get in touch with the developers, then contact SourceForge. Report the problem, give them detailed information they can use to verify the issue, and they'll (probably) take it down. They're a reputable site and I imagine they wouldn't want to be associated with malware. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/3277",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/98/"
]
} |
3,317 | What's the difference in this terminology? Is one considered more professional than the other? | While the terms can be and often are interchangeable, I view a developer as someone who's involved in the whole process from requirements gathering, through specification and coding to testing and, yes, support. They might not be fully involved in all stages all of the time. A programmer is someone who just concentrates on the coding and has little involvement in the rest of the process. This may be their choice of course. As @sunpech points out in his comment most people writing software these days are (or should be) developers. You have to know much more than just how to code to write good software. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/3317",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1596/"
]
} |
3,450 | Long winding if conditions should be avoided if at all possible, yet sometimes we all end up writing them. Even if it's a very simple condition, the involved statements are sometimes simply very wordy, so the whole condition ends up being very lengthy. What's the most readable way to format those? if (FoobarBaz::quxQuux(corge, grault) || !garply(waldo) || fred(plugh) !== xyzzy) {
thud();
} or if (
FoobarBaz::quxQuux(corge, grault)
|| !garply(waldo)
|| fred(plugh) !== xyzzy
) {
thud();
} or if (FoobarBaz::quxQuux(corge, grault)
|| !garply(waldo)
|| fred(plugh) !== xyzzy) {
thud();
} or thudable = FoobarBaz::quxQuux(corge, grault);
thudable ||= !garply(waldo);
thudable ||= fred(plugh) !== xyzzy;
if (thudable) {
thud();
} or any other preferences? | Often, a long if condition is the sign of code that needs refactoring, but sometimes you can't avoid it. In those cases, I prefer the first: if (bar || baz || quux) { ... } Because you're able to tell what's going on with one line. However, I'd much rather do something like this, when possible: function foo() {
return bar || baz || quux;
}
if (foo()) { ... } | {
"source": [
"https://softwareengineering.stackexchange.com/questions/3450",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/594/"
]
} |
3,519 | I am a C++ developer with some good experience in it. When I try to learn a new language (I have tried Java, C#, Python, and Perl so far) I usually pick up a book and try to read it. But the problem with this is that these books typically start with some very basic programming concepts such as loops, operators, etc., and it starts to get very boring soon. Also, I feel I would get only theoretical knowledge without any practical knowledge of writing the code. So my question is: how do you tackle these situations? Do you just skip the chapters if they explain something basic? Also, do you have some standard set of programs that you will try to write in every new programming language you try to learn? | Basically by writing code in that language. You need to have a good example application to study/modify, otherwise you're starting off on the wrong foot and you might never recover. Years ago the company I worked for at the time decided to use Ada for their next product, but as all the developers used FORTRAN in the previous product, we ended up creating FORTRAN constructs in Ada. We never really recovered from that. Having access to the documentation and Stack Overflow is essential, otherwise you'll potentially miss the important features of the language. On that score, find out who the gurus in the language are and read their blogs; these will often discuss the new features of a language/framework and also the obscurer areas you'll never find by yourself. If you can't find out who they are, ask here! In an ideal world I'd like to learn by myself for a while and then be evaluated, but I've never managed that yet. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/3519",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/123/"
]
} |
3,558 | At some point in time, I just stopped coding for fun. I used to go to work, finish my assignments and then upon arriving home I'd go and write stuff on the side for fun. However, I now just go home and try to avoid the computer. I'd rather read the paper, watch TV, go out to the bar, etc. Is this a bad sign? I mean I still try to keep up on the latest trends, hit up the developer forums/blogs/etc but I haven't said, "I want to learn language X - I wonder if I could write app Y in it" Has this happened to anyone else? | This is a very common issue called burn-out. It happens to everyone that takes their work seriously. My advice is to take a few weeks off from coding and plan a long term project for fun. Then set aside at least 15 minutes each night to complete a part of the project. As long as you take it slow you'll be back in the game in no time. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/3558",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/657/"
]
} |
3,645 | I am a computer science student and am learning Java nowadays. I want to be a good developer/programmer. I like reading books. I search the internet for related topics and study them. I refer to Stack Overflow and other good programming websites daily, but I code rarely. Is this a bad sign? If yes, then what should I do to overcome this problem? | Experience trumps all; if you aren't getting experience then yes, you definitely have a problem if you want to be a great programmer. Start on a new project or join another person's open source project. Get some experience. Write some code. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/3645",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1373/"
]
} |
3,851 | How would you determine that a programmer is bad at what he or she is doing? If possible... How should he/she improve? | When they fail to learn from their mistakes and from peer reviews. We are all green at some point; however, if you're not getting better or attempting to get better, then you're a bad programmer. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/3851",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/666/"
]
} |
3,918 | What should you do if a co-worker is editing your code? Not to add functionality or fix bugs, just to change how it looks... | Talk to them about it. Go into the conversation with the attitude of "They're not doing this to annoy me or because they have some form of obsessive-compulsive disorder; they're trying to make my code better." Because you could be wrong. That could be a subtle bug fix and you just didn't spot it. Or, it could be that there's a coding standard you don't know about that you're violating, and they're just correcting it. Or, it could be that they're trying to annoy you, or they have some form of obsessive-compulsive disorder. If that's the case, ask them nicely to stop, and if that doesn't work, take it up with your boss. But you'll never know unless you ask. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/3918",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/666/"
]
} |
3,956 | In Windows the default way is registry. This allow you to differentiate system-wide and per-user settings. In Unix you should use text files in the /etc folder for system-wide settings (what's the convention for per-user settings?). Many new programs (and especially those designed for being portable) use XML files. What's the best way (and location) to store non-BLOB settings? Should we follow each system default or have a unified solution? And what's the best portable way? | What's the best way (and location) to store non-BLOB settings? On Windows, it seems acceptable to use the registry. In my opinion, the registry was a poorly-devised system, and instead a simple text file in the Users\Username\AppData directory should be preferred. This is easier to back up, less dangerous for users to modify, and easier to clean up. On Linux and most Unixes, The preferred location is /home/user/.config/appname for user-specific settings and /etc/ for global (system-wide) settings. The less-preferred (but acceptable) location for user settings is ~/.appname , but this is generally falling out of favor. These files should be user-editable, so a human-readable format is always preferred. I disagree with most people that XML is an acceptable format for storing non-blob data. It is, in my opinion, an overwrought and excessively complex format for what usually ends up being very small pieces of structured data. I prefer to see files in YAML, JSON, ASN.1, name=value pairs, or similar formats. Having too much syntax makes it too easy for a user to mess up and leave the file in an invalid format. Should we follow each system default or have a unified solution? That is entirely up to you, but keep some things in mind: Platforms like *nix have strict limitations on which locations are writable. More strict than Windows. So: The only place you should write to anything is in the user's home directory. Unless your application is a system service; in which case, all mutable data files should be written in /var/ . Nonmutable data files should be kept in your app directory in /usr/share/ or /usr/local/share/ or /opt/ Configuration files in /etc/ should never be written to by the application when it is running, even if it has write access to them. /etc/ should be the repository for default behaviors and nothing else. Plan for your application to be installed in one of three places: /usr/local/ , /opt/appname , or /home/username/appname . Blobs should be stored alongside other configuration files if they are to be changed. It is generally preferable to use a user-editable format, so something like SQLite or Berkeley DB is preferred (since there are command-line tools for each), but not required. On Windows, your applications should only ever write in the User directory. The standardized location for data files is Users\User\AppData . Nowhere else seems acceptable. On Mac OS X, your application settings should be stored in ~/Library/Preferences along with all of the other applications' plist files. plist seems to be the preferred format, but you'll want to double-check with the Apple guidelines. And what's the best portable way? There is no "best," to be honest. There are only platform-specific limitations and expectations. My recommendation is to stick with platform-specific means, even if it means writing more code. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/3956",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/11/"
]
} |
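A rough C# sketch of resolving the per-user locations described in the answer above (assumes .NET 5+ for OperatingSystem.IsWindows; the macOS plist case would be handled separately along the lines given in the answer, and the lowercase directory name on Linux is just a common convention):

```csharp
using System;
using System.IO;

public static class ConfigLocator
{
    // Resolves a per-user settings directory: %APPDATA%\AppName on Windows,
    // $XDG_CONFIG_HOME/appname (falling back to ~/.config/appname) on Linux.
    public static string GetUserConfigDir(string appName)
    {
        if (OperatingSystem.IsWindows())
        {
            var appData = Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData);
            return Path.Combine(appData, appName);
        }

        var xdg = Environment.GetEnvironmentVariable("XDG_CONFIG_HOME");
        if (string.IsNullOrEmpty(xdg))
        {
            var home = Environment.GetFolderPath(Environment.SpecialFolder.UserProfile);
            xdg = Path.Combine(home, ".config");
        }
        return Path.Combine(xdg, appName.ToLowerInvariant());
    }
}
```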
4,107 | I was browsing some old code that I wrote (first year at university) and noticed that I used to write comment titles preceding various parts of the code. Stuff like (this is from a Monopoly game): /*Board initialization*/
...code...
/*Player initialization*/
...code...
/*Game logic starts here*/
/*Displaying current situation*/
...code...
/*Executing move*/
...code...
/*Handle special event*/
...code...
/*Commit changes, switch to next player*/
...code... This might be redundant, and arguably unnecessary if the code is really super clear, but as I scanned through the file it surprised me how strongly I felt I knew what was going on even though I hardly looked at the actual code. I can definitely see this as fitting in certain circumstances, so I wonder: do you do this? Do you think it's a good idea? Or is it too much? | This is a code smell. It says what and not why. If comments like these feel necessary, split the code into small, well-named functions instead. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/4107",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/92/"
]
} |
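A hypothetical sketch (in C#, not the original poster's code) of the direction the answer suggests: each comment-titled section becomes a small, named method, so the names carry the information the comments used to.

```csharp
class Game
{
    private readonly string[] board = new string[40];
    private readonly int[] playerPositions;

    public Game(int playerCount)
    {
        playerPositions = new int[playerCount];
        InitializeBoard();
    }

    // Was: /*Board initialization*/
    private void InitializeBoard()
    {
        for (int i = 0; i < board.Length; i++)
            board[i] = "Square " + i;
    }

    // Was: /*Executing move*/
    public void ExecuteMove(int player, int roll)
    {
        playerPositions[player] = (playerPositions[player] + roll) % board.Length;
    }
}
```

The comment titles disappear because the method names now say the same thing, and each piece can be read, tested, and changed on its own.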
4,180 | Possible Duplicate: Will high reputation in Stack Overflow help to get a good job? Just curious: which Web 2.0 websites do employers use (if any) to pre-screen potential employees? Do any employers actually look at a user's online "reputation" when deciding whether to hire? | I can tell you that there are certain employers who do care about your Stack Overflow reputation score, and will factor it into their hiring. How do I know? Because those employers made me implement -- and I really didn't want to -- a reputation sort on http://careers.stackoverflow.com . It is not the default sort, though, because I insisted that it not be. Anyway, we always tell employers the same thing: they should look at the content and evaluate someone's merit based on more than a number; the number is just shorthand for a bunch of other factors. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/4180",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/175/"
]
} |
4,200 | Why would you hire in-house rather than outsource when developing a product for your company? I can only think of a few reasons, and I'm not entirely sure they're good enough. This is actually for a debate that I'm going to have in class. I'm more inclined toward the outsourcing side, but unfortunately I was asked to switch to the in-house side of the debate. Any ideas? | An in-house team will be more responsive to your needs, since they're actually part of your company, so they have a better idea of what you want. An in-house team is easier to communicate with - nothing beats regular face-to-face contact. Your in-house team will have more domain-specific knowledge that an external team would have to learn. You're investing not just in the software, but in the expertise of solving the types of software problems your company has. Using your own developers builds up a stock of programmers who've dealt with those specific problems before. (For counter-arguments, see Joel's take on it.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/4200",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/500/"
]
} |
4,614 | You know who they are. They are the rock stars of programming:
They code 10X faster.
Their code just works.
They not only know their primary language inside and out, but they also know how it works under the hood.
They know the answer to most any question before you ask it.
A few of them invented the programming principles we all use.
And they tend to be uncharacteristically humble, as well.
What is it about these folks? Is there something about their thought process that is fundamentally different from the above-average programmer? Or are they simply very talented people who work hard? To put it another way: how can I be like them? I know what I think I need to learn to be that good, but it seems like it will take me the next ten years to learn it, and then my knowledge will be obsolete. | Humble: An exceptional programmer will never claim their code is the best; in fact, they will always be looking for a better way (every chance they get). Patient: An exceptional programmer will have boundless patience (this does not mean they will waste days on a problem; see: Troubleshooter). Troubleshooter: An exceptional programmer will be able to solve a problem in minutes that may take days for your average programmer. Curious: An exceptional programmer will be unable to resist trying to figure out why something occurs. Engineer: An exceptional programmer will engineer systems rather than cobble together a mishmash of frameworks (this does not mean they won't use frameworks). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/4614",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1204/"
]
} |
4,765 | Have you ever reached a point at your job when you just know it's time to move on? When do you reach the point that you're willing to let go of the demons you know for the ones you don't know? What was your deciding factor, the final straw so to speak, when you finally faced the decision to find a new job? | I had one job where I woke up every morning wishing I was sick enough to go to the hospital so I wouldn't have to go to work. At another job, I was working so many hours I was having trouble actually driving home at 2 or 3 am. It's the only job I ever quit without having another job lined up; I just physically couldn't take one more day, and the final straw was when they asked me to do something unethical and illegal. Thanks to my exhaustion, I had a car accident in the parking lot the day I quit. Other signs it's time to move on:
You aren't sure whether or not your paycheck will bounce.
You are part of a Death March.
The work is boring beyond belief.
You think someone is sabotaging you in terms of office politics - you start getting fewer responsibilities and less interesting assignments, Joe is getting the credit for the things you did, and you are starting to see emails blaming you for things that someone else did.
You simply can't live with the corporate culture. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/4765",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2305/"
]
} |
4,889 | Why not combine the best features of all existing programming languages and fit them into a universal programming language? | For the same reason you don't use a Swiss army knife to carve a chicken... The Swiss Army knife generally has a blade, as well as various tools, such as screwdrivers and can openers and many others. These attachments are stowed inside the handle of the knife through a pivot point mechanism... The design of the knife and its flexibility have both led to worldwide recognition... | {
"source": [
"https://softwareengineering.stackexchange.com/questions/4889",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/58922/"
]
} |
4,951 | What are the key differences between software engineers and programmers? | When hiring, we look for a distinction between someone who is going to be able to help us architect our system, define processes, create technical specifications, implement advanced refactoring, etc. and someone who is going to help us complete programming tasks off a checklist. I believe you could call the former a Software Engineer and the latter a Programmer . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/4951",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/42/"
]
} |
5,225 | Suppose I develop a useful library and decide to publish it as open source. Some time later I have a business need to do something that wouldn't comply with the open source licence. Am I allowed to do that? How should I publish the software in a way that I keep ownership and don't block myself from using the library in the future in any way? Keep in mind that at least in theory, other developers may decide to contribute to my open-source project. Can I specify in a licence that I as the original developer get ownership of their contributions as well? Don't get me wrong here, I'm not trying to be evil and get ownership of others' work - I just want to keep ownership of mine, and if someone posts an important bugfix I could be rendered unable to use the original code unless I use his work as well. | You always keep ownership under open-source licenses. The work you created is your property, and you can do whatever you want with it (within legal limits, of course), including allowing other people to use it under the terms of an open-source license. If you want to use it for a proprietary project, you're welcome to do so, unless you have completely turned over the rights to someone else by contract. But this is not what open-source licenses do. They're about sharing usefulness, not about giving up ownership. Things get a bit stickier once other people start contributing. It's their work, then, not yours, and you need to get their permission. One thing you can do is publish your library under a dual license. That's what Sam Lantinga, the primary creator and maintainer of SDL, does. Because Apple doesn't like dynamic link libraries for iOS, and complying with the LGPL in a statically linked app is more trouble than it's worth, he publishes SDL under both the LGPL and a commercial license for static iPhone apps. When anyone submits a patch, he explicitly asks them for permission to deploy their patch in the library under both licenses, and if they don't like that, he doesn't add it to the codebase. EDIT: My example is no longer accurate. A while back Sam changed the model (not sure why; maybe he just got tired of the administration hassles) and now licenses SDL under a highly permissive zlib-style license. But he used to do it this way. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/5225",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2616/"
]
} |
5,232 | I tend to understand things rather quickly, but after 2 years of programming in Python I still stumble across things (like Flask today) that amaze me. I look at the code, have no idea what's going on, and then feel very humbled. Each time, I feel like an absolute expert right up until the moment it happens. Then, for about a two-week period, I feel like an absolute beginner. Does this often happen, or does it indicate that I have so much more to learn before I can even be considered a "good" programmer? | I call it the "Freshman Feeling". When it seems like everyone else has it together, is going faster, knows all of the buildings on campus, isn't struggling, etc. In programming, I'm disoriented, uncomfortable, unsure whether or not I'll meet the deadline - it's fear. The feeling goes away when I acknowledge the fear for what it is, then ignore it, dive in and begin to learn - struggling through each problem one at a time. The thing is, now, I use it as my gauge to tell me when I'm really learning. If I don't feel it once in a while, I know I'm not moving forward - I'm stagnant. One of the programmers at work has this motto, "Comfort is the enemy." That feeling you talk about can be your best friend if you want to get better. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/5232",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/825/"
]
} |
5,297 | I've only been in the industry a year and I've had some problems making estimates for specific tasks. Before you close this, yes, I've already read this: How to respond when you are asked for an estimate? and that's about the same problem I'm having. But I'm looking for a more specific gauge of experiences, something quantifiable, or perhaps other programmers' average performance, which I can aim for and base my estimates on. The answers there deal with estimates on the scale of weeks, and I was looking more for an answer on the level of a task assigned for a day or so. (Note that this doesn't include submitting for QA or documentation, just the actual development time from writing tests if I used TDD, to making the page, before having it submitted to testing.) My current rate right now is as follows (on ASP.NET webforms): Right now, I'm able to develop a simple data entry page with a grid listing (no complex logic, just Creating and Reading) on an already built architecture, given one full day's (8 hours) time. Adding complex functionality and the Update and Delete pages adds another full day to the task. If I have to start the page from scratch (no solution, no existing website) it takes me another full day. Not always, but if I encounter something new or something I haven't done yet, it takes me another full day. Whenever I make an estimate that's longer than expected I feel that others think that I'm lagging a lot behind everyone else. I'm just concerned as there have been expectations that when it's just one page it should take me no more than a full day. Yes, there definitely is more room for improvement. There always is. I have a lot to learn. But I would like to know if my current rate is way too slow, just average, or average for someone with no more than a year in the industry. | If you're programming for a job, and your superiors are happy with the rate you're turning stuff out at, then I'd say you're doing fine. As you've lasted a year, they're clearly not outraged with your output. Also, you've only been there a year, and assuming they've been managing people for more than a day, they know that there's a learning curve when you're still green. As for estimates... I've been in the industry for 5 years now (certainly not veteran territory, I know!), and my personal estimates still suck. I overestimate almost as often as I underestimate, and I do both far more than I get it right. Something will come up, somewhere, and bite you. Sometimes you'll find a library that does everything you thought you had to do yourself, and a week's work disappears in half a day. Other times a stupid bug will stretch a day's work out to 2, 3, 4... If you're repeating a lot of the same work over and over, and you feel like you've maxed out your throughput on it, maybe you should ask to be moved to another task. 'Cross-pollination' and other PHB-friendly terms are definitely of benefit to devs. If you spend a month or more on something else, maybe you'll find something you're better suited to. If not, or you're not able to stay away from webforms, the change won't do you any harm, and you might come back with a bit more knowledge and experience that will help you. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/5297",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1668/"
]
} |
5,331 | So you take a contract where you have solid experience with 75% of the technology necessary. How do you handle your time to learn the other 25%? Work it into the billing time? Expose the 25% in the contract as 'research'? Do the learning on my own time (not billed)? Not take the contract (too large of an unknown for me and the customer)? On the extreme end of this, I keep hearing a story about Mark Cuban (Dallas billionaire who started broadcast.com and sold it to Yahoo!) when he was at Indiana University. Someone asked him if he could build a business app for them and he immediately said "Yes"... he had no idea how. So he bought a book, stayed up nights, studied and coded... He finished it (I'm sure it was ugly), it worked and he kept going. I'm not suggesting doing contracts this way (the stress!), but there's a middle ground. What is it, and how would you (or would you?) bill for the unknown? | If I'm learning something that I'll take away with me (like, say, a mainstream new API, or a new feature of .NET, or a language that's somewhat useful) then I don't bill; I consider that time spent sharpening my saw, and it's not the client's fault I didn't know that stuff yet. Now, if it's something obscure, I bill for it at my normal rate. Some examples: APIs and protocols which are not mainstream (industry specific, small 3rd party or just niche products); internal tools, configuration formats and services inside the client organization; a non-standard database schema, database query language or security model; etc. I've never had any objections to the way I do this, and I'm very transparent about it in my proposals. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/5331",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2671/"
]
} |
5,427 | Other than being annoyed at whitespace as syntax, I'm not a hater, I just don't get the fascination with Python. I appreciate the poetry of Perl, and have programmed beautiful web services in bash & korn, and shebang gnuplot . I write documents in troff and don't mind REXX. Didn't find tcl any more useful years ago, but what's the big stink about Python ? I see job listings and many candidates with this as a prize & trophy on their resumes. I guess in reality, I'm trying to personally become sold on this, I just can't find a reason. | Python is a well-designed language with a reasonably clean syntax, a comprehensive standard library, excellent included and third party documentation, widespread deployment, and the immediacy of a "scripting" style language (ie. no explicit compile step). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/5427",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2274/"
]
} |
5,473 | I was reading the Wikipedia article on programming style and noticed something in an argument against vertically aligned code: Reliance on mono-spaced font; tabular formatting assumes that the editor uses a fixed-width font. Most modern code editors support proportional fonts, and the programmer may prefer to use a proportional font for readability. To be honest, I don't think I've ever met a programmer who preferred a proportional font. Nor can I think of any really good reasons for using them. Why would someone prefer a proportional font? | Common points against proportional fonts, commented. You cannot precisely align code vertically with proportional fonts. I mean, you could precisely align code vertically with proportional fonts, if everybody was using elastic tabstops, but alas... Some proportional fonts make it hard to distinguish some character groups (e.g., mrnm). Not all programming fonts are perfect either, however: Courier New has identical 'O' and '0' and identical '1' and 'l'. Some IDEs have poor support for non-fixed-width fonts (like the aforementioned Visual Studio or Python's IDLE). In some contexts, also, you just can't use one (e.g., terminals). Choosing a proportional font for coding will get you into endless holy wars. Here, however, the problem exists between the keyboard and the chair. Points in favour of proportional fonts: Some characters are just wider than others. Having to cram an m into the same space as an n or an i makes it truly challenging to design a good, readable monospace font. Spacing between letters can be adjusted just right. Compare rnW and Ill in this Proggy Clear screenshot for an example of font spacing done wrong. Most programmer fonts lack italic or bold. This makes it hard to use effective syntax highlighting. Vertical alignment is a can of worms anyway. Tabs or spaces or tabs and spaces? Personally, I've been using both the 'Ubuntu' font and WenQuanYi Zen Hei Mono with pleasure and find myself unable to prefer one to the other. :) Ubuntu 10 and WenQuanYi Zen Hei Mono 9, compared. There's no clear winner here, if you ask me. That said, fonts are like food. Some like them well rounded, some like them hot and spicy -- there's no one right font, or all of us would be using it right now. Yay for choice! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/5473",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1626/"
]
} |
5,540 | How should code in version control be stored? Developer friendly, so that a programmer can quickly take the latest and run it from his editor without many changes (like config files pointing to the dev DB, etc.)? Or should it be production friendly, where the source is easy to deploy to the production environment, and when a developer takes the latest he makes whatever changes his development setup needs? | Why choose? It should be both. Your development environment should be configured so it's as easy as doing a checkout, open, build, run, debug (e.g. no absolute paths!). You can do that easily with compilation directives, a configuration class + dependency injection, or even tricks like the perso.config in ASP.NET. Your automated build script should be customized enough to take care of production-specific configuration, clean-up, packaging, etc. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/5540",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1564/"
]
} |
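A hypothetical sketch of the "configuration class + dependency injection" idea mentioned above; the interface and class names are illustrative, not from the original answer.

```csharp
using System;

public interface IAppConfig
{
    string ConnectionString { get; }
}

// Checked into version control with a developer-friendly default.
public class DevConfig : IAppConfig
{
    public string ConnectionString =>
        "Server=localhost;Database=AppDev;Trusted_Connection=True;";
}

// Selected by the automated build/deploy step for production.
public class ProductionConfig : IAppConfig
{
    public string ConnectionString =>
        Environment.GetEnvironmentVariable("APP_CONNECTION_STRING") ?? string.Empty;
}

public class ReportService
{
    private readonly IAppConfig config;

    public ReportService(IAppConfig config)
    {
        this.config = config;  // code depends on the abstraction, not on a hard-coded value
    }
}
```

Developers check out and run against DevConfig with no edits; the build script wires up ProductionConfig when deploying.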
5,597 | I've been doing design and programming for about as long as I can remember. If there's a programming problem, I can figure it out. (Though admittedly Stack Overflow has allowed me to skip the figuring out and get straight to the doing in many instances.) I've made games, esoteric programming languages, and widgets and gizmos galore. I'm currently working on a general-purpose programming language. There's nothing I do better than programming. Is a university education really more than just a formality? | Hooboy. This is a tough position to be in; you have my sympathies. I'm biased towards getting a degree, most likely because 1) I have one (BS in Computer Science) and 2) I've often found the knowledge gained pursuing it to be very useful. But it's hardly a pre-requisite for a successful career; the IT world is rich with people who kick ass, are acknowledged as kicking ass, and who technically don't have more than a high school diploma. The nice thing about a university degree is that you can put it on hold and come back to it later when life permits. (Though the dangerous thing about the previous sentence is that it's a good way to simply quit without admitting to yourself you're quitting.) You can test the waters and see what kind of job you could get by sending your resume out today and seeing what kind of nibbles you get; you haven't committed to anything until you actually say yes to a job offer. And it sounds like your school is a bad fit for you, regardless. If you're so consistently bored with everything they're throwing at you, then you may need to find a school that will do a better job of giving you your money's worth and making you work for that degree. Have you considered transferring somewhere better? Edit: Based on your comments elsewhere, given how much you love the high-level theoretic aspects of programming, have you considered that the best way to continue to explore that and get paid may be a career in academia? Which would definitely require you to get your degree. :-) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/5597",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2107/"
]
} |
5,749 | I work as the back-end developer, front-end developer, systems admin, help desk and all-around 'guy who knows computers' at a small marketing company of about 15 people. I was wondering if others could share their experiences flying solo at companies that aren't necessarily inclined toward the technology industry. I originally took the job in order to transition from front-end developer/designer to full-time coder. It's been a good experience to a point. I definitely get to occupy the role of 'rock star' programmer - because frankly, no one really understands my job. Lately, it feels like a very solitary position. I rarely get to bounce ideas off of people, and everyone looks to me like I have magic powers that will make all the computers work and land us first on Google searches. I've also felt a strong disconnect between what we say we want (projects with large, months-long development schedules) and what we actually do (copy-edit our sites over and over). So who else finds themselves being the 'tech guy' in a company that thinks technology is all a bit magical, and what is your take on your situation? | Take advantage of the situation you have - to a certain extent, I think you have a little bit of "grassisgreeneritis". Sorry, I'm not trying to be funny. What I am saying is every position at every company has shortcomings. Yours are starting to get to you more because they are very familiar. But, at tech companies, schedules and time commitments become an issue. At larger non-tech companies, overcoming political stupidity and procedure can be big issues. So take advantage of what you have now; learn what you can. Once you believe you can't really learn more, it is probably time to move on. There is no harm in that; it sounds like you are one of those people that have to grow to be happy with a job. Your current company should understand that when you reach that point and honestly, if they don't, leaving is definitely the right thing to do. Having said all that, there is more you can do in your current position. If you are feeling solitary, make some changes to eliminate that feeling. Use on-line communities to bounce ideas off of people (StackOverflow is great for this). Do some research with Google to find out what it would take to land your company first in search results, and then put a proposal together to get it to happen. When going through projects, take the initiative and change how things happen. Don't go for the impractical, long projects. Instead, propose month-long incremental improvements. Over a year, those add up and can really help you feel like you've accomplished something. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/5749",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1972/"
]
} |
5,757 | I have found that there are only 3 ways to unit test (mock/stub) dependencies that are static in C#.NET: Moles TypeMock JustMock Given that two of these are not free and one has not hit release 1.0, mocking static stuff is not too easy. Does that make static methods and such "evil" (in the unit testing sense)? And if so, why does resharper want me to make anything that can be static, static? (Assuming resharper is not also "evil".) Clarification: I am talking about the scenario when you want to unit test a method and that method calls a static method in a different unit/class. By most definitions of unit testing, if you just let the method under test call the static method in the other unit/class then you are not unit testing, you are integration testing. (Useful, but not a unit test.) | Looking at the other answers here, I think there might be some confusion between static methods that hold static state or cause side-effects (which sounds to me like a really bad idea), and static methods that merely return a value. Static methods which hold no state and cause no side effects should be easily unit testable. In fact, I consider such methods a "poor-man's" form of functional programming; you hand the method an object or value, and it returns an object or value. Nothing more. I don't see how such methods would negatively affect unit testing at all. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/5757",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/71/"
]
} |
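A small illustration of the point about stateless statics, using hypothetical names and xUnit-style assertions (any test framework would do): no mocking framework is needed because the method has no dependencies to replace.

```csharp
using Xunit;

public static class PriceMath
{
    // Stateless and side-effect free: the same inputs always give the same output.
    public static decimal ApplyDiscount(decimal price, decimal percent)
    {
        return price - (price * percent / 100m);
    }
}

public class PriceMathTests
{
    [Fact]
    public void ApplyDiscount_TakesTenPercentOff()
    {
        Assert.Equal(90m, PriceMath.ApplyDiscount(100m, 10m));
    }
}
```

The testability problem the question describes only appears when the method under test calls a static that holds state or touches an external resource, because there is then no seam at which to substitute a fake.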
5,898 | In another question, it was revealed that one of the pains with TDD is keeping the testing suite in sync with the codebase during and after refactoring. Now, I'm a big fan of refactoring. I'm not going to give it up to do TDD. But I've also experienced the problems of tests written in such a way that minor refactoring leads to lots of test failures. How do you avoid breaking tests when refactoring? Do you write the tests 'better'? If so, what should you look for? Do you avoid certain types of refactoring? Are there test-refactoring tools? Edit: I wrote a new question that asked what I meant to ask (but kept this one as an interesting variant). | What you're trying to do is not really refactoring. With refactoring, by definition, you don't change what your software does; you change how it does it. Start with all green tests (all pass), then make modifications "under the hood" (e.g. move a method from a derived class to base, extract a method, or encapsulate a Composite with a Builder, etc.). Your tests should still pass. What you're describing seems to be not refactoring, but a redesign, which also augments the functionality of your software under test. TDD and refactoring (as I tried to define it here) are not in conflict. You can still refactor (green-green) and apply TDD (red-green) to develop the "delta" functionality. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/5898",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2983/"
]
} |
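A hypothetical C# sketch of the green-green case the answer describes: the test asserts on behavior only, so extracting the tax calculation into a private helper (or moving it elsewhere later) leaves the test untouched and passing.

```csharp
using Xunit;

public class Order
{
    public decimal Subtotal { get; set; }

    // After refactoring, the tax rule lives in a named helper instead of being
    // inlined here. The observable behavior of Total() is unchanged.
    public decimal Total() => Subtotal + Tax(Subtotal);

    private static decimal Tax(decimal amount) => amount * 0.2m;
}

public class OrderTests
{
    [Fact]
    public void Total_IncludesTax()
    {
        var order = new Order { Subtotal = 100m };
        Assert.Equal(120m, order.Total());
    }
}
```

Tests that break under this kind of change are usually asserting on implementation details (which private methods exist, in what order they are called) rather than on observable behavior.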
5,916 | Someone once said we should prefix all our methods with the /// <summary> comment blocks (C#) but did not explain why. I started to use them and found they annoyed me quite a bit, so stopped using them except for libraries and static methods. They're bulky and I'm always forgetting to update them. Is there any good reason to use /// <summary> comment blocks in your code? I normally use // comments all the time, it's just the /// <summary> blocks I was wondering about. | Use them as much as possible. Yes, those are special comments that become the documentation for the method. The contents of <summary> , the parameter tags, etc. that are generated show up in intellisense when you or someone else is getting ready to call your method. They can essentially see all the documentation for your method or class without having to go to the file itself to figure out what it does (or try to just read the method signature and hope for the best). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/5916",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1130/"
]
} |
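A hypothetical example of the method-level documentation the answer describes; hovering over a call to Clamp in the IDE shows this text without opening the file.

```csharp
public static class MathUtil
{
    /// <summary>
    /// Clamps a value to the inclusive range [min, max].
    /// </summary>
    /// <param name="value">The value to clamp.</param>
    /// <param name="min">The lower bound of the range.</param>
    /// <param name="max">The upper bound of the range.</param>
    /// <returns><paramref name="value"/> limited to the given range.</returns>
    public static int Clamp(int value, int min, int max)
    {
        if (value < min) return min;
        if (value > max) return max;
        return value;
    }
}
```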
6,014 | A lot of us started seeing this phenomenon with jQuery about a year ago when people started asking how to do absolutely insane things like retrieve the query string with jQuery . The difference between the library (jQuery) and the language (JavaScript) is apparently lost on many programmers, and results in a lot of inappropriate, convoluted code being written where it is not necessary. Maybe it's just my imagination, but I swear I'm starting to see an uptick in the number of questions where people are asking to do similarly insane things with Linq, like find ranges in a sorted array . I can't get over how thoroughly inappropriate the Linq extensions are for solving that problem, but more importantly the fact that the author just assumed that the ideal solution would involve Linq without actually thinking about it (as far as I can tell). It seems that we are repeating history, breeding a new generation of .NET programmers who can't tell the difference between the language (C#/VB.NET) and the library (Linq). What is responsible for this phenomenon? Is it just hype? Magpie tendencies? Has Linq picked up a reputation as a form of magic, where instead of actually writing code you just have to utter the right incantation? I'm hardly satisfied with those explanations but I can't really think of anything else. More importantly, is it really a problem, and if so, what's the best way to help enlighten these people? | It's basically because programming is fundamentally difficult. It requires a lot of logical, structured thought in a way that a lot of people just don't know how to do. (Or simply can't do, depending on who you listen to.) Stuff like LINQ and jQuery makes certain common data-manipulation tasks a whole lot easier. That's great for those of us who know what we're doing, but the unfortunate side effect is that it lowers the bar. It makes it easier for people who have no idea what they're doing to start writing code and make things work. And then when they run into reality, and find something fundamentally difficult that their simple, high-abstraction-level techniques are not well suited to, they're lost, because they don't understand the platform that their library is built upon. Your question is sort of on the right track, but much like the perennial controversy about violent video games "turning kids violent," it has the direction of the link backwards. Easy programming techniques don't make programmers stupid; they just attract stupid people to programming. And there's really not much you can do about it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/6014",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3249/"
]
} |
6,045 | Some projects we run internally are using Scrum, while still being "fixed everything" to the customer. We're experiencing mixed success on our part (the customer likes the visibility of the burndown chart). Can the types of projects we work on be successfully executed using agile methods? | I would like to pose a counter-question: Can a fixed scope + fixed deadline + fixed price contract ever be made to work, period? The "good/fast/cheap - pick two" saying isn't just some silly engineering joke. Every project manager worth his salt knows about the Project Management Triangle. You're telling us that the cost, scope, and schedule are all fixed. That leaves no room for maneuverability or error. None. You could choose to view "Quality" as an attribute, but it's not a "real" attribute, it's more like a meta-attribute that's derived from the other attributes (cost/scope/schedule). The problem is that this never happens in reality as long as your project is being planned and executed by humans. Requirements and specifications never cover every edge case unless they've been drawn up in immense detail by qualified architects and designers, in which case the project is already half-done; and even then there's still the possibility of error. Unexpected costs will pop up leading to budget overruns. A subscription expired. A manufacturer discontinued their support for a product you're using and you have to find a new one. An hourly contractor raised his rate under threat of departure. Your entire team just went on strike, demanding a 10% raise and an extra week of vacation. Schedules slip. Unforeseeable problems crop up; that charting component you've been using for 5 straight years isn't compatible with Windows 95, which your client is still using. An obscure bug in 64-bit Windows causes serious UI glitches and you spend nearly a week tracking it down and developing a workaround (this actually happened to me). Your senior developer got hit by a bus and you have to go recruit and train a new one. Your estimated delivery date is always wrong. Always. See Hofstadter's Law: "It always takes longer than you expect, even when you take into account Hofstadter's Law." Agile methods are all about juggling around the cost, schedule, and scope. Most of the time, they're specifically about juggling around the scope and sometimes the schedule, which is why you start with nebulous user stories and plan revisions instead of full versions. Different methodologies use different terminology but it's all the same basic premise: Frequent releases and a rebalancing of the schedule and scope with each release. This makes no sense with a project that is (or claims to be) either fixed scope or fixed schedule. If one project attribute (cost/scope/schedule) were fixed, I would tell you that it might not be a good fit for agile methodologies. If two project attributes are fixed, then your project is definitely not a good fit for agile methodologies. If all three attributes are fixed, then your project is probably going to fail. If it actually ships, then either the original schedule was massively fudged, or the client has managed to delude itself into thinking that you actually delivered what was promised. If this contract is still on the table, I urge you to reject it. And if you've already accepted it, may God have mercy on your soul. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/6045",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2397/"
]
} |
6,133 | There are some really common usability errors in everyday software we use; errors that result from the ways the particular programmer has learned without learning of all the other ways there are. For example, talking about Windows software in particular, the following common flaws come to mind: Failure to support multiple screens. For example, windows centered in the virtual desktop (instead of a specific screen) and hence displayed spanning the monitor boundary in a dual monitor setup. Failure to support serious keyboard users. For example, utterly messed up tab order; duplicate or completely missing accelerator keys. Alt+Tab order mess-ups. For example, a window that doesn't go to the end of the tab order when minimized. Subtle breakage of common controls that were reimplemented for one reason or another. E.g. failure to implement Ctrl+Left/Right on a textbox; failure to add an Alt+Space window menu to a skinnable window, failure to make Ctrl+Insert copy to clipboard, etc, etc. This one is a huge category in its own right. There are a gazillion of things like this. How can we ever make sure we don't break a large proportion of these? After all they aren't all written down anywhere... or are they? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/6133",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3278/"
]
} |
6,166 | To quote Arthur C. Clarke: Any sufficiently advanced technology is indistinguishable from magic. I used to look on technology with wonder and amazement. I wanted to take it apart, understand how it worked, figure it all out. Technology was magical. I'm older, I know more, and I spend my days creating stuff that, hopefully, fills other people with that kind of wonder. But lately I've found my own awe for technology has been seriously curtailed. More often I'm just annoyed that it isn't as elegant or seamless or as polished or perfectly delivered as it seemed to be in my youth. It all looks broken and awkward, or cobbled together and poorly tested. Has programming ruined your ability to enjoy technology? Have you stopped wondering in awe and just started saying, "They could have done this better" every time you pick up a bit of technology? | It has ruined my ability to enjoy technology in fiction. I can suspend my disbelief whilst the hero of the book/film/drama can withstand numerous karate kicks, fire an infinite number of bullets, leap across a 50ft gap between two buildings, fall from a great height onto a pile of conveniently stacked boxes, etc. What makes me shout at the screen in disbelief is when the hero then steps up to a computer, and: performs a search with some application that has more apparent power than Google. hacks into a supposedly secure system with a few key presses and a wink. copies the entire hard disk to a memory stick in a matter of seconds with a convenient "% complete" window (which just happens to work with the operating system of the computer he's copying). does anything that involves zooming an image from a CCTV camera to get a high-resolution printout of the suspect's face. AAAARHG!!!! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/6166",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/762/"
]
} |
6,246 | Most software developers want to keep application logic in the application layer, and it probably feels natural for us to keep it here. Database developers seem to want to put application logic in the database layer, as triggers and stored procedures. Personally I would prefer to keep as much as possible in the application layer to make it easier to debug and keep the responsibilities of the layers separate. What are your thoughts on this, and what should or should not be ok to implement in the database layer? Edit This question is also covered on dba.se , from the DBAs perspective. As programmers.se & dba.se have different audiences and biases, future readers might want to review both sets of answers before deciding what works best for them. | Off the top of my head, advantages of putting logic in the application layer. Testability . This should be a good enough reason on it's own actually. Better code structure . It's very difficult to follow proper OO-architecture with SQL. This usually also makes the code easier to maintain. Easier to code . Due to all the different language features available in whatever language you're using it's usually easier to code in the application layer. Code re-use . It's a whole lot easier to share code with libraries than sharing code in the database. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/6246",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1020/"
]
} |
6,255 | Joel Spolsky said in one of his famous posts: "The single worst strategic mistake that any software company can make: rewrite the code from scratch." Chad Fowler wrote: "You’ve seen the videos, the weblog posts and the hype, and you’ve decided you’re going to re-implement your product in Rails (or Java, or .NET, or Erlang, etc.). Beware. This is a longer, harder, more failure-prone path than you expect." Have you ever been involved in a BIG Rewrite? I'm interested in your experience with this tragic topic, and in particular, in any big rewrite that was completed successfully (if any). | I've been involved in a few rewrites over my career and they were all disasters.
I think they all fail for the same reasons. Vast underestimate of effort required: Every time someone wants a rewrite, it's because the old system is using old technology and is difficult to maintain. What they fail to consider is that because of its age, it may have 30-40 man-years of development effort in it. Thinking you can then rewrite the whole thing in 6 months with a team of 5 is silly. Lost knowledge: The old system has been around so long, it does a lot of stuff, and is hooked into everything. There is no up-to-date documentation, and no single point of authority that actually knows all the things the system does. There will be pieces of knowledge with particular users in particular departments, and finding them all is difficult or impossible. Poor Management Decisions: The rewrites I've been involved in had similar expectations from management: The new system should be 'done', and the old system could simply be turned off on a particular date, period. No other option was acceptable. I think they get this in their head because they are spending all this money to hire new people for this huge project. In reality, the better risk mitigation strategy is to rewrite the major functions of the old system, say tackle 50-75% of the old system for a first release, and then see how it works! Because of #1 and #2 above, this would probably work out much better, as we find out some of the features that were missed, and what's needed to actually turn off the old system. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/6255",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/74/"
]
} |
6,268 | Just read the question about the Big Rewrites and I remembered a question that I've been wanting answered myself. I have a horrible project passed down to me, written in old Java using Struts 1.0, with tables that have inconsistent relationships or no relationships at all, and even tables without primary keys, or with fields meant to be primary keys that aren't unique at all. Somehow most of the app "just works". Most of the pages are reused (copy-pasted code) and hard-coded. Everyone who's ever worked on the project has cursed it in one form or another. I have long considered proposing a total rewrite of this horrendous application to upper management. I'm slowly attempting it on personal time, but I really feel that this deserves some dedicated resources to make it happen. Having read the articles on big rewrites, I'm having second thoughts. And that's not good when I want to convince my superiors to support my rewrite. (I work in a fairly small company, so the proposal has a real chance of being approved.) TL;DR
When is a big rewrite the answer and what arguments can you use to support it? | Sorry, this is going to be long, but it's based on personal experience as both architect and developer on multiple rewrite projects. The following conditions should cause you to consider some sort of rewrite. I'll talk about how to decide which one to do after that. Developer ramp-up time is very high. If it takes any longer than below (by experience level) to ramp up a new developer, then the system needs to be redesigned. By ramp-up time, I mean the amount of time before the new developer is ready to do their first commit (on a small feature) Fresh out of college - 1.5 months Still green, but have worked on other projects before - 1 month Mid level - 2 weeks Experienced - 1 week Senior level - 1 day Deployment cannot be automated, because of the complexity of the existing architecture Even simple bug fixes take too long because of the complexity of existing code New features take too long, and cost too much because of the interdependence of the codebase (new features cannot be isolated, and therefore affect existing features) The formal testing cycle takes too long because of the interdependence of the existing codebase. Too many use cases are executed on too few screens. This causes training issues for the users and developers. The technology that the current system is in demands it Quality developers with experience in the technology are too hard to find It is deprecated (It can't be upgraded to support newer platforms/features) There is simply a much more expressive higher-level technology available The cost of maintaining the infrastructure of the older technology is too high These things are pretty self-evident. When to decide on a complete rewrite versus an incremental rebuild is more subjective, and therefore more politically charged. What I can say with conviction is that to categorically state that it is never a good idea is wrong. If a system can be incrementally redesigned, and you have the full support of project sponsorship for such a thing, then you should do it. Here's the problem, though. Many systems cannot be incrementally redesigned. Here are some of the reasons I have encountered that prevent this (both technical and political). Technical The coupling of components is so high that changes to a single component cannot be isolated from other components. A redesign of a single component results in a cascade of changes not only to adjacent components, but indirectly to all components. The technology stack is so complicated that future state design necessitates multiple infrastructure changes. This would be necessary in a complete rewrite as well, but if it's required in an incremental redesign, then you lose that advantage. Redesigning a component results in a complete rewrite of that component anyway, because the existing design is so fubar that there's nothing worth saving. Again, you lose the advantage if this is the case. Political The sponsors cannot be made to understand that an incremental redesign requires a long-term commitment to the project. Inevitably, most organizations lose the appetite for the continuing budget drain that an incremental redesign creates. This loss of appetite is inevitable for a rewrite as well, but the sponsors will be more inclined to continue, because they don't want to be split between a partially complete new system and a partially obsolete old system. The users of the system are too attached with their "current screens." 
If this is the case, you won't have the license to improve a vital part of the system (the front-end). A redesign lets you circumvent this problem, since they're starting with something new. They'll still insist on getting "the same screens," but you have a little more ammunition to push back. Keep in mind that the total cost of redesigning incrementally is always higher than doing a complete rewrite, but the impact to the organization is usually smaller. In my opinion, if you can justify a rewrite, and you have superstar developers, then do it. Only do it if you can be certain that there is the political will to see it through to completion. This means both executive and end-user buy-in. Without it, you will fail. I'm assuming that this is why Joel says it's a bad idea. Executive and end-user buy-in looks like a two-headed unicorn to many architects. You have to sell it aggressively, and campaign for its continuation continuously until it's complete. That's difficult, and you're talking about staking your reputation on something that some will not want to see succeed. Some strategies for success: If you do, however, do not try to convert existing code. Design the system from scratch. Otherwise you're wasting your time. I have never seen or heard of a "conversion" project that didn't end miserably. Migrate users to the new system one team at a time. Identify the teams that have the MOST pain with the existing system, and migrate them first. Let them spread the good news by word of mouth. This way your new system will be sold from within. Design your framework as you need it. Don't start with some I-spent-6-months-building-this framework that has never seen real code. Keep your technology stack as small as possible. Don't over-design. You can add technologies as needed, but taking them out is difficult. Additionally, the more layers you have, the more work it is for developers to do things. Don't make it difficult from the get-go. Involve the users directly in the design process, but don't let them dictate how to do it. Earn their trust by showing them that you can give them what they want better if you follow good design principles. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/6268",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1668/"
]
} |
6,395 | What tools and techniques do you use for exploring and learning an unknown code base? I am thinking of tools like grep, ctags, unit tests, functional tests, class-diagram generators, call graphs, code metrics like sloccount, and so on. I'd be interested in your experiences, the helpers you used or wrote yourself, and the size of the code base with which you worked. I realize that becoming acquainted with a code base is a process that happens over time, and familiarity can mean anything from "I'm able to summarize the code" to "I can refactor and shrink it to 30% of the size". But how to even begin? | What I've always done is the following: Open multiple copies of my editor (Visual Studio/Eclipse/whatever), then debug, set breakpoints, and step through the code. Find out the flow of the code, follow the stack trace to see where the key points are, and go from there. I can look at method after method - but it's nice if I can click on something and then see where in the code it's executed and follow along. That lets me get a feel for how the developer wanted things to work. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/6395",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/436/"
]
} |
6,587 | I was recently reading the question "What languages do you use without an IDE?" A question raised in a few answers was "is Notepad++ an IDE?" One answer to the original question said "None, I use vim...", implying that vim is an IDE. But then another answer suggested vim isn't an IDE. So where is the line? What about notepad, ed, or nano? Is the only non-IDE coding technique the butterfly technique? | Taken literally, IDE = Integrated Development Environment. This is the way I look at it: Integrated: Means you can code / launch / compile / debug your app from the tool. Development: Means it can group files into projects, and does syntax highlighting for your language, maybe has refactoring tools, the ability to generate files from templates (like unit test files, class files, etc.), auto complete / IntelliSense. Environment: Means both of the above are available from the same tool. Notepad++ allows for development (e.g. you can write code), but the other areas of development are not covered. I've never used Notepad++ for development, only for occasionally editing files. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/6587",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1069/"
]
} |
6,591 | I'm trying to understand how I should evaluate writing a book, article, or conference presentation. Writing a book is a lot of work. Same for writing an article in a magazine or presenting at a conference. They need time, and you may even make a mistake here and there that backfires (bad reviews, people calling you an idiot...). Also, you do it for free (at least for magazine articles and conference presentations. For books you get something like a $5K deposit and you rarely get any additional sales royalties after that). So how should I evaluate the benefits? I would appreciate answers that call out if you have done this before. I may not write a book because it's way beyond what I'd like to commit time-wise, but should I bother giving conference presentations or writing shorter articles in magazines? | It all depends: what are your goals? [Note: my background is as a programmer, but I've been making a living as a tech writer/speaker for the last 12 years. After 15 titles, dozens of magazine articles, and speaking internationally, I think I'm at least as qualified as anyone else here.] If your goal is to make money, don't bother. Really. I know a lot of people in this business, and very few make a decent hourly wage from writing. Of the ones who do make a living at it, all of them write for beginners (tip: there are always more beginners than intermediate or advanced users). However… IF you're currently working as a consultant and if you want more consulting gigs with bigger companies at a higher price and if you've been offered a book contract and/or speaking gigs … then go for it. Don't think of it in terms of work with low compensation; instead, think of it as just part of the training and prep you already do in order to get those consulting jobs. Screw writing articles for magazines/sites that don't pay; or, say you'll write for them on the condition that they run your article without ads. If they're making money, you should be too. However, if the magazine helps you get those high-profile consulting gigs, see the advice in the previous paragraph. Speaking gigs, though, are almost always worth it. At a minimum, you'll meet other presenters, which is how I've met some truly amazing people. Networking opportunities abound. On the other hand… IF you have an amazing idea for a great book that no one else has written and if you can't rest until you see that book in print … then go for it. In this case, it's about love, not money. If you can handle a life where this book doesn't exist, then don't write it. But it's really all about where you want your career to go. If a book helps you get to that place, then see if it works for you. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/6591",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3388/"
]
} |
7,055 | Which design pattern do you think is the most popular? | I'm pretty sure the most common is 'The Big Ball of Mud'. Unfortunately for us all. http://en.wikipedia.org/wiki/Big_ball_of_mud | {
"source": [
"https://softwareengineering.stackexchange.com/questions/7055",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/666/"
]
} |
7,126 | Note: this question is an edited excerpt from a blog posting I wrote a few months ago. After placing a link to the blog in a comment on Programmers.SE, someone requested that I post a question here so that they could answer it. This posting is my most popular, as people seem to type "I don't get object-oriented programming" into Google a lot. Feel free to answer here, or in a comment at WordPress. What is object-oriented programming?
No one has given me a satisfactory answer. I feel like you will not get a good definition from someone who goes around saying “object” and “object-oriented” with his nose in the air. Nor will you get a good definition from someone who has done nothing but object-oriented programming. No one who understands both procedural and object-oriented programming has ever given me a consistent idea of what an
object-oriented program actually does. Can someone please give me their ideas of the advantages of object-oriented programming? | From your blog, it seems that you're familiar with both imperative and functional programming, and that you're familiar with the basic concepts involved in object-oriented programming, but you've just never really had it "click" as to what makes it useful. I'll try to explain in terms of that knowledge, and hope that it's helpful to you. At its core, OOP is a way to use the imperative paradigm to better manage high degrees of complexity by creating "smart" data structures that model the problem domain. In a (standard procedural non-object-oriented) program, you've got two basic things: variables, and code that knows what to do with them. The code takes input from the user and various other sources, stores it in variables, operates on it, and produces output data which goes to the user or various other locations. Object-oriented programming is a way to simplify your program by taking that basic pattern and repeating it on a smaller scale. Just like a program is a large collection of data with code that knows what to do with it, each object is a small piece of data bound to code that knows what to do with it. By breaking down the problem domain into smaller pieces and making sure as much data as possible is bound directly to code that knows what to do with it, you make it a lot easier to reason about the process as a whole and also about the sub-issues that make up the process. By grouping data into object classes, you can centralize code related to that data, making relevant code easier both to find and to debug. And by encapsulating the data behind access specifiers and only accessing it through methods, (or properties, if your language supports them,) you greatly reduce the potential for data corruption or the violation of invariants. And by using inheritance and polymorphism, you can reuse preexisting classes, customizing them to fit your specific needs, without having to either modify the originals or rewrite everything from the ground up. (Which is a thing you should never do , if you can avoid it.) Just be careful you understand your base object, or you could end up with killer kangaroos . To me, these are the fundamental principles of object-oriented programming: complexity management, code centralization and improved problem-domain modeling through the creation of object classes, inheritance and polymorphism, and increased safety without sacrificing power or control through the use of encapsulation and properties. I hope this helps you understand why so many programmers find it useful. EDIT: In response to Joel's question in the comments, Can you explain what an "object-oriented program" contains
(other than these fancy definitions you've outlined) that is fundamentally
different from an imperative program? How do you "get the ball rolling?" A little disclaimer here. My model of "an object-oriented program" is basically the Delphi model, which is very similar to the C#/.NET model since they were created by former Delphi team members. What I'm saying here may not apply, or not apply as much, in other OO languages. An object-oriented program is one in which all the logic is structured around objects. Of course this has to be bootstrapped somewhere. Your typical Delphi program contains initialization code that creates a singleton object called Application . At the start of the program, it calls Application.Initialize , then a call to Application.CreateForm for every form you want to load into memory from the beginning, and then Application.Run, which displays the main form on screen and starts up the input/event loop that forms the core of any interactive computer programs. Application and your forms poll for incoming events from the OS and translate them into method calls on your object. One thing that's very common is the use of event handlers, or "delegates" in .NET-speak. An object has a method that says, "do X and Y, but also check to see if this particular event handler is assigned, and call it if it is." An event handler is a method pointer--a very simple closure that contains a reference to the method and a reference to the object instance--that's used to extend the behavior of objects. For example, if I have a button object on my form, I customize its behavior by attaching an OnClick event handler, which causes some other object to execute a method when the button is clicked. So in an object-oriented program, most of the work gets done by defining objects with certain responsibilities and linking them together, either through method pointers or by one object directly calling a method defined in another object's public interface. (And now we're back to encapsulation.) This is an idea that I had no concept of back before I took OOP classes in college. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/7126",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1935/"
]
} |
7,166 | Thoughts on these? Python is one example, and no, this is not a stab against Python; I like the language. What languages have an indentation requirement? | In Makefiles, it's annoying. In Python, I find it very apropos and it makes the syntax a lot cleaner. I think the thing that makes it better in Python is that no special characters are required; the only requirement is that you be consistent. You should be doing it anyway, so you get no cost added by following it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/7166",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/136/"
]
} |
7,217 | Most programmers defending methodologies politically correct like Agile, Waterfall, RUP, etc. Some of them follow the methodology but not all of them. Frankly, if you can choose the methodology, you certainly would go to mainstream "correct" methodologies or you would prefer the "easier" methodology like cowboy programming? Why? I know it depends. Please, explain when you would use one or another. Please, say what advantages do you see on Cowboy coding. See about Cowboy coding on Wikipedia | I think almost every experienced programmer has gone through three stages and some go through four: Cowboy coders or nuggets know little to nothing about design and view it as an unnecessary formality. If working on small projects for non-technical stakeholders, this attitude may serve them well for a while; it Gets Things Done, it impresses the boss, makes the programmer feel good about himself and confirms the idea that he knows what he's doing (even though he doesn't). Architecture Astronauts have witnessed the failures of their first ball-of-yarn projects to adapt to changing circumstances. Everything must be rewritten and to prevent the need for another rewrite in the future, they create inner platforms , and end up spending 4 hours a day on support because nobody else understands how to use them properly. Quasi-engineers often mistake themselves for actual , trained engineers because they are genuinely competent and understand some engineering principles. They're aware of the underlying engineering and business concepts: Risk, ROI, UX, performance, maintainability, and so on. These people see design and documentation as a continuum and are usually able to adapt the level of architecture/design to the project requirements. At this point, many fall in love with methodologies, whether they be Agile, Waterfall, RUP, etc. They start believing in the absolute infallibility and even necessity of these methodologies without realizing that in the actual software engineering field, they're merely tools, not religions. And unfortunately, it prevents them from ever getting to the final stage, which is: Duct tape programmers AKA gurus or highly-paid consultants know what architecture and design they're going to use within five minutes after hearing the project requirements. All of the architecture and design work is still happening, but it's on an intuitive level and happening so fast that an untrained observer would mistake it for cowboy coding - and many do . Generally these people are all about creating a product that's "good enough" and so their works may be a little under-engineered but they are miles away from the spaghetti code produced by cowboy coders. Nuggets cannot even identify these people when they're told about them , because to them, everything that is happening in the background just doesn't exist. Some of you will probably be thinking to yourselves at this point that I haven't answered the question. That's because the question itself is flawed. Cowboy coding isn't a choice , it's a skill level , and you can't choose to be a cowboy coder any more than you can choose to be illiterate. If you are a cowboy coder, then you know no other way. If you've become an architecture astronaut, you are physically and psychologically incapable of producing software with no design. 
If you are a quasi-engineer (or a professional engineer), then completing a project with little or no up-front design effort is a conscious choice (usually due to absurd deadlines) that has to be weighed against the obvious risks, and undertaken only after the stakeholders have agreed to them (usually in writing). And if you are a duct-tape programmer, then there is never any reason to "cowboy code" because you can build a quality product just as quickly. Nobody "prefers" cowboy coding over other methodologies because it isn't a methodology. It's the software development equivalent of mashing buttons in a video game. It's OK for the beginner levels but anybody who's moved past that stage simply won't do it. They might do something that looks similar but it will not be the same thing. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/7217",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/389/"
]
} |
7,242 | Who here is learning Go? Are other companies looking at using it? Is it likely to become widely used? | When it comes to programming languages, the old adage, "it's not who you are, it's who you know" definitely holds true. C and C++ were sponsored by AT&T, Java was brought to us by Sun, the .NET family came out of Microsoft, and all of them got very popular very quickly. Then we have Objective-C and Python, which were around for quite a while and stayed really obscure until they were discovered and hyped up by Apple and Google, respectively, and then suddenly they really took off. But languages without a major sponsor tend to languish in obscurity, no matter how good they are. Go is sponsored by Google. It's not difficult to arrive at the right conclusion here. Give it five years and it's gonna be huge. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/7242",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3111/"
]
} |
7,305 | What is your favorite method to declare a pointer? int* i; or int *i; or int * i; or int*i; Please explain why. See also: http://www.stroustrup.com/bs_faq2.html#whitespace | I prefer int* i because i has the type "pointer to an int", and I feel this makes it uniform with the type system. Of course, the well-known behavior comes in when trying to define multiple pointers on one line (namely, the asterisk needs to be put before each variable name to declare a pointer), but I simply don't declare pointers this way. Also, I think it's a severe defect in C-style languages. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/7305",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/116/"
]
} |
7,347 | I'm talking about the way we write simple routines in order to improve performance without making your code harder to read... for instance, this is the typical for loop we learned: for(int i = 0; i < collection.length(); i++ ){
// stuff here
} But, I usually do this when a foreach is not applicable: for(int i = 0, j = collection.length(); i < j; i++ ){
// stuff here
} I think this is a better approach since it will call the length method only once... my girlfriend says it's cryptic though. Is there any other simple trick you use in your own development? | insert premature-discussion-is-the-root-of-all-evil lecture That said, here are some habits I've gotten into to avoid unnecessary inefficiency, and in some cases, make my code simpler and more correct as well. This isn't a discussion of general principles, but of some things to be aware of to avoid introducing unnecessary inefficiencies into code. Know your big-O This should probably be merged into the lengthy discussion above. It's pretty much common sense that a loop inside of a loop, where the inner loop repeats a calculation, is gonna be slower. For example: for (i = 0; i < strlen(str); i++) {
...
} This will take a horrendous amount of time if the string is really long, because the length is being recalculated on every iteration of the loop. Note that GCC actually optimizes this case because strlen() is marked as a pure function. When sorting a million 32-bit integers, bubble sort would be the wrong way to go. In general, sorting can be done in O(n * log n) time (or better, in the case of radix sort), so unless you know your data is going to be small, look for an algorithm that's at least O(n * log n). Likewise, when dealing with databases, be aware of indexes. If you SELECT * FROM people WHERE age = 20, and you don't have an index on people(age), it'll require an O(n) sequential scan rather than a much faster O(log n) index scan. Integer arithmetic hierarchy When programming in C, bear in mind that some arithmetic operations are more expensive than others. For integers, the hierarchy goes something like this (least expensive first): + - ~ & | ^ << >> * / Granted, the compiler will usually optimize things like n / 2 to n >> 1 automatically if you're targeting a mainstream computer, but if you're targeting an embedded device, you might not get that luxury. Also, % 2 and & 1 have different semantics. Division and modulus usually round toward zero, but it's implementation-defined. Good ol' >> and & always round toward negative infinity, which (in my opinion) makes a lot more sense. For instance, on my computer: printf("%d\n", -1 % 2); // -1 (maybe)
printf("%d\n", -1 & 1); // 1 Hence, use what makes sense. Don't think you're being a good boy by using % 2 when you were originally going to write & 1 . Expensive floating point operations Avoid heavy floating point operations like pow() and log() in code that doesn't really need them, especially when dealing with integers. Take, for example, reading a number: int parseInt(const char *str)
{
const char *p;
int digits;
int number;
int position;
// Count the number of digits
for (p = str; isdigit(*p); p++)
{}
digits = p - str;
// Sum the digits, multiplying them by their respective power of 10.
number = 0;
position = digits - 1;
for (p = str; isdigit(*p); p++, position--)
number += (*p - '0') * pow(10, position);
return number;
} Not only is this use of pow() (and the int <-> double conversions needed to use it) rather expensive, but it creates an opportunity for precision loss (incidentally, the code above doesn't have precision issues). That's why I wince when I see this type of function used in a non-mathematical context. Also, notice how the "clever" algorithm below, which multiplies by 10 on each iteration, is actually more concise than the code above: int parseInt(const char *str)
{
const char *p;
int number;
number = 0;
for (p = str; isdigit(*p); p++) {
number *= 10;
number += *p - '0';
}
return number;
} | {
"source": [
"https://softwareengineering.stackexchange.com/questions/7347",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1773/"
]
} |
7,472 | Let's say a large corporation is planning to replace its existing version control system. Let's say it is only considering systems from major vendors that cost hundreds of thousands of dollars because they have "support." Does version control in an enterprisey environment have to be expensive? Does your medium/large corporation use a FOSS VCS such as SVN/Git/Mercurial? What has the experience been? I have to think it doesn't need to be expensive since there are so many free options, and there are probably companies that provide paid support for FOSS VCS if that is the main concern. I don't intend this question to compare VCS or decide which is best, rather just to understand experiences with VCS in a corporate IT environment. | Yes. In my (admittedly limited) experience, the non-FOSS solutions tend to be more "enterprise-y". That is: They integrate with everything under the sun. They have more built-in controls for complex business logic (permissions, access control, approval, etc). They come with support contracts and reasonably responsive tech support lines. They're well advertised to the non-technical people making VCS decisions at a high level in big companies. These attributes make them attractive to large companies, especially to people who don't have to use them. The FOSS alternatives, as counters to the above: Have plenty of third-party tools to integrate them with everything under the sun (by virtue of being more popular than proprietary alternatives), and tend to be easier to develop third-party tools for, being open source. See the previous point: easier to get external tools around a clean, simple, basic tool. By virtue of being more popular, they have wider community-based support. They don't need said advertising. Aside from that, my experience with common free VCS (Mercurial/SVN/etc) has them being faster, more reliable, and easier to use. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/7472",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2164/"
]
} |
7,482 | As per this question: I decided to implement the BitTorrent spec to make my own client/tracker. Now, as I was going through the spec, I was about 70% done implementing the BEncoding when I found a link to an implementation of BEncoding in C# written by someone else. Normally, if I were working on production code, I'd use it as a reference to check my own work against, and a baseline to write some tests to run my code against, but I found myself thinking "I'm making this, it's a for-fun project with no deadlines; I should really implement it myself - I could learn a lot" but some voice in my head was saying "Why bother re-inventing the wheel? Take the code, work it so that it matches your style/naming convention and you're done." So I'm a bit conflicted. I ended up doing the latter, and some parts of it I found better than what I had written, but I almost feel like I 'cheated'. What's your take? Is it cheating myself? Perfectly normal? A missed opportunity to learn on my own? A good opportunity to have learned from someone else's example? | If I have seen further it is by standing on the shoulders of giants. Isaac Newton It is not cheating if the code is open source and you've taken the time to understand it. Now obviously this isn't always possible due to time constraints, but try to always have a high-level overview of the code you are using. Always remember that C was derived from B. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/7482",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1554/"
]
} |
7,551 | Why are there so many programming languages? And what prompts someone to create a programming language in spite of the fact that other languages already exist? | Programming languages evolve New programming languages often learn from existing languages and add, remove and combine features in a new way. There are a few different paradigms, like object oriented and functional, and many modern languages try to mix features from them both. There are also new problems that need to be solved, e.g. the rise of multi-core CPUs. The most common solution to that has been threads, but some programming languages try to solve the concurrency problem in a different way, e.g. the Actor Model. See Erlang - Software for a Concurrent World | {
"source": [
"https://softwareengineering.stackexchange.com/questions/7551",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/67/"
]
} |
7,566 | Developer interviews are different from those in most other fields, because not only do you worry about the people you work with, benefits, job description, etc., you also have EXTRA to worry about after getting the job. Writing two applications with the exact same requirements can be vastly different if you're working in a loud vs quiet environment, using VS2003/.NET 2.0 vs VS2010/.NET 4.0, using SVN vs VSS. Is it OK to give the potential employer the straight-up Joel Test? I try to ask as many questions as I can to get a feel for the type of environment I will be working in, which is extremely important from my perspective, but what's the best way to cut to the chase and just ask the tough questions (like they ask you during the same interview)? NOTE: By the "Joel Test" I mean a specific list of things that are deal breakers that are important to you (not necessarily Joel), but you may not have time to get out using the traditional casual "conversational" way of asking them, so you decide to either email or schedule another meeting, or ask other people, etc. | A job interview goes both ways -- a company is interviewing you and you are interviewing the company. I wouldn't come out literally with a "what's your Joel Test score?", but I would ask the individual questions that were particular deal-breakers for me in a work environment. It doesn't need a huge build-up. A good time to ask these questions is at the technical part of the interview process, when they say "do you have any questions for us?". You can lead in with something along the lines of "can you describe a typical day on the job here?" and go from there. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/7566",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1521/"
]
} |
7,581 | Is Java becoming the de facto standard for Linux application development in the same way .NET is the standard for Windows application development? If not, why not? | In short: No. It really depends on what sort of application you are writing. For many the answer is still regular old C/C++ (if doing, say, Qt or GTK+ GUI development). Many doing GTK+ development may also be using Python + PyGTK. If doing web or web services development, you see lots of Ruby, Python, PHP, and Java. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/7581",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2527/"
]
} |
7,618 | Perhaps the greatest promise of using object-oriented paradigm is the code reuse. Some dispute that this was achieved. Why was it (not) achieved? Does code reuse as OOP defines it, make projects more productive? Or more manageable? Or easier to maintain? Or with more quality? Probably we all agree that code reuse is a good thing, but there are several ways to achieve this goal. The question is about the method of code reuse offered by OOP. Was it a good thing? Are there better methods to achieved code reuse than object orientation, sub-classing, polymorphism, etc.? What ways are better? Why ? Tell us your experience with OOP reuse or other paradigms reuse. | Code reuse is a pretty good idea. Not a great one . I have a perspective drawn from about 30 years of software engineering, trying to "reuse". I started investigating "code reuse" as a research topic back in the 80s, after discovering I had reused the design of one OS I built in the early 70s, for another OS I built in the late 70s. The good part of code reuse is the ability to sometimes reuse honest-to-god preexisting code. But the world is full of code; how can find what you want? Here's what I call the Reuse Curse : I'm Santa Claus (ok Open Source), and I have a bag of 1 billion software components. You can have any one of them. Good luck choosing. To solve the reuse problem well: the reuser needs to somehow specify what he needs (functionaly, performance, target language, environment assumptions, ...) there must be a library of "reusable" code that has been indexed in various ways by these potential criteria some mechanism must exist to pick out candidate elements (at a billion elements, you can't look at them all personally) there needs to be a way to characterizer how far away from the specification the chosen candidates are some regular process should exist to allow the reuser to modify the chosen reusable code
(here is OOP's greatest contribution: you can edit an existing component/object by overriding its slots. OOP doesn't provide any other help). all this must clearly be cheaper than simply recoding it Mostly what has been discovered over the years is that for code to be reusable, it sort of has to be designed for that purpose, or it contains too many implicit assumptions. The most successful code reuse libraries have actually been pretty small. Arguably libraries and frameworks are "reusable" code and they are extremely successful; Java and C# succeed not because they are pretty good computer languages, but rather because they have huge well-designed, implemented and documented libraries available. But people don't look at the source code in the libraries; they simply call a well-documented API (designed to be generally usable). What code reuse hasn't done (OOP neither) is provide orders of magnitude improvement in our ability to code systems. I think the key flaw is that any kind of code reuse is fundamentally limited because code has too many assumptions built in . If you make the code tiny, you minimize assumptions, but then the cost to build from scratch isn't very big and the reuse gains are not effective. If you make the code chunks huge, they're pretty much useless in a new context. Like Gulliver, they are tied to the beach by a million tiny strings, and you simply can't afford to cut them all. What we should be working on is reuse of knowledge to construct code . If we can do this, then we can apply that knowledge to construct code that we need, handling the current set of assumptions. To do this, one still needs the same specification capability to characterize software components (you still have to say what you want!). But then you apply this "construction" knowledge to the specifications to generate the code you want. As a community, we aren't very good at this yet. But people do it all the time; why can't we automate it? There is a lot of research, and this shows it can be done in many circumstances. One key piece of machinery needed for this are mechanical tools for accepting "component descriptions" (these are just formal documents and can be parsed like programming languages) and apply program transformations to them. Compilers already do this :-} And they are really good at the class of problem they tackle. UML models with code generation are one attempt to do this. Not a very good attempt; pretty much what one says in most UML models is "I have data that looks like this". Pretty hard to generate a real program if the functionality is left out. I'm trying to build practical program transformation systems, a tool called DMS . Been pretty well distracted by applying program transformations not so much to abstract specifications to generate code, but rather to legacy code to clean it up. (These are the same problem in the abstract!). (To build such tools takes a lot of time; I've been doing this for 15 years and in the mean time you have to eat). But DMS has the two key properties I described above: the ability to process arbitrary formal specifications, and the ability to capture "code generation knowledge" as transforms, and apply them on demand. And remarkably, we do generate in some special cases, some rather interesting code from specifications; DMS is largely built using itself to generate its implementation. That has achieved for us at least some of the promise of (knowledge) reuse: extremely significant productivity gains. 
I have a team of about 7 technical people; we've written probably 1-2 MSLOC of "specifications" for DMS, but have some 10MSLOC of generated code. Summary: reuse of generation knowledge is the win, not reuse of code . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/7618",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/389/"
]
} |
7,629 | What coding standards do you think are important for .NET / C# projects? This could be anything from dealing with curly braces and spacing and pedantry like that. Or it could be more fundamental questions such as what namespaces in the .NET Framework to avoid, best practices with config files, etc. Try to avoid creating a post that is simply the corollary to another. For example, it would be fine to have one post focusing on curly braces. We don't need two to support one style vs. the other. The idea is not to vote for your pet standard, but rather to flesh out what should be thought about when creating standards. | Here is the official Microsoft Guide on coding standards for the .NET framework Version 4.0. If you want the older version for 1.1, try here . I don't necessarily follow this to a 'T', as they say. However, when in doubt, this is the best place to start to be consistent with the current .NET framework, which makes it easier on everyone, no matter if they're new to your particular project or not. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/7629",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3124/"
]
} |
7,686 | I am curious about the experiences of programmers who have gone beyond college or university and now work in the industry. I am not talking about academia (you need a PhD there anyway). Do you have a Master's degree? Has it helped your career? Are there any other benefits besides the knowledge one gains while pursuing the degree? | Yes it does. It helps a lot in getting your resume shortlisted by HR, who have no idea what programming is all about. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/7686",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3262/"
]
} |
7,720 | I'm looking at licensing some open source software and am looking at the GPL. What are the pros and cons of using this license? | Ok, my list of pros and cons of GPL: Pros It makes people think hard about whether they really buy into Open Source; are you prepared to live by it, and let other people use what you've written, rather than just liking it because of what you can get out of it? It makes sure that when something has been developed by the Open Source community, it stays Open Source; no chance of someone taking all the work that others have been doing, repackaging it and selling it on. Cons It's a complete no-no for most corporate organisations; they can't afford the risk of GPL-licenced code getting into their products, so virtually all medium-large companies have clauses explicitly banning GPL-licenced code. It puts people off Open Source. Is it really fair, that because I use your Open-Source image picker control in my app, my whole app must now be Open Source too? Even if I improved the image picker and contributed that code back to the community? The terms are too onerous for many developers. Lots of people aren't aware of the stringent terms of GPL, so use it as it's the licence they've heard of without realising what restrictions they're placing on anyone else that wants to use it. Its extremely viral. If your project contains a component that contains a component that contains a component that is under the GPL (phew!), your whole project is subject to the GPL too. Ultimately for me the cons outweigh the pros. To me it smacks of Open Source Evangelists trying to trick the world into going Open Source instead of persuading the world of its benefits. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/7720",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/175/"
]
} |
7,823 | In the comment to this great post, Roy Osherove mentioned the OAPT project that is designed to run each assert in a single test. The following is written on the project's home page: Proper unit tests should fail for exactly one reason, that’s why you should be using one assert per unit test. And, also, Roy wrote in comments: My guideline is usually that you test one logical CONCEPT per test. You can have multiple asserts on the same object. They will usually be the same concept being tested. I think that there are some cases where multiple assertions are needed (e.g. Guard Assertion), but in general I try to avoid this. What is your opinion? Please provide a real world example where multiple asserts are really needed. | I don't think it's necessarily a bad thing, but I do think we should strive towards only having single asserts in our tests. This means you write a lot more tests and your tests end up testing only one thing at a time. Having said that, I would say maybe half of my tests actually only have one assert. I think it only becomes a code (test?) smell when you have about five or more asserts in your test. How do you solve multiple asserts? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/7823",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1716/"
]
} |
7,861 | I know we've covered what questions you should ask about a company before you would decide to work there . But what do you do with the answers? In other words, what would you consider a dealbreaker? I.e. what would scare you so much about a company that you wouldn't work there, even if everything else was great? For example, if they tell me they don't use version control, I wouldn't work there. End of story. | Companies that feel the need to mention up-front that unpaid (for salaried employees) overtime is required 100% of the time. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/7861",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/221/"
]
} |
7,927 | No one's perfect, and no matter what we do, we are going to produce code that has bugs in it from time to time. What are some methods/techniques for reducing the number of bugs you produce, both when writing new software and changing/maintaining existing code? | Avoid fancy coding. The more complicated the code, the more likely there's bugs. Usually on modern systems, clearly written code will be fast and small enough. Use available libraries. The easiest way to not have bugs writing a utility routine is to not write it. Learn a few formal techniques for the more complicated stuff. If there's complicated conditions, nail them down with pen and paper. Ideally, know some proof techniques. If I can prove code correct, it's almost always good except for big, dumb, obvious bugs that are easy to fix. Obviously, this only goes so far, but sometimes you can formally reason about small but complicated things. For existing code, learn how to refactor: how to make small changes in the code, often using an automated tool, that make the code more readable without changing the behavior. Don't do anything too quickly. Taking a little time up front to do things right, to check what you've done, and to think about what you're doing can pay off big time later. Once you've written the code, use what you've got to make it good. Unit tests are great. You can often write tests ahead of time, which can be great feedback (if done consistently, this is test-driven development). Compile with warning options, and pay attention to the warnings. Get somebody else to look at the code. Formal code reviews are good, but they may not be at a convenient time. Pull requests, or similar if your scm doesn't support them allow for asynchronous reviews. Buddy checking can be a less formal review. Pair programming ensures two pairs of eyes look at everything. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/7927",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/190/"
]
} |
8,055 | If I were to start focusing on the .NET platform and be self-employed, then I would probably like to have some Windows 7, Windows Server 2008, and Visual Studio 2010 licenses just for the development environment and for testing, and then a few licenses for the production environment (Windows Server 2008 Web edition), plus upgrades when new versions are available. This will add up to quite a lot of money. Is there any kind of bundle discount that I can get from Microsoft in such a case? And what is the requirement to be able to get that discount? | How about a 100% discount? If you are making software you intend to sell, you qualify for BizSpark, which gives all your developers MSDN subscriptions. If you intend instead to offer your services, you don't qualify for BizSpark, but you still don't need to buy separate licenses for dev, staging etc. You can get an MSDN subscription, which covers one developer across any number of machines other than production. You don't install dev tools on production, and your clients are responsible for the Windows, SQL etc licenses they need. It is generally useful to join the partner program. The Registered level is free and lets you buy an MSDN subscription at a dramatically reduced price, 80-90% off or so. The program names vary over time - Empower, Action Pack, etc - so you would need to check the partner program to be sure what they are and what they cost at the moment. Finally, back to the free angle, don't rule out Visual Studio Express, SQL Express etc - absolutely no cost ever and almost all the features of the full products. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/8055",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/18/"
]
} |
8,104 | So I know everyone here is all about private offices, how many developers actually have them. I am sort of half skeptical. I can believe that lead developers have them, but that's normally just one person in your average office. That makes me wonder, how many developers have private offices. Which leads to the actual question: why should they have them? | In the management world, where concentration on a task is not an issue, offices are a means to represent status. They think "private office == more status, big private office == even more status, etc." What most people fail to understand: Every time our concentration is broken, we create at least one bug and/or delay the deadline for another half-hour. Private offices is not a "nice to have" for developers but a must. This is not about status, this is about brain physics. Working in an open space costs at least 30% productivity (I read that in a newspaper, start with this blog post if you want to know more). Worst part: This goes unnoticed. If you always work in such an environment, you'll never notice that it happens! Until you wonder why your neck is stiff, you feel tense/nervous all the time, etc. If you want another productivity increase, take the telephones away, too. Unless you're doing production support, the next day is always soon enough. To relax the team, supply free soft drinks. That costs $100-300/month for a team of 10 and makes sure they take regular breaks, drink enough (so they don't dehydrate). The funny thing is: These aren't a bunch of myths but hard facts. Still, most companies ignore these simple, cheap ways to boost productivity. Well, except for the successful ones, of course (Google, Microsoft, etc). See also: Open Offices Reduce Productivity and Increase Stress The High Cost of Interruptions A study on unplanned interruptions in software development How to explain a layperson why a developer should not be interrupted while neck-deep in coding? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/8104",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3792/"
]
} |
8,111 | I'm a self-taught programmer and have just started a computer science degree to supplement my knowledge and fill in the gaps. However, I'm already debating the direction of my education. I want a 4 year bachelor's degree for sure, but after that, I'm not sure: Is it worth it to get an M.Sc? What about a Ph.D.? What opportunities do these degrees open up? Currently my goal is to be a software developer, but there are a number of fascinating fields in the software industry and I'm certainly interested in investigating many of them. It is in this respect that I think a higher degree may be worth it -- even if it wouldn't necessarily help a career of software development. So will it be worth it? Will grad school open doors? | Getting a PhD does two things to you and it uses up 4 or more years. You will need to decide whether those two things are worth the time. First, it gives you some initials after your name. For the rest of time, people who see those initials will think "wow, you must be really smart!" (and often, they will say it out loud.) On a resume it will generally help you, though in some circumstances it might hurt you, with people thinking you're overqualified or an egghead. Second, and more importantly in my opinion, is the changes in your brain and your attitude that happen over the course of the degree. You will end up knowing more about some small part of the world than any other person. You will stand in front of 3 or 4 people who are experts, sometimes world-renowned experts, and teach them about your work. When you walk in, those experts will be supervisor, examiner, "bosses" of a sort and when you walk out they will be your peers. You will learn a lot about one corner of computer science and a lot more about yourself and your capabilities. You will be confident being "the expert" when required. And that changes everything. Unless you know now you want to be a prof, or to join a particular research lab, it seems unusual to me that you could decide about the PhD before doing the undergrad work. Go do the undergrad, and make friends with some grad students. They will tell you all you want to know about the life of a grad student, the life of a baby prof, the job prospects when you're out, and more. I don't regret the time I spent on my PhD. It has opened many doors for me. It has made me more memorable ("right, you're the woman with the PhD, aren't you?") and been an asset to some of my teams independent of my abilities or knowledge ("we have a PhD on our team!"). But the industry I joined then is not the industry you're joining now. Talk to people who are in the thick of it now and then make a decision as you finish the undergrad work. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/8111",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/54/"
]
} |
8,187 | A recent question on Stack Overflow provoked a discussion about the immutability of primary keys. I had thought that it was a kind of rule that primary keys should be immutable. If there is a chance that some day a primary key would be updated, I thought you should use a surrogate key. However, it is not in the SQL standard, and some RDBMSs' "cascade update" feature allows a primary key to change. So my question is: is it still a bad practice to have a primary key that may change? What are the cons, if any, of having a mutable primary key? | You only need the primary key to be immutable if it's linked to a foreign key, or if it's used as an identifier outside the database (for example in a URL pointing to a page for the item). On the other hand, you only need to have a mutable key if it carries some information that might change. I always use a surrogate key if the record doesn't have a simple, immutable identifier that can be used as the key. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/8187",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4017/"
]
} |
8,283 | What was a project or spec that got put on your desk that could not possibly be done? How did you explain the dilemma to the "requester"? More importantly, did they understand after you explained the fundamental issue? | I was told to make the printer print faster. Serious, and I was written up for failing. The boss wasn't very tech savvy and didn't understand why I couldn't speed it up. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/8283",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3792/"
]
} |
8,301 | In my experience, software developers tend to wear multiple hats and fill multiple roles with different responsibilities. From not only coding, but sometimes also writing SQL, designing the user-interface, designing the database, graphics manipulation, to even QA testing. If the primary role is to write software/code, what roles should the developer not take on? Are there any? The intention of this question is not because a developer is incapable of filling another role-- but having the additional role actually works against the primary role , or should really be a dedicated role of someone who does not primarily program. | Sysadmin. Developing software and handling the IT infrastructure are two different skillsets that look similar to an outsider. (It's all just banging on computers, right?) For a smallish company, the temptation will be very strong to make The Computer Guy responsible for all the machines in the office. If you have the skills to actually wear both hats, awesome; but it's one of those things that can be a much greater time sink than people realize, and if you're self-teaching as you go, chances are you're not doing it very well. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/8301",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/14/"
]
} |
8,588 | SQL is officially pronounced as /ˌɛskjuːˈɛl/ like "S-Q-L", as stated in Beaulieu, Alan (April 2009). Mary E. Treseler. ed. Learning SQL (2nd ed.). Sebastapol, CA, USA: O'Reilly. ISBN 978-0-596-52083-0. But often it is pronounced /ˈsiːkwəl/ like "sequel", what is the history behind this second pronunciation? | SEQUEL = Structured English QUEry Language. For a good historical perspective read Don Chamberlin: ...A bunch of things were happening at about this time that I think we ought to mention just in passing. One was that we had to change the name of our language from SEQUEL to SQL. And the reason that we had to do that was because of a legal challenge that came from a lawyer. Mike, you probably can help me out with this. I believe it was from the Hawker Siddeley Aircraft Company in Great Britain, that said SEQUEL was their registered trademark. We never found out what kind of an aircraft a SEQUEL was, but they said we couldn't use their name anymore, so we had to figure out what to do about that. I think I was the one who condensed all the vowels out of SEQUEL to turn it into SQL, based on the pattern of APL and languages that had three-lettered names that end in L. So that was how that happened. ... | {
"source": [
"https://softwareengineering.stackexchange.com/questions/8588",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/666/"
]
} |
8,631 | When you are defining a function/variable/etc and are not sure what to name it, what do you name it? How do you come up with a name? If you use a temporary name as a place-card until you give it it's real name, what temporary name do you use? update I have been using things like WILL_NAME_LATER , NEEDS_NAME , or TO_BE_NAMED . I was hoping there was an adopted convention, I was actually hoping that if I used this adopted convention my IDE would highlight the name until I changed it. | It's nearly impossible to not be able to think of a name for an artifact you want to design. You may not like what name you come up with because it isn't concise or sexy, but if you think too hard, you'll end up with a poorly named artifact. Let's say you have something that helps you construct objects, but you don't know this is typically called a factory. Just call it ObjectCreator. It sounds obtuse, but at least it's clear. Let's say you have a dictionary that converts hostnames to IP addresses. Just go ahead and call it HostnamesToIpAddresses. Sure it's long, but it says exactly what it does. The inability to come up with a name for something means you don't know what it is doing, which also means you have a greater problem before you. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/8631",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1785/"
]
} |
8,721 | Here's a bit of information about me before starting with the question. I am a Computer Science undergraduate, Java being my primary coding language. The basic problem at my university is the teaching standards. No one is concerned with teaching coding skills to students, only theoretical knowledge. The effect is that most of my fellow college mates don't understand programming at all. Even I haven't been able to come out of the traditional programming environment, which limits my coding to an extent. What are the possible ways by which I can develop and expand my programming/coding skills? Also, can you suggest sources for the same? Edited:
Sources suggesting development of coding skills. | My favorite quote is from Confucius: I hear, I know. I see, I remember. I
do, I understand. All the knowledge I got was from applying one single strategy: Take the most challenging path, always. You want to learn C#? Get a job as a C# developer. You want to learn Italian? Go there with an English/Italian dictionary, and speak Italian. You want to learn coding? Code! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/8721",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4522/"
]
} |
8,758 | Disclaimer: I by no means condone the use of pirated software. Have you ever witnessed the use of pirated software for development purposes? Maybe a company didn't have enough money to buy a piece of software and there were no free alternatives? Maybe a company wanted to try something out before buying and there were no trial licenses for that product. Whatever the circumstances, have you worked at a company where using pirated/cracked software was accepted? Were there any consequences to doing this? | While I don't have any problem when some companies or individuals use unlicensed software when they can't afford it (yet), I'm always amazed to see how commercial software development factories do it without shame. They are disrespectful to their own profession! Thanks to programs like Microsoft BizSpark (3 years of free Microsoft software for any startup that generates less than $1,000,000 a year in revenue), you can now get it legally. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/8758",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3262/"
]
} |
8,886 | Interested in knowing from the more experienced ones if someone can find a job as a programmer without even a high school degree. Consider the said person to be an average programmer. Would someone even consider giving him/her a chance at an interview? The languages of interest would be Python/PHP/Java/C#. Please answer for your region/city/country only. No "go back to school" answers please. | Your biggest difficulty is going to be getting through the HR filter. If you can do that, experience will trump education (most of the time). In the meantime, try to find some small shop that just needs someone who can code. You should also try to join an open source project (or two) to get some experience and show that you have some skills. You are going to have to start small and build on that. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/8886",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4276/"
]
} |
8,890 | So the other day my boss (let's call him Colfax) asked me to work on a project, but that I should not discuss it with anyone (including the other programmers on my team) and that it would have to be done outside of normal work hours. In exchange, Colfax offered me "off-the-book" vacation days equal to the hours spent on the project. When I asked Colfax if his boss (let's call him Schuyler) knew what he was offering, Colfax said that Schuyler does not know and implied that he (Colfax) would get into trouble if Schuyler found out. My boss also said that if I were to go along with this, my efforts would be remembered for "future consideration". The work is for our employer so everything there is on the up-and-up. However, I have an uneasy feeling about the whole thing. Part of me wants to work on the project -- as it's you know -- coding and coding something cool and fairly simple. On the other hand, the whole thing seems seedy and underhanded. Would I be a "bad employee" for refusing extra work? Or am I morally justified to not do the work? UPDATE I know it's been a while since I posted this question, but I thought the folks who participated in the discussion might be interested to know that Colfax quit a couple of months after this conversation. So, if I had followed along, it would have probably been for nothing. Regardless, thanks for the comments everyone. | The fact that Schuyler can't know is very suspicious. That alone makes me say: "STAY AWAY" Colfax is asking you because you think coding is neat. Maybe that means you're even good at it. But it also means that if you do it, he will almost certainly remember that you said "yes" in the future. Mostly when he has another "after hours" project for you. There's every chance that "future considerations" never materialize and he tries to get you to defer these off the book vacation days until far in the future, probably long after one of you leaves the company. If he can hide the fact that you're on an off the book vacation, why can't he just hide you in plain sight while you work on this "project that benefits the company" ? Answer:He probably can't do either. If this is one of those deals where it's truly a case that management doesn't believe in it but Colfax thinks it's worth doing, I would suggest telling him to forget about the comp days and that you want to present the results along with him (it's not for the benefit of the company unless the company finds out about it, right?). His response to that will tell you a lot about where he's standing ethically. And you could do the project with a clear conscience. That's is also the best way to make sure you get "future considerations" as his bosses will know your contribution. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/8890",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4275/"
]
} |
9,006 | Functional programming is one of the oldest programming paradigms, yet it isn't used much in industry compared to more popular paradigms. It has, however, largely been emphasized in academia. What's your strongest opinion against functional programming? | The problem is that most common code inherently involves state -- business apps, games, UI, etc. There's no problem with some parts of an app being purely functional; in fact, most apps could benefit in at least one area. But forcing the paradigm all over the place feels counter-intuitive (a short sketch contrasting the two styles follows this entry). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/9006",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/18/"
]
} |
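Editor's note: the answer above argues that everyday application code is inherently stateful. As a purely illustrative sketch (written in Scala; the shopping-cart example and all of its names are invented here and are not from the original answer), here is the same tiny piece of logic written once with hidden mutable state, as most business/UI code is, and once in a pure style that threads immutable values through return values:

```scala
// Hypothetical example, not taken from the original answer.
object CartDemo {
  // Imperative, stateful version -- the shape most business/UI code takes.
  final class MutableCart {
    private var items: List[Int] = Nil          // prices in cents, hidden mutable state
    def add(priceCents: Int): Unit = items = priceCents :: items
    def total: Int = items.sum
  }

  // Purely functional version -- every change returns a new immutable value.
  final case class Cart(items: List[Int]) {
    def add(priceCents: Int): Cart = Cart(priceCents :: items)
    def total: Int = items.sum
  }

  def main(args: Array[String]): Unit = {
    val m = new MutableCart
    m.add(999)
    m.add(500)
    println(m.total)                             // 1499

    val c = Cart(Nil).add(999).add(500)          // state threaded through return values
    println(c.total)                             // 1499
  }
}
```

The mutable version is the familiar shape; the pure version shows what "forcing the paradigm all over the place" asks of you: every change has to flow back out through a returned value.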
9,095 | F# and Scala are both functional programming languages that don't force the developer to use only immutable data types. They both have support for objects, can use libraries written in other languages, and run on a virtual machine. Both languages seem to be based on ML. What are the biggest differences between F# and Scala, aside from the fact that F# is designed for .NET and Scala for the Java platform? | Major Differences: Both Scala and F# combine OO-imperative programming and functional programming into one language. Their approaches to unifying the paradigms are vastly different, though. Scala tries to fuse the two paradigms into one (we call it the object-functional paradigm), whereas F# provides the two paradigms side by side. For example, algebraic data types in F# are purely functional constructs with no OO-ness in them, whereas ADTs in Scala are still regular classes and objects. (Note: in the process of compilation to CLR bytecode, even F# ADTs become classes and objects, but they are not visible to the F# programmer at the source level.) F# has full Hindley-Milner style type inference; Scala has partial type inference. Support for subtyping and pure OO-ness makes Hindley-Milner style type inference impossible for Scala. Scala is a much more minimalistic language than F#. Scala has a very small, orthogonal set of constructs that are reused throughout the language, while F# seems to introduce new syntax for every little thing, thus becoming very syntax-heavy compared to Scala. (Scala has 40 keywords, whereas F# has 97. That should tell you something. :-) F#, being a Microsoft language, has excellent IDE support in the form of Visual Studio. Things are not so good on the Scala side: the Eclipse plugin is still not up to the mark, and the same goes for the NetBeans plugin. IDEA seems to be your best bet at the moment, though it doesn't even come close to what you get with Java IDEs. (For Emacs fans, there's ENSIME. I have heard a lot of good things about this package, but I haven't tried it yet.) Scala has a far more powerful (and complex) type system than F#. Other Differences: F# functions are curried by default; in Scala, currying is available but not used very often. Scala's syntax is a mix of those of Java, Standard ML, Haskell, Erlang, and many, many other languages. F#'s syntax is inspired by those of OCaml, C#, and Haskell. Scala supports higher kinds and type classes; F# doesn't. Scala is much more amenable to DSLs than F#. PS: I love both Scala and F#, and hope they become the predominant languages of their respective platforms in the future. :-) (A short Scala sketch of the ADT and currying points follows this entry.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/9095",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/18/"
]
} |
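Editor's note: to make two of the points above concrete -- that a Scala ADT is still an ordinary class hierarchy, and that currying exists in Scala but is opt-in rather than the default as in F# -- here is a small, self-contained Scala sketch. It is illustrative only; the shape types and function names are invented for this example and do not come from the original answer.

```scala
// Illustrative sketch: a Scala ADT is just a sealed trait plus case classes,
// and currying is written explicitly via multiple parameter lists.

sealed trait Shape                                   // the ADT is a regular trait...
case class Circle(radius: Double) extends Shape      // ...and its cases are regular classes
case class Rect(w: Double, h: Double) extends Shape

object ShapeDemo {
  // Pattern matching over the ADT, much as in any ML-family language.
  def area(s: Shape): Double = s match {
    case Circle(r)  => math.Pi * r * r
    case Rect(w, h) => w * h
  }

  // Currying is opt-in: a second parameter list makes partial application natural.
  def scale(factor: Double)(s: Shape): Shape = s match {
    case Circle(r)  => Circle(r * factor)
    case Rect(w, h) => Rect(w * factor, h * factor)
  }

  def main(args: Array[String]): Unit = {
    val shapes = List(Circle(1.0), Rect(2.0, 3.0))
    val double: Shape => Shape = scale(2.0)          // partial application, akin to F#'s default currying
    println(shapes.map(double).map(area))            // List(12.566..., 24.0)
  }
}
```

In F# the equivalent would typically be a discriminated union with functions curried by default; in Scala the same idea compiles down to ordinary classes, and the extra parameter list is a deliberate choice by the author.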
9,099 | As a programmer, do you see any professional or other advantage in using your real name in online discourse, versus an invented handle? I've always gone by a single username and had my real name displayed whenever possible, for a few reasons: My interests online are almost exclusively professional and aboveboard. It constructs a search-friendly public log of all of my work, everywhere. If someone wants to contact me, there are many ways to do it. My portfolio of work is all tied to me personally. Possible cons to full disclosure include: If you feel like becoming involved in something untoward, it could be harder. The psychopath who inherits your project can more easily find out where you live. You might be spammed by people who are not worth the precious time that could be better spent writing more of the brilliant software you're famous for. Your portfolio of work is all tied to you personally. It seems, anyway, that the vast majority of Stack Overflow users go by invented handles rather than real names. Notable exceptions include the best-known users, who are typically well established in the industry. But how could we ever become legendary rockstar programmers if we didn't get our names out there? Discuss. | The biggest thing I can think of is both an advantage and a disadvantage: everything you put online under your real name will follow you. This is good when you are posting good, constructive things. It's bad when you post a picture of yourself from that night, or when you say something offensive or just plain stupid. I find that using my real name helps keep me in check -- I think more about what I say and how I say it. But it has on occasion been inconvenient, when using my name invited personal attacks for various reasons. All in all, my approach is to use my real name when dealing with professional-ish stuff and to use a handle for personal interests and things I might not want to be so easily searchable. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/9099",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2107/"
]
} |
9,180 | Possible Duplicate: "I've graduated with a Computer Science degree but I don't feel like I'm even close to being an expert programmer." I recently graduated from university and have since joined a development team where I am by far the least experienced developer, with maybe a couple of work terms under my belt. Meanwhile, the rest of the team is rocking 5-10 years of experience. I was a very good student and a pretty good programmer when it came to bottled assignments and tests. I have worked on some projects with success, but now I'm working with a much bigger codebase, and the learning curve is much steeper. I was wondering how many other developers started out their careers on teams and felt like they sucked. When does this change? How can I speed up the process? My seniors are helping me, but I want to be great and show my value now. | The interesting thing about software development is that it doesn't matter how good you are -- there is always someone better, or at least different enough to still teach you something. It's also not uncommon to look at code you wrote a few months ago and think it sucks, regardless of your level of experience. For me, once I realized the gap between my skills and the skills of my coworkers, I started learning like I'd never learned before -- reading other people's code, blog posts, and books, paying attention to how my coworkers accomplished things, and so on. University prepared me for computer science, but not really for software development. It's almost 4 years later, and I'm a much stronger software developer than I used to be. So, just hang in there and learn as much as you can from the people around you. It'll get better. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/9180",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2446/"
]
} |