Columns: source_id (int64, 1 to 4.64M), question (string, lengths 0 to 28.4k), response (string, lengths 0 to 28.8k), metadata (dict)
73,304
We must have all come across them - developers who have been around for ages and have fantastic domain knowledge, yet fail to share that knowledge with their team. The team desperately needs that knowledge shared, but they can't seem to pry it out of the hoarder. In what ways have teams successfully solved this problem?
Remove code-ownership from the team. Spread the workload. Do code reviews. Organise knowledge-transfer sessions, wait a few sessions and then ask them to do a presentation on their area. It is, of course, imperative that if you're not the manager you have your manager's backing, but if everyone on a team is regularly sharing information, there are only so many excuses someone can come up with for not doing the same thing. Also, his manager should sit down with him and explain that this doesn't threaten his job, because fear for his job is usually why he's hoarding in the first place. It is a good thing for the individual not to be the fount of all knowledge. It frees him to do other, more interesting, things.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/73304", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/8811/" ] }
73,334
I often hear co-workers saying to each other, "That's a horrible, horrible hack." What I take away from that is that it's not good. When I ask them if it works, they say "yes, but it's not good". Does that mean it's not a good solution? How is a solution bad if it works? Is it because it goes against good practice? Or because it's not maintainable? Is it using a side effect of the code as part of your solution? It's interesting to me when something is classified as a hack. How can you identify one?
It's applying a temporary band-aid to a large gaping wound. It's fixed for now, but it is going to cause even more problems later. An example I've recently seen: you want a person named "Jim" to always appear first in an alphabetical list. To quickly solve it, you rename him to " Jim". This is a hack that will surely come back to bite you later.
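As a rough sketch of why that particular hack bites back (the Person class, flag name and LINQ-based sort below are all made up for illustration): the hack pollutes the data itself, while the fix states the intent in code and leaves the data clean.

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical illustration of the " Jim" hack versus an explicit fix.
class Person
{
    public string Name { get; set; }
    public bool PinToTop { get; set; }   // explicit, self-describing flag
}

static class PeopleSorting
{
    // The hack: rely on " Jim" (leading space) sorting before everything else.
    // It breaks the moment the name is displayed, searched, trimmed or exported.
    public static IEnumerable<Person> SortWithHack(IEnumerable<Person> people) =>
        people.OrderBy(p => p.Name);

    // The fix: say what you mean; the stored name stays correct.
    public static IEnumerable<Person> SortFixed(IEnumerable<Person> people) =>
        people.OrderByDescending(p => p.PinToTop).ThenBy(p => p.Name);
}
```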
{ "source": [ "https://softwareengineering.stackexchange.com/questions/73334", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2446/" ] }
73,467
I've currently inherited an application at work and, to my dismay, I have realized that the user passwords stored in the database are encrypted using an in-house encryption function, which also includes the ability to decrypt. So all someone with production database access really needs to do is copy the user table and the encryption assembly, and they would have access to 100,000 email addresses and potential passwords for them. I'm trying to explain to the business why this is not a good idea, but the security concepts seem to go over their heads as they are not that technically minded (it's for government). Plus, there is actually existing functionality within the application for admin users to retrieve users' passwords in order to log in as them and do stuff (which, they have said, they require). So they don't understand the security implications. And in order to implement a stronger security policy (hashing passwords so they can't be easily retrieved), I have to remove existing functionality for them. What should I do? I didn't build the password system in the first place, so it's not like I can be blamed if anything does go wrong. On the other hand, I don't feel good about it and I also don't want to have access to 100,000 potential email logons.
Implement the functionality they need in a secure way. Administrators logging in as another user can be implemented without them knowing the user's password. They can log in as themselves, and then have some 'change-identity' function available. Securing a password database is not a business concern, it is a technical concern. Not doing so is a bug. If the business thinks of security as a functionality tradeoff, security will lose. You should not give them any reason to think of it this way.
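A minimal sketch of both points, assuming modern .NET (PBKDF2 via Rfc2898DeriveBytes; the impersonation type and its method are hypothetical and stand in for whatever session mechanism the application uses):

```csharp
using System;
using System.Security.Cryptography;

static class PasswordHasher
{
    // Store only salt + hash: the original password cannot be recovered by anyone.
    public static (byte[] Salt, byte[] Hash) Hash(string password)
    {
        var salt = new byte[16];
        using (var rng = RandomNumberGenerator.Create())
            rng.GetBytes(salt);
        using (var kdf = new Rfc2898DeriveBytes(password, salt, 100_000))
            return (salt, kdf.GetBytes(32));
    }

    public static bool Verify(string password, byte[] salt, byte[] hash)
    {
        using (var kdf = new Rfc2898DeriveBytes(password, salt, 100_000))
            return CryptographicOperations.FixedTimeEquals(kdf.GetBytes(32), hash);
    }
}

// Admins never learn a password: they authenticate as themselves, then the
// application switches the effective identity and records an audit trail.
class ImpersonationService // hypothetical
{
    public void ActAs(int adminId, int targetUserId)
    {
        // 1. verify adminId holds the "impersonate" permission
        // 2. write an audit record (who, whom, when)
        // 3. issue a session/ticket for targetUserId
    }
}
```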
{ "source": [ "https://softwareengineering.stackexchange.com/questions/73467", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/3214/" ] }
74,086
Lately, I've been working a lot in PHP and specifically within the WordPress framework. I'm noticing a lot of code in the form of: if ( 1 == $options['postlink'] ) Where I would have expected to see: if ( $options['postlink'] == 1 ) Is this a convention found in certain languages / frameworks? Is there any reason the former approach is preferable to the latter (from a processing perspective, or a parsing perspective or even a human perspective?) Or is it merely a matter of taste? I have always thought it better when performing a test, that the variable item being tested against some constant is on the left. It seems to map better to the way we would ask the question in natural language: "if the cake is chocolate" rather than "if chocolate is the cake".
The main reason to do this (the so-called "Yoda conditional") is to prevent accidents where you inadvertently use the assignment operator ( = ) instead of the equality comparison operator ( == ). That is, if you made the mistake of doing: $foo = 5; if ($foo = 1) { // Stuff } The statement will evaluate to true (or, in the case of some languages—like PHP—a truthy value) and you'll have a hard-to-find bug. But if you did: $foo = 5; if (1 = $foo) { // Stuff } You'll receive a fatal error because you can't assign a value to an integer literal. But as you pointed out, reversing the order generally makes things less readable. So many coding standards (though not all; WordPress's, notably, mandates the reversed form) suggest or require $foo == 1 despite the bug-hunting benefits of 1 == $foo . Generally, my advice is to follow whatever established coding standard there is, if there is one: for WordPress, that means using Yoda conditionals. When there isn't, and it's impossible to establish one through consensus with your peers, it's dealer's choice.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/74086", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/23775/" ] }
74,142
Dijkstra writes here : Besides a mathematical inclination, an exceptionally good mastery of one's native tongue is the most vital asset of a competent programmer. I do not understand the latter part of this quote. Can you please explain or elaborate? P.S. I grew up in India. I speak Bengali at home; I speak Marathi in the community that I live in; Hindi is the national language and very widely spoken, so I know that; and in school and college I was taught with English as the first language. Of course, now I think in a multitude of languages and I must admit I don't have mastery over any . Is this really affecting my programming aptitude? If yes, how? And are there any solutions?
While I agree with what alex and quickly_now have said, I believe that there may be a different spin. This is my own theory and I am not suggesting that Dijkstra meant the same thing. What is "mastery of a language"? It is the ability to take the basic building blocks of a language and put them into constructive, useful phrases and sentences. Alphabets and characters are meaningless in themselves. You need to put them together and get a meaning out of it. Words are meaningless by themselves; it is only when you put them in a proper sequence based on syntax and grammar that they express concrete ideas. Isn't it exactly the same in computer programming? We put together a few keywords and symbols and make concrete, workable stuff out of them. A programming language has symbols and grammar just like a natural language. Mastery of a programming language requires the ability to put these (individually meaningless) symbols and rules together to make something meaningful and useful. I believe this means that there is a direct correlation between a person's ability to learn a human language and a computer language. Both need the same set of human abilities and thinking capability. Take a look among your colleagues, and you will find that those with poor programming skills are also the ones who can't speak or write as clearly as others. Those who are good at picking up human languages have the skills necessary to become good programmers too.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/74142", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/22098/" ] }
74,222
At the moment I'm 23 and working as a junior programmer at a software service provider. While I'm really happy with my job and my colleagues, I sometimes would love to have someone who could tell me what is bad about my code (architecture and so on), why it is bad AND what I could do to make it better. As we have no senior Java programmers anymore (I'm the last one who can program Java), I would love to get your advice about becoming a better programmer. Disclaimer: Sometimes I ask my non-Java programmer colleagues if my code is "good" or "bad" but I have the feeling that they can't judge because they aren't Java programmers. (They hardly program anything.) Are there any communities which are happy to mentor junior developers? Would it help if I read tons of books? Any advice on how I can improve my existing skills and learn whether what I'm doing is bad or good? Epilogue: I didn't even think about switching employers back in May when I asked this question, but what all of you said stuck in my head and I decided to switch in April. I now found a new employer and I got a position as a junior Java developer at the new company. I hope my professional growth will get better in future with my new employer. I just wanted to say thank you for all of your advice.
I was in your position once, where my employer's small programming team dissolved fairly quickly to just me when I was right out of college. My recommendation, for your professional growth, is start looking for a new employer now . Being on a team of one will foster bad habits that will be hard to break when you work on larger teams later in your career. I've become a fan of young developers spending some time in consulting (at a salary consulting shop, not an hourly contracting-style shop) then working as an employee in a small shop. You'll get to work in several different organizations and under several different leads/architects from your company and your clients. You'll see different architectural styles and programming styles firsthand, while witnessing their strengths and weaknesses. Think of it like the rotations medical school students do during their last two years of medical school, where they spend a couple of months in a bunch of different departments at several hospitals. It lets them see a bunch of different fields and situations, giving them greater breadth of experience than just what they want to specialize in. Software developers need to do the same thing, because for the most part, there never is only one way to do something right. [You can substitute consulting with working for a large company that explicitly lets people change roles and teams relatively freely, as this will get you almost as good a set of experiences.]
{ "source": [ "https://softwareengineering.stackexchange.com/questions/74222", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2892/" ] }
74,272
With Javascript appearing to be the ubiquitous programming language of the web over the next few years, new frameworks popping up every five minutes and event-driven programming taking a lead both server- and client-side: Do you as a Javascript developer consider the traditional Design Patterns as important or less important than they have been with other languages / environments? Please name the top three design patterns you, as a Javascript developer, use regularly and give an example of how they have helped in your Javascript development.
Do you as a Javascript developer consider the traditional Design Patterns as important or less important than they have been with other languages / environments? Classical design patterns do not apply to JavaScript. What does apply is writing modular and functional code. You should use a mixture of Constructors and first class functions. As a JavaScript developer I personally push towards treating JavaScript as LISP rather than Java. So try to emulate monads and high-level functional style code rather than trying to emulate classical OOP code. Please name the top three design patterns you, as a Javascript developer, use regularly and give an example of how they have helped in your Javascript development. Again, design patterns do not really apply that much, but below are three important constructs: Use of closures Use of first class functions Use of Object factories with or without new Please leave some kind of context for which I can show examples of these kinds of techniques compared to doing the same kind of code using traditional design patterns. Let's take a look at some of the classical Design Patterns and how to implement them in js, as well as alternative patterns more suited to js itself: Observer Pattern: In node.js this is simply events.EventEmitter . In jQuery this is $.fn.bind && $.fn.trigger . In backbone this is Backbone.Events.trigger and Backbone.Events.bind . This is a very common pattern used in day-to-day code. I never stop and think "Hey I'm using an observer pattern here!". No, this is just a low-level way to pass messages around or a way to cascade change. For example in backbone all the MVC views bind to the model's onchange event, so changing the model cascades any changes automatically to the view. Yes, this is a powerful pattern, but its use is so common in event-driven programming that we don't realise we're using it everywhere. In the WebSocket protocol we have .on which we use to bind to on("message", ... events. Again this is very common, but it's an observer on a stream rather than your classical OOP-based while (byte b = Stream.ReadNextByte()) . These are all powerful uses of the Observer pattern. But this isn't a pattern you use. This is a simple part of the language. This is just code. Memento Pattern: This is simply JSON. It allows you to serialize the state of an object so you can undo an action. function SomeObject() { var internalState; this.toJSON = function() { return internalState; } this.set = function(data) { internalState = data; } this.restore = function(json) { internalState = JSON.parse(json); } } var o = new SomeObject(); o.set("foo"); // foo var memento = JSON.stringify(o); o.set("bar"); // bar o.restore(memento); In JavaScript we natively support an API for mementos. Just define a method called toJSON on any object. When you call JSON.stringify it will internally call .toJSON on your object to get the real data you want to serialize to JSON. This allows you to trivially make snapshots of your objects. Again, I don't think of this as a memento pattern. This is simply using the serialization tool that is JSON. State Pattern / Strategy Pattern: You don't need a state pattern. You have first class functions and dynamic types. Just inject functions or change properties on the fly.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/74272", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/24695/" ] }
74,334
I'm developing a web site in PHP on localhost and, as modules of it get completed, I upload it to the cloud so that my friends can alpha test it. As I keep developing, I have lots of files and I lose track of which file I've edited or changed, etc. I've heard of something called 'version control' to manage all of that, but am not sure how it works. So, my question is: is there an easy way/service/application available to me to track all the edits/changes/new files and manage the files as I develop the website? As soon as I'm done with a module, I want to upload it to the cloud (I'm using Amazon Cloud Service). If something happens to the new files, I might want to get back to the old file. And maybe, in a click or two, I get to see the files which I've edited or changed since the last one I've uploaded?
Software configuration management , of which Version Control is part, is a little more complex than keeping track of changes to files, although you can certainly start with that. But do read the Wikipedia articles linked above along with Joel Spolsky's tutorial on Mercurial . To start, choose one of Mercurial, Git, or Bazaar, in that order, and install it along with tools for your IDE and operating system (I prefer Mercurial with HGE for Eclipse). Initialize a repository from your working directory ( hg init with Mercurial). Decide which files and directories you want to track and which not. The general rule is not to track files that are generated by compilers and other tools. Use the appropriate command to add the files and directories to the repository ( hg add for Mercurial). Tell the tool about the patterns for the files you don't want to track (edit .hgignore for Mercurial). Perform a commit to track the original versions ( hg ci ). Perform a commit after each logical milestone, even if it's a small one. Add new files as you create them. Repeat the last two. Back up your working directory and the repository as frequently as is reasonable. With your files in the repository, you can know the differences between any two versions of a file or directory, or the complete project ( hg diff ), see the history of changes ( hg hist ), and roll back changes ( hg up -r ). It is a good idea to tag ( hg tag ) the repository before publishing your code so there's an easy way of going back to exactly what you published for amendments or comparisons. If you want to experiment with a different line of development, do it in a simple branch by cloning the main repository ( hg clone ) and not pushing back until the experiment is conclusive. It is as easy as having a different working directory for the experiment. If the experiment is for a new, upgraded version then clone and then branch ( hg branch ) so you may keep all copies of the repositories updated without one experiment interfering with the other. Linus Torvalds (who deals with tens of thousands of files and millions of lines of code in his projects) gave a talk at Google about why the tool can't be CVS, SVN, or any of the many free and commercial ones around; it is very much worth watching.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/74334", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/45865/" ] }
74,626
After reading through a few "job hopping" related threads recently, I've been thinking about how the opposite of job hopping can also be a problem. I've known many people (especially in large, relatively sluggish companies) who got comfortable in a cushy and unchallenging role and stayed around for a very long time - say 10 or 15 years or even more. They might have moved around internally a little, but it was mostly a case of " one year of experience 15 times over " as seasoned hiring managers would say. Or to put it another way, they were "Special Projects" cases . Just sitting in a comfortable role where no more learning is going on, but that might look okay on paper (on their CV) if the various stuff they were involved with is embellished a bit. What really got me thinking about this is that the longest role on my CV (almost 6 years) fits into this category somewhat, at least mildly. If I were being completely intellectually honest, I'd say I really only got 3 solid years of learning experience from it. The last 2-3 years were cushy maintenance mode. So I know first hand that it's quite possible that many "seniors" with 15 years of experience (if they were in a job like that the whole time) might not be as broadly experienced and "senior" (in terms of having 15 years of quality experience) as they look on paper. So my question is - does hanging around in the same job for very long raise any red flags? For example: if you see a CV which has only one 15-year job on it after college, as opposed to an equally experienced person who has several 4-5 year stints instead, does the single-job guy look like a possible "Special Projects" case for only having had one very long job? My experience suggests that it's quite likely. Or at least that the guy with several 5-year stints is probably more dynamic and adaptable, from having experienced a variety of roles, environments and technologies (and different uses even if using the same technologies across all jobs). EDIT: Note that I am not personally worried that my history looks like this. My longest role above just serves as a mini example of what can happen with cushy long-term roles, which got me thinking about this in general terms. (If anything, my actual employment history (except for that longest role) leans more towards being a bit too job hoppy.)
Red flags? No. Yellow flags? Yes. It is something I'm going to pry about in the interview, but there are plenty of good answers that I'll accept. I've been worrying that I'm heading up to 5 years in my current job but then I look back at it and I've spent 2 years as a senior developer, followed by 3 as a team leader (under three different managers from whom I've learned different skills). And there is talk about a completely different role that I might be interested in filling soon. I'm not too worried if I'm there for 20 years, as long as I keep getting variety. Or as you say, as long as I don't keep feeling that I'm repeating the same year of experience. However, in the same company, I know a good half-a-dozen lifers. They're there for as long as the company will keep them and they will never progress beyond a certain role. And they are absolutely people you do not want to hire.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/74626", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/5064/" ] }
74,760
I heard JavaScript is a full language, just like C++. Is this true? What else is it good for besides web stuff?
I think it's worth learning because it's quite different to your run-of-the-mill OO language, and at the very least you'll learn a different programming paradigm. Can it be useful anywhere other than in-browser? Sure: check out node.js , which uses JavaScript's asynchronicity to create a purely non-blocking dev platform, and couchapps , which let you build an entire web app with it. If you believe some people, JavaScript will be the major future dev language, purely because of its wide usage. It's by far the most popular language on GitHub, and almost every dev has some exposure to it. With projects like node.js, JavaScript has an interesting future.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/74760", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/10922/" ] }
74,764
I am a recent (as of yesterday) college grad - BS in Computer Science. I've been a huge fan of version control ever since I got mad at an assignment I was working on and just started from scratch, wiping out a few hours worth of actually good work. Whoops! Since then, I've used Bazaar and Git (and SVN when I had to for a Software Engineering class), though mostly Git to do version control on all of my projects. Usually I'm pretty faithful about how often I commit, but on occasion I'll go for a while (several functions, etc.) before writing a new commit. So my question for you is, how often should you make commits? I'm sure there's no hard and fast rule, but what guideline do you (try to) follow?
Whenever I have something that works (meaning it doesn't break anything for anyone else) I'll do a check-in. I write most of my stuff test-first, so that means every time I have a new test that passes, I check in. In practice that means a few times per hour: at least a couple of times every hour, with five being on the high side.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/74764", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/33/" ] }
74,784
So I am designing a webpage which will be used by global users, including users from Canada, the US, India, the UK, etc. I need to apply validation to this phone number field, but I'm not sure of the best way to go about it. Some of the valid formats I can think of are 1800123456 (India) and US numbers that use "-" separators. I am a bit confused about what special characters I should allow a user to enter (e.g. - / ( ) ). How have others solved this in the past?
Just because you can constrain something (that's what you're paid for, as a programmer: to write code), that doesn't mean you should actually do this. What's the point of validating a number? Why is it useful? Will it fail if a user enters "0800DIALTHIS", or "私は電話番号を持っていない", or "(499) 123-45-67 добавочный 4425"? It surely will (you say "global", don't you?), while the users just wanted to convey an important piece of information on how to contact them via phone. Besides, the way data are stored should be governed by the way they're used. How will you use the numbers? Will they be used to auto-send SMS spam, or will they be manually dialed by managers? If it's the latter, then allowing any additional characters is okay, since a human brain will be able to parse them in whatever way it sees as most appropriate; and in this case you don't really need any validation, which would just annoy human users.
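In that spirit, a small hedged sketch (class and method names invented for illustration): store what the user typed for humans, and only derive a stripped-down copy if a machine ever needs to dial it, rather than rejecting input up front.

```csharp
using System.Linq;

static class PhoneNumber
{
    // Keep exactly what the user typed: humans will read and dial it.
    public static string ForDisplay(string input) => input?.Trim() ?? "";

    // If machines ever need to dial it, derive a digits-plus-'+' copy on the side
    // instead of forcing every user into one country's format.
    public static string ForDialing(string input) =>
        new string((input ?? "").Where(c => char.IsDigit(c) || c == '+').ToArray());
}
```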
{ "source": [ "https://softwareengineering.stackexchange.com/questions/74784", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/13444/" ] }
74,913
Learning to manage stress is vital to staying healthy while working at any job. A necessary subtask is learning to recognize and limit the sources of stress. But, in the midst of the daily grind, it can be difficult to recognize sources of stress (especially for an intense, focused persona such as a programmer). What types of stressors should programmers look out for, and how can they be managed?
Here are the things I find cause the most stress to me and the developers around me: Ambiguity : Ineffectively stated goals, requirements, or other expectations. Many companies have employees who have an attitude of "I don't know what I want, but I'll know it when I see it. Oh, and by the way I need it tomorrow." Inappropriate deadlines : Most deadlines are set by the business need, not by the realistic capabilities of the developers on staff. In addition to this, the expectations for the requirement are increased but the budget/resources are not. Bad assumptions/expectations : Programmers have a tendency to have a high opinion of their capabilities (not an unearned trait), and because of this they expect that other people can match their capabilities, understanding and expectations. Often an assumption will be made that something is "common knowledge" or the like, and this can be catastrophic in the stress category. Now, not only did the business expert not meet the programmer's expectations, but he is a complete incompetent to boot. Conversely, if the programmer fails to meet the business's expectations, the programmer is left frustrated because he/she was not given the information they needed to proceed. Lack of respect : Many people have a tendency to believe that just because someone is weak in your discipline, they are weak in theirs. There is a reason we all have different jobs/capabilities/expectations, and it is important to respect that the other person is very likely very capable at the tasks they are asked to do. Just because someone doesn't have your capabilities doesn't mean they are incompetent or incapable. Lack of self control : This can manifest in many ways. Perhaps you're a work-a-holic who refuses to take proper breaks. This leads to a build-up of stress and is bad. Perhaps you're a Jolt Cola drinker who drinks more caffeine than he should when stress builds up. This is bad for your health and makes your stress response worse. You have to know your limits, know what triggers your specific stress responses, and most importantly know how to relieve that stress response. Taking it out on co-workers or colleagues is not appropriate and will only serve to increase stress. Lack of communication skill : Often we don't speak the same language, and I'm not talking about English, German, or Hindi. We're using the same words, but we're not saying the same things. People need to be specific and open about things they don't understand. Even if you think you understand, it does not hurt to clarify. Remember that a business metric can mean something different to different departments in an organization. Bleeding of limits : Keep work at work and home at home. Just because your 7-year-old is leaving his shoes in the middle of the floor and not cleaning up after his breakfast does not mean you need to chew Tiffany from accounting a new one because she hasn't given you the spreadsheet of billing requirements. By the same token, just because Tiffany is slacking with the spreadsheet doesn't mean that your wife deserves to be treated poorly on the commute home. (btw, poor Tiffany doesn't deserve that treatment either)
{ "source": [ "https://softwareengineering.stackexchange.com/questions/74913", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/20202/" ] }
75,039
I often see in C and C++ code the following convention: some_type val; val = something; some_type *ptr = NULL; ptr = &something_else; instead of some_type val = something; some_type *ptr = &something_else; I initially assumed that this was a habit left over from the days when you had to declare all local variables at the top of the scope. But I've learned not to dismiss so quickly the habits of veteran developers. So, is there a good reason for declaring in one line, and assigning afterwards?
C In C89 all declarations had to be at the beginning of a scope ( { ... } ), but this requirement was dropped quickly (first with compiler extensions and later with the standard). C++ These examples are not the same. some_type val = something; calls the copy constructor, while val = something; calls the default constructor and then the operator= function. This difference is often critical. Habits Some people prefer to first declare variables and later define them, in case they later reformat their code with the declarations in one spot and the definitions in another. As for pointers, some people just have the habit of initializing every pointer to NULL or nullptr , no matter what they do with that pointer.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/75039", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2624/" ] }
75,061
I've just read this page http://weblogs.asp.net/scottgu/archive/2010/06/10/jquery-globalization-plugin-from-microsoft.aspx One of the things they did was to convert the Arabic date to the Arabic calendar. I'm wondering if it is a good idea at all to do so. Will it actually be annoying/confusing for the user (even if the user is Arabic)? Also, my second question is: do we really need to change 33,899.99 to 33.899,99 for some cultures like German? I mean it doesn't hurt to do so since the library already does it for us, but wouldn't this actually cause more confusion to the user (even if he is German, etc.)? I'm sure whatever culture these people come from, if I give you the number 33,899.99 there's no way you'd get that wrong, right? (Unless my website/application is the first website/application you've ever used in your entire life, which arguably is possible, but the probability is just that low.) I meant "universal" as the format that everyone will see and know what it means. It doesn't have to be some standard written in black-and-white and the like. As long as everyone can read it and know straight away, without confusion, what the text is representing, that's universal. To be sure, 1.234,00 is definitely not universal. I mean I'm very sure you can find someone who, in their entire lifetime of using computers, has never come across this number format at all. Since most websites/apps had been using 1,234.00 without changes to accommodate localization, I believe that it has been the de facto standard (the universal format that everyone will see and know what it means). As for dates, if we write 01/02/03 I'm sure there's no way anyone will know (straight away, right away, without ambiguity) what date it is. But no one can get Jan 2 2003, Feb 1 2003, or Feb 3 2001 wrong if we write them as such, can they? Btw, this question is targeting localization; don't tell me stuff like "Hey, not everyone reads English, alright!" because that is a matter of internationalization (which is beyond this topic). Let's stick to the discussion on localization.
Why should non-Anglos have to decode dates, numbers, etc. while Anglos can just read them? Numerical and date localization is absolutely necessary if you want non-Anglos to feel, you know, welcome as users and customers. Why should a German user have to work out what your number is instead of, you know, getting it in his or her own language's format? Further, your view of number formats (and dates: q.v. below) is hopelessly simplistic. For example undoubtedly you'd find numbers like 1,234,567 "natural" and "obvious" and "logical" ... but what about people who come from cultures with myriad-based numbering schemes? My students (Chinese), for example, are always confused about numbers over 1000 because they group numbers differently . A more "natural" grouping for their thought processes (which include a myriad above the thousand point) is 123,4567. Further there are many contexts in which the European number systems in general are simply not suited. It would be nice in those circumstances to be able to write the all-Chinese 一百二十三万四千五百六十七 or even various hybrid systems that are in common use here. Your idea for dates is wrong-headed too. You've correctly pointed out how 01/02/03 is ambiguous (if only because Americans refuse to comply with standards on dates) and suggest instead that Feb 3 2001 is unambiguous. I'm not sure, however, if you've noticed something there. It's unambiguous and unambiguously English . Going back to my students, I'm pretty damned certain that they'd far prefer to see 2001年2月3日 (or even 二〇〇一年二月三日) which is both unambiguous and, get this, something they can read without having to decode. The bottom line on i18n and l10n: Do you want money and/or users? You make what your users want. Your users want things in their own language, not in yours. End of story. edited to add It gets even worse than myriad-based systems. Take a look at Indian numbering for this lovely progression: 1 10 100 1000 10,000 1,00,000 10,00,000 1,00,00,000 ...and so on up to: 100,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,000 See that grouping by three at the end? See that grouping by two after the grouping by three? See the sudden reintroduction of a group by three again? further edited to add (I just can't keep off this subject it seems!) Even the assumption of decimal number systems being universal is wrong. There are native numbering systems that are 4-based, 5-based, 8-base (octal), 10-based (decimal), 12-based, 20-based and even 60 -based. These are all systems which have been in active use by real people (as in not made up for science fiction stories). Not all of these are still living (although we can see, for example, vestiges of 12-, 60-based numerical systems in English terminology). As for dates, let us not forget the lunar calendars still in active use in much of the world. The Muslim world tends to use a lunar calendar where the dates can drift throughout the whole year while the Chinese use one with a complicated system that keeps the dates never more than a month away from true. (And that's just naming two off the top of my head.)
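For what it's worth, modern platforms already carry most of this burden, so "just localize it" is cheap. A small hedged .NET sketch using CultureInfo (the culture names chosen here are only examples):

```csharp
using System;
using System.Globalization;

class LocalizationDemo
{
    static void Main()
    {
        var amount = 1234567.89m;
        var date   = new DateTime(2003, 2, 1);

        foreach (var name in new[] { "en-US", "de-DE", "hi-IN", "zh-CN" })
        {
            var culture = CultureInfo.GetCultureInfo(name);
            // Digit grouping, decimal separators and date order all change per culture,
            // including the Indian 1,23,45,678-style grouping described above.
            Console.WriteLine($"{name}: {amount.ToString("N2", culture)} | {date.ToString("D", culture)}");
        }
    }
}
```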
{ "source": [ "https://softwareengineering.stackexchange.com/questions/75061", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/24257/" ] }
75,225
When I first started programming, I assumed that I would one day get to the point where I would start a project by sitting down and sketching a UML diagram of all the classes, then pretty much stick to that. I have now been programming for a couple years and it is not turning out that way. As I go through a project, I am often saying "Hey, I need a class to do _ _. I didn't think of that before." "Wait, this function should really be in that class instead of this one. I'll move it over." "This should actually be two classes instead of one. I'll split it up." "I should make these three stand-alone classes all inherit from one abstract class." Etcetera, etcetera. Is it a bad sign that I am often redesigning like this as I go along? Does this mean I'm a poor programmer or is this normal?
This is a normal part of development. Two of the core tenets of Continuous Design are that: You are not omniscient; you can't know the whole system from beginning to end before you start. Design is not static. This is more prevalent when a project has been in use for a long time and the problems it is now solving are not the problems that it was solving when it was first written. My personal view on the matter is that you should have a good idea of the macro design, but allow the micro design to evolve. Another way of expressing it is that the high-level design (which is as far as I go with UML/modeling tools) will most likely remain pretty static over the life of a project. The detailed design of which methods do what and the class hierarchy need to be free to be malleable. When you really don't know a lot about the problem you are solving, you'll make a lot more initial mis-steps. However, after you've worked with it long enough, the overall design will start settling into place and the refactorings you are talking about are all that will be needed to keep the code tidy.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/75225", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/24967/" ] }
75,269
A common question in Tech Interview is to design a particular system, usually an existing product of the company. For example, "Design Google Docs". What is the expected answer for such a question? I mean, such systems surely have a complex design which is beyond the scope of any interview. What are the interviewers expecting in such a short time?
Insight into how your brain looks at this problem. Here are a few starting points that I could see for how one could try to have this conversation: Top-down - Looking down from a very high level, build out a design and flesh it out as various components get done; here are a handful of components that I could see.... Bottom-up - Looking from the ground up, here are bits and pieces one could build to try to put together.... Requirement clarification - Asking questions about the projected scale, size, budget, and team used for this design. You could try to have a person code a very simplified word processor, or you could plan to spend hundreds of millions of dollars to make the ultimate document management system that you believe is Google Docs taken to an extreme. Also in here is the ability to ask questions like, "What do you mean by Google Docs? How much of that functionality are you wanting to duplicate?" The key is how well you can communicate your thoughts and approach to tackling this kind of problem, as you may get a user who approaches you and asks, "Psst, could you make something like this in 2 weeks?" and that could actually happen. Thus, how you give the answer is more important than what the answer is. My personal opinion would be that past projects aren't a good idea here. What one is trying to find is what kind of creativity and communication skills the candidate shows in a new area, rather than just recalling how something was done in the past. Chances are that while something that happens in the new position may be similar to something from the past, there may be just enough differences that the old solution isn't feasible. This is why, while what may be built is similar to an existing application, there may be various customizations that make the solution quite different from the initial example. Interviews are a two-way street. Managers and other developers are rarely masters of interviewing, so I'm not sure I see the value in trying to state that they should be subject matter experts at job interviews. Recruiters I could see being expected to know how to do an interview, but there are plenty of poor recruiters that could be used as examples of why this isn't always a good idea.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/75269", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/1560/" ] }
75,390
Many questions and answers on the C/C++ pages specifically or indirectly discuss micro-performance issues (such as the overhead of an indirect vs. direct vs. inline function call), or using an O(N²) vs. O(N log N) algorithm on a 100-item list. I always code with no concern about micro-performance, and little concern about macro-performance, focusing on easy-to-maintain, reliable code, unless or until I know I have a problem. My question is: why is it that a large number of programmers care so much? Is it really an issue for most developers, have I just been lucky enough not to have to worry too much about it, or am I a bad programmer?
I think everything on your list is micro-optimization, which should not generally be looked at, except for using an O(n²) vs O(n log n) algorithm on a 100-item list, which I think should be looked at. Sure, that list is 100 items right now, and everything is fast for small n , but I'd be willing to bet that soon that same code is going to be reused for a several-million-item list, and the code will still have to work reasonably. Choosing the right algorithm is never a micro-optimization. You never know what kinds of data that same code is going to be used for two months or two years later. Unlike the "micro-optimizations", which are easy to apply with the guidance of a profiler, algorithm changes often require significant redesign to make effective use of the new algorithms. (E.g. some algorithms require that the input data be sorted already, which might force you to modify significant portions of your application to ensure the data stays sorted.)
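To make the point concrete, here is a small hedged sketch (an invented duplicate-detection example): both methods give the same answer on a 100-item list, but only the second survives the jump to millions of items.

```csharp
using System.Collections.Generic;

static class DuplicateFinder
{
    // O(n²): perfectly fine for 100 items, painful for a few million.
    public static bool HasDuplicateQuadratic(IReadOnlyList<int> items)
    {
        for (int i = 0; i < items.Count; i++)
            for (int j = i + 1; j < items.Count; j++)
                if (items[i] == items[j]) return true;
        return false;
    }

    // O(n) with a hash set (or O(n log n) by sorting first): scales with the data.
    public static bool HasDuplicateScalable(IEnumerable<int> items)
    {
        var seen = new HashSet<int>();
        foreach (var item in items)
            if (!seen.Add(item)) return true;
        return false;
    }
}
```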
{ "source": [ "https://softwareengineering.stackexchange.com/questions/75390", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/24028/" ] }
75,486
A key issue with mainframes is that the cohort of supporting programmers is dwindling. Normally this wouldn't be a problem: a falling supply of programmers would be offset by rising salaries, which in turn would attract more programmers via the law of supply and demand. But I'm not sure this is really happening for mainframes. While they still form critical infrastructure for many businesses, the simple fact is there isn't an adequate number of young programmers coming along to keep the support population populated. Why is this? What makes mainframes unattractive to young programmers?
I'm an old programmer and I'm not interested in mainframes. My reasons will probably be similar to the reasons given by young programmers, however, albeit without the ignorance of the technology so evident in many of these answers. First, let's get the ignorance out of the way: The various claims of inability to try out mainframes are false. Hercules has been available since 1999—likely for longer than many of the people answering have been programming—and despite IBM's whinging over it, the odds of it going away anytime soon are negligible (especially given that it's open source). While it is, in fact, true that you cannot (legally) run the expensive software for it, there is plenty of software available you can run on it, including software that is actually still in fairly common use out there. Again, contrary to public opinion, there is more to mainframes than COBOL, CICS and RPG2. Indeed almost (but not quite) anything you can run on your PC running Linux you can run on a mainframe. <irony> I'm not sure why. </irony> So why is it that I've avoided mainframes for all my life after encountering them in school? Well: While it is true that you can use more than COBOL, CICS, RPG2, etc. in mainframes, odds are very high that if you work with them this is what you'll be relegated to doing. Even worse, despite COBOL having been massively "modernized" in the past two decades or so (scare quotes because I still don't think it's a very modern language), most of the coding you'll do in COBOL will still be in old-style code because... There's very little actual new development going on in mainframes. If you land a job at IBM working for their mainframe R&D division you might get the chance to do new development (and in that case you might even really enjoy your job!). In reality, though, face it: you won't be working there. You'll be working in the back room of some financial institution or other, maintaining 50-year-old COBOL code written by someone who still thinks that 64KB is a whopping huge pile'o'RAM. (This same guy will probably be your boss.) While it is true that you can run Linux on mainframes, and thus have access to pretty much any programming language or environment you'd like, again, as with working for IBM's mainframe R&D, you're not going to get that job. It's back to maintaining that 50-year-old COBOL. Corporate programming is very efficient at sucking the soul out of you (and remember, it's corporate programming you're going to be doing as a mainframe programmer unless you're VERY lucky). It's a ghetto, and an ever-shrinking one. (It's like MUMPS this way.) If you get too steeped in mainframe lore you get further distanced from anything non-mainframe. You can try to keep up, but you won't. I know someone pointed out that mainframes have grown in sales while other server sectors shrank a bit, but server programming is the minority these days. Hell, PCs in general are losing importance. The world of programming is very wide and very diverse, and having one minuscule portion of it grow in comparison to another minuscule portion is meaningless when compared to, say, the sudden, explosive growth of programming in something as trivial as the iPhone (which itself is a minority platform – by far). No, start working in mainframes and you'll only have other mainframers to share your thoughts, your joys and your rages with – and they're a dying breed. This leads to a negative feedback loop which makes the herd shrink even further and faster.
I'm sure there are lots of reasons a mainframe programmer could give for why the career is rewarding and full of joys and interesting challenges. Indeed I've heard many of them from people trying to recruit me into the field. In the end, however, I remained unconvinced, mostly because of the ghetto problem. If I got in and found I didn't like it, how would I get out?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/75486", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/17561/" ] }
75,487
I work at a company that only uses stored procedures for all data access, which makes it very annoying to keep our local databases in sync as every commit we have to run new procs. I have used some basic ORMs in the past and I find the experience much better and cleaner. I'd like to suggest to the development manager and rest of the team that we look into using an ORM Of some kind for future development (the rest of the team are only familiar with stored procedures and have never used anything else). The current architecture is .NET 3.5 written like .NET 1.1, with "god classes" that use a strange implementation of ActiveRecord and return untyped DataSets which are looped over in code-behind files - the classes work something like this: class Foo { public bool LoadFoo() { bool blnResult = false; if (this.FooID == 0) { throw new Exception("FooID must be set before calling this method."); } DataSet ds = // ... call to Sproc if (ds.Tables[0].Rows.Count > 0) { foo.FooName = ds.Tables[0].Rows[0]["FooName"].ToString(); // other properties set blnResult = true; } return blnResult; } } // Consumer Foo foo = new Foo(); foo.FooID = 1234; foo.LoadFoo(); // do stuff with foo... There is pretty much no application of any design patterns. There are no tests whatsoever (nobody else knows how to write unit tests, and testing is done through manually loading up the website and poking around). Looking through our database we have: 199 tables, 13 views, a whopping 926 stored procedures and 93 functions. About 30 or so tables are used for batch jobs or external things, the remainder are used in our core application. Is it even worth pursuing a different approach in this scenario? I'm talking about moving forward only since we aren't allowed to refactor the existing code since "it works" so we cannot change the existing classes to use an ORM, but I don't know how often we add brand new modules instead of adding to/fixing current modules so I'm not sure if an ORM is the right approach (too much invested in stored procedures and DataSets). If it is the right choice, how should I present the case for using one? Off the top of my head the only benefits I can think of is having cleaner code (although it might not be, since the current architecture isn't built with ORMs in mind so we would basically be jury-rigging ORMs on to future modules but the old ones would still be using the DataSets) and less hassle to have to remember what procedure scripts have been run and which need to be run, etc. but that's it, and I don't know how compelling an argument that would be. Maintainability is another concern but one that nobody except me seems to be concerned about.
Stored procedures are bad: they're often slow and approximately as efficient as ordinary client-side code. [The speedup is usually due to the way the client and stored procedure interface is designed and the way transactions are written as short, focused bursts of SQL.] Stored procedures are one of the worst places to put code. They break your application into two languages and platforms according to rules that are often random. [This answer will be downvoted to have a score of about -30 because many, many people feel that stored procedures have magical powers and must be used in spite of the problems they cause.] Moving all the stored procedure code to the client will make things much easier for everyone. You'll still have to update the schema and ORM model from time to time. However, schema changes are isolated from ORM changes, allowing some independence between applications and database schema. You will be able to test, fix, maintain, understand and adapt all those stored procedures as you rewrite them. Your app will run about the same and become much less fragile because you're no longer splitting it across two different technologies. ORMs are not magic, and good database design skills are absolutely essential to making them work. Also, programs with a lot of client SQL can become slow because of poor thinking about transaction boundaries. One of the reasons stored procedures appear to be fast is that stored procedures force very, very careful design of transactions. ORMs don't magically force careful transaction design. Transaction design still has to be done just as carefully as it was when writing stored procedures.
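As a point of comparison with the question's LoadFoo() plumbing, here is a minimal sketch of the same entity behind an ORM. It assumes Entity Framework Core (which postdates the question and is used here purely for illustration); the context name and connection string are placeholders.

```csharp
using Microsoft.EntityFrameworkCore;

// The entity replaces the hand-rolled LoadFoo()/DataSet code from the question.
public class Foo
{
    public int FooID { get; set; }
    public string FooName { get; set; }
}

public class AppDbContext : DbContext   // hypothetical context name
{
    public DbSet<Foo> Foos { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder options) =>
        options.UseSqlServer("...connection string here...");
}

// Typical usage: typed objects instead of untyped DataSets. Note that transaction
// boundaries still need the same short, focused design the answer describes.
//
//   using (var db = new AppDbContext())
//   {
//       var foo = db.Foos.Find(1234);
//       // ... work with foo.FooName directly
//   }
```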
{ "source": [ "https://softwareengineering.stackexchange.com/questions/75487", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/22390/" ] }
75,593
One of the devs on my team believes that it is necessary to write a javadoc comment for EVERY parameter in a method's signature. I do not think this is necessary, and in fact I think it can even be harmful. First off, I think parameter names should be descriptive and self-documenting. If it's not immediately obvious what your parameters are for, you're probably Doing it Wrong. However, I do understand that sometimes it's unclear what a parameter is for, so in those cases, yes, you should write a javadoc comment explaining the parameter. But I think it's unnecessary to do that for EVERY parameter. If it's already obvious what the parameter is for, the javadoc comment is redundant; you're just creating extra work for yourself. Furthermore, you're creating extra work for anyone who has to maintain your code. Methods change over time, and maintaining comments is nearly as important as maintaining your code. How many times have you seen a comment like "X does Y for Z reason" only to see that the comment is out-of-date, and in fact the method doesn't even take X parameter anymore? It happens all the time, because people forget to update comments. I would argue that a misleading comment is more harmful than no comment at all. And thus is the danger of over-commenting : by creating unnecessary documentation, you're making more work for yourself and everybody else, you're not helping anybody understand your code, and you're increasing the likelihood that the code will have out-of-date comments at some point in the future. However, I respect the other developer on my team, and accept that perhaps he is right and I am wrong. Which is why I bring my question to you, fellow developers : Is it indeed necessary to write a javadoc comment for EVERY parameter? Assume here that the code is internal to my company, and won't be consumed by any outside party.
Javadoc (and, in the Microsoft world, XMLDoc) annotations are not comments , they are documentation . Comments can be as sparse as you want them to be; assuming your code is halfway readable, then ordinary comments are merely signposts to aid future developers in understanding/maintaining the code that they've already been staring at for two hours. The documentation represents a contract between a unit of code and its callers. It is part of the public API. Always assume that Javadoc/XMLdoc will end up either in a help file or in an autocomplete/intellisense/code-completion popup, and be observed by people who are not examining the internals of your code but merely wish to use it for some purpose of their own. Argument/parameter names are never self-explanatory. You always think they are when you've spent the past day working on the code, but try coming back to it after a 2-week vacation and you'll see just how unhelpful they really are. Don't misunderstand me - it's important to choose meaningful names for variables and arguments. But that is a code concern, not a documentation concern. Don't take the phrase "self-documenting" too literally; that is meant in the context of internal documentation (comments), not external documentation (contracts).
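To illustrate the documentation-as-contract idea on the .NET side of that comparison, here is a small hedged XML-doc sketch (the class, method, parameters and constraints are entirely made up): the tags state things the parameter names alone cannot, and they surface in IntelliSense for callers who never open the file.

```csharp
public class AccountService
{
    /// <summary>
    /// Transfers funds between two accounts and returns the source account's
    /// balance after the transfer.
    /// </summary>
    /// <param name="fromAccountId">Account to debit; must already exist.</param>
    /// <param name="toAccountId">Account to credit; must differ from <paramref name="fromAccountId"/>.</param>
    /// <param name="amount">Amount in the account's currency; must be positive.</param>
    /// <returns>The remaining balance of the source account.</returns>
    public decimal Transfer(int fromAccountId, int toAccountId, decimal amount)
    {
        // Body omitted; the point is the contract above, not the implementation.
        throw new System.NotImplementedException();
    }
}
```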
{ "source": [ "https://softwareengineering.stackexchange.com/questions/75593", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/24173/" ] }
75,607
If JSON stands for JavaScript Object Notation, then when you say JSON object, aren't you really saying "JavaScript Object Notation Object"? Would saying "JSON string" be more correct? Or would it be more correct to simply say JSON? (as in "These two services pass JSON between themselves".)
JSON is notation for an object. Not an object itself. A "JSON Object" is a String in JSON notation. That's not redundant. Saying "JSON String" would be more clear than "JSON Object". But they would mean the same thing. "JSON Object" can be shorthand for "JSON-serialized Object". It's a common-enough elision of confusing words.
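One way to see the distinction in code, using .NET's System.Text.Json (the Note type is made up): the JSON is always just a string, and parsing that string gives you back an object.

```csharp
using System;
using System.Text.Json;

public class Note { public string Text { get; set; } }

class JsonDemo
{
    static void Main()
    {
        var note = new Note { Text = "hello" };                      // an object in memory

        string json = JsonSerializer.Serialize(note);                // JSON: plain text, {"Text":"hello"}
        Note roundTripped = JsonSerializer.Deserialize<Note>(json);  // back to an object

        // "These two services pass JSON between themselves" means they exchange
        // strings like `json`, not in-memory objects like `note` or `roundTripped`.
        Console.WriteLine(json);
    }
}
```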
{ "source": [ "https://softwareengineering.stackexchange.com/questions/75607", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/23366/" ] }
75,648
In my brief time as a professional programmer I've seen lots of applications written by programmers whose entire education appears to have been reading the first couple of chapters in a .NET 2.0 book. Heck, when I started I wrote most of those applications! What are the biggest design patterns crucial for writing AWESOME .NET applications? By awesome I mean on the inside too!
First: Know your basic tools well Know the ASP.Net event model. You'll get in a mess if you don't. Understand the mechanics of OO. A surprising number of relatively experienced .Net programmers still seem to think it is 1972. Start reading Code Complete. Second: Learn to separate concerns The most common design-crime I see in ASP.Net development is to stuff all the business logic in the code-behind. I know that all the Microsoft examples do it that way. I know it is justified on small apps. And I know I sometimes do it that way. But really, it is bad design, and is my pet hate for the week. Third: Learn everything else about design Most of the poor quality .Net code that I see is the result of poor OO design. Therefore, I'd recommend a good understanding of: SOLID principles GoF Design Patterns MVC (for ASP.Net MVC) Fourth: Get to know more tools You know how Microsoft make things easy by providing lots of out-of-the-box tools? Well, you're going to hit their limitations sooner or later. When you do, you're either going to have to bend them to your will or roll your own. Either way, you're going to have to get-down-dirty with some CSS and Javascript. Finally Once you've done that lot, you're well on your way to awesome. [Edit: Fixed-up the sequence for learning this sutff. Apparenty I couldn't count yesterday...]
{ "source": [ "https://softwareengineering.stackexchange.com/questions/75648", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/13637/" ] }
75,866
I'm a consultant at one company. There is another consultant who is a year older than me and has been here 3 months longer than I have, and a full time developer. The full-time developer is great. My concern is that I see the consultant making absolutely terrible design decisions. For example, M:M relationships are being stored in the database as a comma-delimited string rather than using a conjunction table to hold the relationships. For example, consider two tables, Car and Property: Car records: Camry Volvo Mercedes Property records: Spare Tire Satellite Radio Ipod Support Standard Rather than making a table CarProperties to represent this, he has made a "Property" attribute on the Car table whose data looks like "1,3,7,13,19,25," I hate how this decision and others are affecting the quality of my code. We have butted heads over this design three times in the past two months since I've been here. He asked me why my suggestion was better, and I responded that our database would be eliminating redundant data by converting to a higher normal form. I explained that this design flaw in particular is discussed and discouraged in entry level college programs, and he responded with a shot at me saying that these comma-separated-value database properties are taught when you do your masters (which neither of us have). Needless to say, he became very upset and demanded I apologize for criticizing his work, which I did in the interest of not wanting to be the consultant to create office drama. Our project manager is focused on delivering a product ASAP and is a very strong personality - Suggesting to him at this point that we spend some time to do this right will set him off. There is a strong likelihood that both of our contracts will be extended to work on a second project coming up. How will I be able to exert dominant influence over the design of the system and the data model to ensure that such terrible mistakes are not repeated in the next project? A glimpse at the dynamics: I can be a strong personality if I don't measure myself. The other consultant is not a strong personality, is a poor communicator, is quite stubborn and thinks he is better than everyone else. The project manager is an extremely strong personality who is focused on releasing tomorrow's product yesterday. The full-time developer is very laid back and easy going, a very effective communicator, but is someone who will accept bad design if it means not rocking the boat. Code reviews or anything else that takes "time" will be out of the question - there is no way our PM will be sold on such a thing by anybody.
From my experiences I would say as having been a long time contractor myself, 20+ years, generally when you are a contractor, you aren't there to affect change, you are there to be a warm body filling a seat and doing what you are told, unless your manager mandates something different specifically. Don't get invested If they don't see the huge mistake this is, then don't worry about it. Make some comments in your code contributions about how this is bad and needs to be refactored to actually support joining against those values in a sane amount of time, and then forget about it. It might even end up on thedailywtf.com and make you anonymously famous! Start thinking about how you are going to make sure your next contract position will be so much better than your current one! You now have an experience to help you gauge the next set of people you will be working with next time you interview, to detect these type of personalities in the future. You not only want to be aware of the contractors negative personality, but also your managers negative personality. If he is going to "blow up" at someone who is trying to make him look better and avoid problems, you need to learn how to spot people like him in the future so you can avoid them as well. You should not have apologized Now the other contractor will feel justified in his belief he is correct, which he isn't correct. Any basic Relational Database Theory book will shoot down this naive incorrect solution in the second chapter, if not sooner. Multi-value fields are a clear sign that someone doesn't comprehend what they are doing, they aren't worth your time arguing with. If you really want to do something to make yourself feel better Document your conversation with this person, why it is so wrong, what problems it will cause and your proposed solution. Give this to the salaried employee and your manager. Don't propose that you change it right now if you have a deadline, but make it really clear that is isn't going to scale and will definitely make your manager look bad in the near future. That way when you have moved on, you can feel good in the fact that you at least notified them of the disaster waiting to happen.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/75866", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/25139/" ] }
75,919
Often, in libraries especially, packages contains classes that are organized around a single concept. Examples: xml, sql, user, config, db . I think we all feel pretty naturally that these packages are correct in the singular . com.myproject. xml .Element com.myproject. sql .Connection com.myproject. user .User com.myproject. user .UserFactory However, if I have a package that actually contains a collection of implementations of a single type - such as tasks, rules, handlers, models, etc. , which is preferable? com.myproject. tasks .TakeOutGarbageTask com.myproject. tasks .DoTheDishesTask com.myproject. tasks .PaintTheHouseTask or com.myproject. task .TakeOutGarbageTask com.myproject. task .DoTheDishesTask com.myproject. task .PaintTheHouseTask
Use the plural for packages with homogeneous contents and the singular for packages with heterogeneous contents. A class is similar to a database relation. A database relation should be named in the singular as its records are considered to be instances of the relation. The function of a relation is to compose a complex record from simple data. A package, on the other hand, is not a data abstraction. It assists with organization of code and resolution of naming conflicts. If a package is named in the singular, it doesn't mean that each member of the package is an instance of the package; it contains related but heterogeneous concepts. If it is named in the plural (as they often are ), I would expect that the package contains homogeneous concepts. For example, a type should be named TaskCollection instead of TasksCollection , as it is a collection containing instances of a Task . A package named com.myproject.task does not mean that each contained class is an instance of a task. There might be a TaskHandler , a TaskFactory , etc. A package named com.myproject.tasks , however, would contain different types that are all tasks: TakeOutGarbageTask , DoTheDishesTask , etc.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/75919", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2314/" ] }
75,956
After reading Should package names be singular or plural? it occurred to me that I've never seen a proper debate covering one of my pet peeves: naming implementations of interfaces. Let's assume that you have a interface Order that is intended to be implemented in a variety of ways but there is only the initial implementation when the project is first created. Do you go for DefaultOrder or OrderImpl or some other variant to avoid the false dichotomy? And what do you do when more implementations come along? And most important... why?
Names have the opportunity to convey meaning. Why would you throw away that opportunity with Impl? First of all, if you will only ever have one implementation, do away with the interface. It creates this naming problem and adds nothing. Even worse, it could cause trouble with inconsistent method signatures in APIs if you and all other developers aren't careful to always use only the interface. Given that, we can assume that every interface has or may have two or more implementations. If you have only one right now, and you don't know in what way the other may be different, Default is a good start. If you have two right now, name each one according to its purpose. Example: Recently, we had a concrete class Context (in reference to a database). It was realized that we needed to be able to represent a context that was offline, so the name Context was used for a new interface (to maintain compatibility for old APIs), and a new implementation was created, OfflineContext . But guess what the original was renamed to? That's right, ContextImpl (yikes). In this case, DefaultContext would probably be ok, and people would get it, but it is not as descriptive as it could be. After all, if it's not offline , what is it? So we went with: OnlineContext . Special case: Using the "I" prefix on interfaces One of the other answers suggested using the I prefix on interfaces. Preferably, you don't need to do this. However, if you need both an interface, for custom implementations, but you also have a primary concrete implementation that will be used often, and the basic name for it is just too simple to give up to an interface alone, then you can consider adding "I" to the interface (though, it's completely fine if it still doesn't sit right for you and your team). Example: Many objects can be an "EventDispatcher". For the sake of APIs, this must conform to an interface. But, you also want to provide a basic event dispatcher for delegation. DefaultEventDispatcher would be fine, but it's a bit long, and if you are going to be seeing the name of it often, you might prefer to use the base name EventDispatcher for the concrete class, and implement IEventDispatcher for custom implementations: /* Option 1, traditional verbose naming: */ interface EventDispatcher { /* interface for all event dispatchers */ } class DefaultEventDispatcher implements EventDispatcher { /* default event dispatcher */ } /* Option 2, "I" abbreviation because "EventDispatcher" will be a common default: */ interface IEventDispatcher { /* interface for all event dispatchers */ } class EventDispatcher implements IEventDispatcher { /* default event dispatcher. */ }
{ "source": [ "https://softwareengineering.stackexchange.com/questions/75956", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/7167/" ] }
76,021
I program on my desktop in my office, but also sometimes at home in a different room on my laptop, and even away from home. What I need is a system that automatically or on-demand syncs my work from one to the other, at need. I do not have a home network setup, and although I guess I could do it, that would be a question for another board, perhaps. I've thought about some kind of system that would keep the source code in the cloud, but I don't know enough about this to get started. I need a kind of free or cheap way to do this. I work in .NET (Windows Phone 7, in fact).
The easiest way is to use one of the online systems. Checkout GitHub or BitBucket . For more information on Git or Mercurial, check out Git Reference and Hg Init , respectively.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/76021", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/1351/" ] }
76,052
Is there any reason to build constraints between tables (inside SQLserver) nowadays? If so, when? Most applications in my area are built on object principles and tables are joined on demand. Demand are based on the need from the application. I won't load a bunch of constrained tables for a simple look-up, which in turn (after action) require one another simple lookup. ORM tools such EntityContext, Linq2Data, NHibernate are also handle the constraints by themselves, at least you know what tables that need each other. Doing constraints inside the server is just about making (forcing) same changes twice? This is usually not a question up for decision, but this database is designed quite different. The design looks regular good, mostly mirrored the objects used by the applications. What disturbing me is all constraints configured inside SQLserver with "not cascade". Which means that you have to play "seek and find" when coding new database queries. Some cases requires up to 10 levels of an exact order to make a single delete. This surprises me and I'm not sure how to handle it. In my simple world, that setting makes the constraints losing most of the purpose. OK if the database were accessed from hosts without knowledge of the design. How would you act in this scenario? Why not just remove all constrains from db and keep them at application level?
Two general reasons not to remove contraints from DB : It may be accessed by more apps, now or in the future , which may or may not use ORM. Even if the developers of those apps faithfully duplicate all the constraints there (which may be significantly more difficult using lower level non-ORM solutions), it is always extra work. And if not, even one small omission is enough to break schema integrity ... which is something you don't want to risk. In most companies, the data stored in their DB is the lifeblood of their business, so its integrity must be ensured by any means. And the tried and proven best means to achieve this is to implement as many constraints in the DB as possible. The query optimizer relies a lot on the constraints known on the DB level. If you remove constraints, query performance may start deteriorating . You may not immediately notice it, but one day it is going to hit you, and by then it may be too late to fix it easily. The nature of things is that DB performance tends to break down at peak load time, when there is the least possibility to make careful, well thought out design improvements, backed by exact performance measurements and detailed analysis to pinpoint the root causes. Your concrete case sounds like the DB schema may have been originally generated by an ORM tool (or designed by someone not very experienced with the relational world), so it is suboptimal from the relational point of view. It is probably better to analyse and improve it towards a more "natural" relational design, while keeping it consistent with the ORM views. It may be useful to involve a DB expert in this analysis.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/76052", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/20636/" ] }
76,229
I've read in numerous sources that the output of PHP's rand() is predictable as its a PRNG, and I mostly accept that as fact simply because I've seen it in so many places. I'm interested in a proof-of-concept: how would I go about predicting the output of rand()? From reading this article I understand that the random number is a number returned from a list starting at a pointer (the seed) -- but I can't imagine how this is predictable. Could someone reasonably figure out what random # was generated via rand() at a given moment in time within a few thousand guesses? or even 10,000 guesses? How? This is coming up because I saw a auth library which uses rand() to produce a token for users who have lost passwords, and I assumed this was a potential security hole. I've since replaced the method with hashing a mixture of openssl_random_pseudo_bytes() , the orignal hashed password, and microtime. After doing this I realized that if I were on the outside looking in, I'd have no idea how to guess the token even knowing it was a md5 of rand().
The ability to guess the next value from rand is tied to being able to determine what srand was called with. In particular, seeding srand with a predetermined number results in predictable output ! From the PHP interactive prompt: [charles@charles-workstation ~]$ php -a Interactive shell php > srand(1024); php > echo rand(1, 100); 97 php > echo rand(1, 100); 97 php > echo rand(1, 100); 39 php > echo rand(1, 100); 77 php > echo rand(1, 100); 93 php > srand(1024); php > echo rand(1, 100); 97 php > echo rand(1, 100); 97 php > echo rand(1, 100); 39 php > echo rand(1, 100); 77 php > echo rand(1, 100); 93 php > This isn't just some fluke. Most PHP versions * on most platforms ** will generate the sequence 97, 97, 39, 77, 93 when srand 'd with 1024. To be clear, this isn't a problem with PHP, this is a problem with the implementation of rand itself. The same problem appears in other languages that use the same (or a similar) implementation, including Perl. The trick is that any sane version of PHP will have pre-seeded srand with an "unknown" value. Oh, but it isn't really unknown. From ext/standard/php_rand.h : #define GENERATE_SEED() (((long) (time(0) * getpid())) ^ ((long) (1000000.0 * php_combined_lcg(TSRMLS_C)))) So, it's some math with time() , the PID, and the result of php_combined_lcg , which is defined in ext/standard/lcg.c . I'm not going to c&p here, as, well, my eyes glazed over and I decided to stop hunting. A bit of Googling shows that other areas of PHP don't have the best randomness generation properties , and calls to php_combined_lcg stand out here, especially this bit of analysis: Not only does this function ( gettimeofday ) hand us back a precise server timestamp on a silver platter, it also adds in LCG output if we request "more entropy" (from PHP's uniqid ). Yeah that uniqid . It seems that the value of php_combined_lcg is what we see when we look at the resulting hex digits after calling uniqid with the second argument set to a true value. Now, where were we? Oh yes. srand . So, if the code you're trying to predict random values from doesn't call srand , you're going to need to determine the value provided by php_combined_lcg , which you can get (indirectly?) through a call to uniqid . With that value in hand, it's feasible to brute-force the rest of the value -- time() , the PID and some math. The linked security issue is about breaking sessions, but the same technique would work here. Again, from the article: Here's a summary of the attack steps outlined above: wait for the server to reboot fetch a uniqid value brute force the RNG seed from this poll the online status to wait for target to appear interleave status polls with uniqid polls to keep track of current server time and RNG value brute force session ID against server using the time and RNG value interval established in polling Just replace that last step as required. (This security issue was reported in an earlier PHP version (5.3.2) than we have currently (5.3.6), so it's possible that the behavior of uniqid and/or php_combined_lcg has changed, so this specific technique might not be workable any longer. YMMV.) On the other hand, if the code you're trying to product calls srand manually , then unless they're using something many times better than the result of php_combined_lcg , you're probably going to have a much easier time guessing the value and seeding your local generator with the right number. 
Most people that would manually call srand also wouldn't realize how horrible of an idea this is, and thus aren't likely to use better values. It's worth noting that mt_rand is also afflicted by the same problem. Seeding mt_srand with a known value will also produce predictable results. Basing your entropy off of openssl_random_pseudo_bytes is probably a safer bet. tl;dr: For best results, don't seed the PHP random number generator, and for goodness' sake, don't expose uniqid to users. Doing either or both of these may cause your random numbers to be more guessable. Update for PHP 7: PHP 7.0 introduces random_bytes and random_int as core functions. They use the underlying system's CSPRNG implementation, making them free from the problems that a seeded random number generator has. They're effectively similar to openssl_random_pseudo_bytes , only without needing an extension to be installed. A polyfill is available for PHP5 . *: The Suhosin security patch changes the behavior of rand and mt_rand such that they always re-seed with every call. Suhosin is provided by a third party. Some Linux distributions include it in their official PHP packages by default, while others make it an option, and others ignore it entirely. **: Depending on the platform and the underlying library calls being used, different sequences will be generated than documented here, but the results should still be repeatable unless the Suhosin patch is used.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/76229", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2862/" ] }
76,254
If you have the time and resources, what would be the most effective way to increase your salary as a full-time programmer , outside of just doing your job? By "salary" here, I mean salary (adjusted for location cost-of-living) coming from a single programming job.
Change Employers The most efficient way is to learn and do cool stuff and change jobs every year or so. You are far more likely to get more money from a new employer than you are to get a hefty raise from your current employer.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/76254", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/19936/" ] }
76,528
I have developed a fully functional tool which I would like not only to share with anyone interested but also get support from the community. This tool is cross-platform, written in C++ with Qt, the code is well commented but I still lack any documentation. There are also some small issues and improvements to be made before I can call it a stable, final version. What are the first steps that I have to take to release code as open-source and attracting people interested in contributing? This is my first serious attempt to release open-source code and I really don't know where to start. Should I just push it to Github put together a small wiki and pray for the best?
Choose your licence If your code has been closed source up to now, the first thing you should do is decide on which open source license ( GPL <=2, GPL 3 , LGPL , BSD , Eclipse etc.) you want to use. There are pro's and cons to each, so read up on what restrictions they place on the code and decide who you want to be able to use it. Warning , whichever you choose someone will complain - this is holy war territory, and beyond the scope of this question. A great resource for determining which license is the right license for you is the very comprehensive, interactive license differentiator , from Oxford Universities OSS Watch . Sanitise your repository If the code in your repository doesn't already have your chosen license applied to it, I would go through your entire revision history so far and retroactively apply it (this may require a re-base at every point where a new source file is introduced). This will, however produce a nice clean repository which, when you release it to the public, has no revisions where your chosen licence isn't in force. Another option is to start your public repository at the point of your first release, with minimal or no history up to that point. This has the disadvantage that people can't go back through your history and work out how you got to where you are today, but it has the advantage that people can't go back through your history and work out how you got to where you are today. *8') When the company I work for made the software I work on open source, we started by only producing snapshots of the working directory at release points. When we moved to using public github repositories we started each git repo at the point that a plug-in was (or set of plug-ins were) moved out of svn , rarely including any history at all. Consider a Dual License If you think there might be commercial interest in using your software, but have an ideological preference for a restrictive re-use license such as GPL 3, consider offering dual licensing. Offering GPL 3 licenses for public download, and commercial licenses for a fee gives you the best of both worlds. Doing this from the outset is likely to cause less friction than starting to offer commercial licenses later on. If your community becomes popular, people may accuse you of selling out if you weren't straight about the possibility of commercial exploitation later. Consider a Contributor Agreement If you ever plan to dual licencse, or simply want to to keep your codebase yours, you will need to either re-implement contributed fixes yourself or get contributors to assign rights over their contributions to you. Otherwise you will find that their contributions prevent you from releasing your codebase under other licenses. The answer by Mason Wheeler to the question Does open source licensing my code limit me later? provides some good information on this and how the libsdl project used to deal with this problem. Do be aware though that just as your choice of license may restrict the people and organisations who will use and contribute to your project, so will your choice of whether to have a contributor agreement. Some people will not be happy to contribute to a project which requires them to sign a contributor agreement. Dual License Contributor Agreements The Oracle Contributor Agreement (links are hard to see on that page) is a good template for a contributor agreement. It is also licensed (CC) in a way that you can modify it and re-use it.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/76528", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/12824/" ] }
76,627
When we say that "Dennis Ritchie developed C language", do we mean that he has created a compiler (using an 'already' developed other language) which can compile the source code written in C language? if yes, what was language he used to write first C compiler? I understand a compiler is a program and we can create another compiler for C language using presently available C compiler. Is that correct?
From wiki : Ritchie is best known as the creator of the C programming language and a key developer of the Unix operating system, and as co-author of the definitive book on C. Also from wiki : The first C compiler written by Dennis Ritchie used a recursive descent parser, incorporated specific knowledge about the PDP-11, and relied on an optional machine-specific optimizer to improve the assembly language code it generated. The first C compiler was also written by him, in assembly. This page from bell-labs answers most of your questions.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/76627", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/25374/" ] }
76,638
How exactly is portability of a language like C is determined? I've learned that compilers are ISA specific. If this is true, how is C portable? Or is it that just the source code written in C is portable but not the executables? Aren't the executables ISA specific for examples applications for x86 are separate from the applications for Apple (assuming Apple uses Motorola/PowerPC microprocessor)?
is it that just the source code written in C is portable not the executables? Correct. Some people call it write once, compile everywhere. http://en.wikipedia.org/wiki/Write_once,_compile_anywhere . The other alternative is write once, run everywhere. Java is a good example of this. http://en.wikipedia.org/wiki/Write_once,_run_anywhere And even though you can achieve partial cross platform portability, you should never expect your code to run everywhere without modifications.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/76638", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/25374/" ] }
76,645
I have a dilemma. I have a candidate for a senior software developer position. The guy seems competent on a first talk with him and he answered the questions asked precisely and gave me proofs of his work. Moreover he has been highly recommended by some trusted colleagues. In this case I am tempted to skip the technical test HR requires as I need to fill the vacancy asap. Please share your experience. EDIT: Against a better judgement I have given the test. Top scores on almost all questions, even on subject that he did not boasted upon. But I had so support some irony from him when he saw the questions of the test - which were clearly not for a senior. So we made an offer. Thanks all for the insights.
As usual... It Depends I have never seen a technical test that proved competence. I have seen a lot of technical tests that demonstrated ignorance - both on the parts of the test-maker and the test-taker. How much confidence do you have in the technical test? Have you taken it? Do you think it's fair? Confidentially, I took an online technical test as a favor for a client a while back (they wanted my scores as a 'baseline' for new hires) and failed it - mostly because the test questions consisted solely of syntax and function names for a specific version of a specific language. I use the language all the time, and have for years, but not those specific features . These were all things that I could look up when/if needed - and as such were utterly irrelevant to skill/competence. So it really depends on the test. If you think your technical test is significant then by all means administer it. If you don't then get rid of it . Your impression based on a personal interview plus recommendations from trusted colleagues are far more valuable than any test .
{ "source": [ "https://softwareengineering.stackexchange.com/questions/76645", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/1487/" ] }
76,659
I have seen a lot of people claiming themselves to be a "software consultant". These consultants do what a normal software developer does, write code, estimate tasks, fix bugs and attend meetings etc. The only difference being the financials, consultants end up earning more. Then how is a software developer different from a "consultant"? In addition to the main question, I would like to know how can a software developer become a consultant? Are there any specific guidelines for a consultant? Do they need to amass certifications and write up research papers? Please do not confuse the software consultant with a management consultant. Software consultants I have seen are not managers.
Here's a list of softies Software developer - is an employee on the full-time payroll and does the job of implementing the requirements for the application. Developers skip around on different projects working as when directed by their employers. Software consultant - is not an employee, and is brought in to provide advice (consultancy) as to how the application should be implemented using current industry approaches. Often the consultant provides technical advice on how to configure a large application (SAP, Oracle etc). Consultants, in my experience, are not generally programmers. Software contractor - is not an employee, and is brought in to provide skills and expertise in current industry approaches. Typically the contractor works on a single project and sees it through to completion, programming as required. They are not under the direction of their employers, although they may assist in other areas as a professional courtesy. How do you become a Software Consultant? Usually as a result of working for a software consultancy that hires you out on a daily basis. Imagine you work for Oracle and some large company needs assistance in setting up middleware. You're a permanent employee working on a contract basis for a third-party. This isn't always the case (see next section), but it is the usual path. How do you become a Software Contractor? Usually as a result of creating your own company and letting recruitment agents know that you're available for work (programming, consulting, both...) . The agency then hires you out on a daily basis, subject to certain contractual terms. You can go direct, but it's much more difficult (the agent's role is to land the client, your role is to provide the expertise).
{ "source": [ "https://softwareengineering.stackexchange.com/questions/76659", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/8711/" ] }
76,822
Random.org provides 200k free random bits (only 6250 32-bit integers!) from the analog world ( cf. ) per IP per day. Does anyone know of an alternative web service that provides more on-demand random bits per day? (paying is OK as long as the price is "within expectations" of 1000×1024bits per usd cent) (random.org's paid service charges 100× this price)
Maybe this one. http://qrng.physik.hu-berlin.de/ From the site: We provide a new quantum random number generator (QRNG) based on the quantum randomness of photon arrival times. It promises provable and long term statistical quality, speed as well as affordability. Our design creates a new quality in the sense that it offers substantially higher bit rates than previous solutions available to the public. This has become possible by exploiting most recent photon timing instrumentation and state-of-the-art data processing in hardware. In addition to providing high speed (up to 150 Mbits/s over USB), the post-processing algorithm applied to the raw data is based on solid predictions from information theory which guarantee conservation of randomness. This allows for the use of the delivered random numbers in unconditionally secure encryption schemes. [...] Access Policies None of the served data is delivered more than once, neither to a single user nor across > independent users. Using the service is free of charge, but requires registration.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/76822", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/24257/" ] }
76,825
Do you find writing dead code useful? Some says "If you have 2 logic to do some operation then instead of making other logic code commented or removing code make it dead code as it will not affect the operation." Example:- if(true){ // logic - 1 } else { // logic - 2 // dead code } Is it true? I usually do not write dead codes instead I just remove the second logic.
IMO, it is worse than pointless. In addition to being a waste of time, it gives you (or the next guy) the illusion that you've got some code that will work if you change true to false . It is only an illusion ... unless you test it. And if course, it is clutter that makes the code harder to read.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/76825", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/15798/" ] }
76,858
I'm a student (yet to go to uni) and I've been programming for about 5 years now. Over that time, I've flitted around from language to language, from API to API, and project to project. I've tried setting myself on one thing, but I lose interest. My entire PC is full of half finished projects (and at least four times as many have been formatted off my disk). I start to wonder if my lower motivation towards programming for the fun of it (it is one activity I really enjoy) is because I never get a finished project at the end. I can't tell if I just have too open ended goals, or just a low attention span. I tried doing some smaller projects just to finish them, but they do not interest me at all. Could this be a cause for my drop in motivation? Also, when I apply to go to uni (and in the future hopefully, a software job) is it likely to be an issue? In summary: How important is it to finish side/hobby projects, be it for career, motivation, or education?
Obviously finishing a project is important in the "real world" as unless the project is completed and delivered you don't (or your employer doesn't) get paid. However, for hobby and learning projects it's a little more complicated. Having finished projects demonstrates to potential employers that you can deliver what you set out to deliver, but it depends on what you mean by "project". If you are doing the project with a view to having a complete product that showcases all of your skills and may be something you want to make money out of eventually then yes you should finish it, or at least show that it is actively being worked on. If you are doing the project to learn specific things (how to stream video, password encryption, what ever) then once you have that aspect working it's less vital to have a fully functional product as you have effectively completed the project. What you should have is something that can you can show prospective employers to demonstrate your skills. For example, this might just be a web site that says "Welcome back, Joe" after successfully logging in and nothing else - but that's fine as you are showing off the code behind the website that demonstrates you understand password encryption and secure connections etc.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/76858", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/9972/" ] }
76,939
There are many sites on the Internet that require login information, and the only way to protect against password reusing is the "promise" that the passwords are hashed on the server, which is not always true. So I wonder, how hard is to make a webpage that hashes passwords in the client computer (with Javascript), before sending them to the server, where they would be re-hashed? In theory this doesn't give any extra security, but in practice this can be used to protect against "rogue sites" that don't hash your password in the server.
Why isn't it used? Because it's a lot of extra work for zero gain. Such a system would not be more secure. It might even be less secure because it gives the false impression of being more secure, leading users to adopt less secure practices (like password reuse, dictionary passwords, etc).
{ "source": [ "https://softwareengineering.stackexchange.com/questions/76939", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/1671/" ] }
77,009
The codebase I work with daily has no automated tests, inconsistent naming and tons of comments like "Why is this here?", "Not sure if this is needed" or "This method isn't named right" and the code is littered with "Changelogs" despite the fact we use source control. Suffice it to say, our codebase could use refactoring. We always have tasks to fix bugs or add new features, so no time is put aside to refactor code to be better and more modular, and it doesn't seem to be a high priority. How can I demonstrate the value of refactoring such that it gets added to our task lists? Is it worth it to just refactor as I go, asking for forgiveness rather than permission?
"It's better to ask forgiveness than permission" is true. Why worry about it? Just refactor the most horrible parts. You already know what the most costly errors are, right? If you don't, then step 1 is to positively and unambiguously define the most costly, complex, error-ridden, bug-infested problem code. Identify the number of trouble tickets, hours of debugging, and other very specific, very measurable costs . Then fix something on that list of high cost problems. When you have to ask forgiveness, you can point to cost reductions. In case you aren't aware, refactoring requires unit tests to prove that the before-and-after behaviors match. Ideally, this should be an automated, coded test, for example, a unit test. This means pick one thing. Write a unit test. Fix that one thing. You've made two improvements. (1) wrote a test and (2) fixed the code. Iterate.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/77009", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/25484/" ] }
77,091
My customer recently discovered what is URL Rewriting, without completely understanding what it is, how it works and the pros and cons of it. Now, he asks for lots of strange changes in actual requirements of current projects and changes in old projects in order to implement what he believes is URL Rewriting. On one hand, I'm annoyed being asked to do things which doesn't make any sense instead of doing real work. On the other hand, I can't tell my customer that he doesn't understand anything in the subject despite his interest in it. I think many people have had situations when their manager or their customer just learned a new buzzword or a new technology, and he loved it so much than he wanted to use it in every project, everywhere, rewrite the whole codebase just to use this new thing, etc. Also, I've recently read something related on Programmers.SE where people told about their experiences when there was a huge buzz around XML, and some managers would ask to introduce XML in every project just to show to everyone that they have used it. So those who have been in similar situation, how have you managed it?
IMO you should have the "You don't understand URL rewriting" discussion with your client. Obviously you should not bluntly tell your client, "You don't understand". Instead, I would start off with, "Before we invest anything, I think we should discuss X to make sure we're on the same page about what the pros and cons of X and it's alternatives are." If it turns out that he actually does know the things that you do, but wants to implement X anyway, then you ask him what color he wants it. You need to make sure you choose your wording very carefully. After all, there is a chance (however insignificant) that he knows more about X than you do (and there's the obvious point - you're talking to managment ), so make sure you rid yourself of any condescending tones.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/77091", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/6605/" ] }
77,102
I'm still at Microsoft TechEd , and the response to my question about how to effectively use my time at software conferences was overwhelmingly "networking is the most useful part of software conferences". Problem: I have no idea how to even approach that task. I've always been kind of an introvert. At school and at work I've generally not had issues because there are enough extroverts around that approach me that I've made some awesome friends over the years. However, at conferences, it seems most are introverted like myself, and those who aren't seem to be salespeople. The couple of times I've felt okay approaching people it's been after a session where there's been healthy discussion throughout the whole room, and just when I get the nerve to go up and talk to some people, they leave and go on to other things. Is there a specific books I should read? Advice I can take? Anything as far as approaching people one does not know? 'Cause every time I try I just feel like an awkward mess. :( (Oddly enough, I don't have problems speaking to a group of people -- it's the one-on-one things that trip me up :P) (Oh, and by the way, if anyone from here is also there and would like to meet to talk about things, I'm game :P) So, how does an introverted programmer like myself network effectively at software conferences?
You need to do three things... Start conversations - and the best way to do this is to introduce yourself to someone and ask questions . Most folks love to talk about themselves and their opinions - even introverts! Here are a few you can try... What's your favorite session so far? Where are you from? Do you use [a technology discussed at the conference]? Where do you work? What did you think of this session? (If it's at the end of a session.) How did you decide to come to this conference? That will usually begin a discussion. If not, excuse yourself, wander off, and try somebody else. Find people to talk to. As Mike Brown said, don't eat alone - find somebody sitting by themselves and ask if you can join them, or just plop down in an empty seat at a table. If you find a session interesting, go to the front afterward and hang [with] the folks who gather to talk to the speaker, and find some interesting person in that group and when they leave, walk with them and ask them questions. Practice, practice, practice. Just walk up to people and ask friendly questions. Lots of them are introverted, too, and would be very happy if you'll break the ice - and it will help you get past your introversion. I was pretty introverted in high school, and I decided to change that in college. So my first few weeks as a freshman, I'd walk into the cafeteria and do pretty much what I suggest above - sit down with some lone person and ask, "What year are you?" "What are you studying?" etc. After doing that now for almost thirty-five years (yeah, I'm an old dude now), nobody would call me an introvert, and I can hold up my end of most conversations - especially at technical conferences, where other people share common interests.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/77102", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/886/" ] }
77,146
I want to gather some arguments as to why letting a developer testing his/her own work as the last step before the product goes into production is a bad idea, because unfortunately, my place of work sometimes does this (the last time this came up, the argument boiled down to most people being too busy with other things and not having the time to get another person familiar with that part of the program - it's very specialised software). There are test plans in this case (though not always), but I am very much in favor of making a person who didn't make the changes that are tested actually doing the final testing. So I am asking if you could provide me with a good and solid list of arguments I can bring up the next time this is discussed. Or to provide counter-arguments, in case you think this is perfectly fine especially when there are formal test cases to test.
As others (and yourself) have noted, developers should unit test their own code. However, after that, any nontrivial product should also be tested by independent person(s) (QA department and/or the client herself). Developers normally work with the developer mindset of "how to make this work?" . A good tester is thinking about "how to break this?" - a very different mindset. Unit testing and TDD does teach developers to change hats to some extent, but you shouldn't rely on it. Moreover, as others have noted, there is always a possibility of misunderstanding requirements. Therefore final acceptance tests should be conducted by someone as close to the client as possible .
{ "source": [ "https://softwareengineering.stackexchange.com/questions/77146", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/8626/" ] }
77,313
Usually I just throw my unit tests together using copy and paste and all kind of other bad practices. The unit tests usually end up looking quite ugly, they're full of "code smell," but does this really matter? I always tell myself as long as the "real" code is "good" that's all that matters. Plus, unit testing usually requires various "smelly hacks" like stubbing functions. How concerned should I be over poorly designed ("smelly") unit tests?
Are unit test smells important? Yes, definitely. However, they are different from code smells because unit tests serve a different purpose and have a different set of tensions that inform their design. Many smells in code don't apply to tests. Given my TDD mentality, I would actually argue that unit test smells are more important than code smells because the code is just there to satisfy the tests. Here are some common unit testing smells: Fragility : do your tests fail often and unexpectedly even for seemingly trivial or unrelated code changes? State Leak : do your tests fail differently depending on, for instance, what order they are run? Setup/Teardown Bloat : Are your setup/teardown blocks long and growing longer? Do they perform any sort of business logic? Slow Runtime : Do your tests take a long time to run? Do any of your individual unit tests take longer than a tenth of a second to run? (Yes, I'm serious, a tenth of a second.) Friction : Do existing tests make it difficult to write new tests? Do you find yourself struggling with test failures often while refactoring? The importance of smells is that that they are useful indicators of design or other more fundamental issues, i.e. "where there's smoke, there's fire". Don't just look for test smells, look for their underlying cause as well. Here, on the other hand, are some good practices for unit tests: Fast, Focused Feedback : Your tests should isolate the failure quickly and give you useful information as to its cause. Minimize Test-Code Distance : There should be a clear and short path between the test and the code that implements it. Long distances create unnecessarily long feedback loops. Test One Thing At A Time : Unit tests should only test one thing. If you need to test another thing, write another test. A Bug Is A Test You Forgot To Write : What can you learn from this failure to write better, more complete tests in the future?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/77313", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/24062/" ] }
77,373
Now that I've done a few trivial things with Scala (which I love for "hello world" and contrived applications!) I am left wondering.. part about maturity of the tools to support development, and part about general applicability. Are the toolsets ready? Is Scala appropriate for use on enterprise / business applications? Would "you" use it on a non-trivial project? Some of my (possibly unfounded) concerns would be: are the IDE and toolsets as rich as what we have to develop .net and java applications (eclipse for Scala seems limited compared to eclipse for java)? are the build / CI / testing toolsets able to effectively deal with Scala? how maintainable is the concise code that can be (encouraged?) written in the language? is it possible to find developers with Scala experience? is there enough critical mass to get help through on-line reference and books that are more than "intro" to the language? So bottom line - is the ecosystem mature enough to use now, or better off waiting to see how it evolves? EDIT: let's say "non-trivial" is a multi-year, multi-release, 10-20 developers project.
While it is true that Scala has been used in the wild at the Guardian and at Twitter, there is one fundamental concern. Much of Java's popularity comes from the fact that it is relatively easy to read and maintain. Scala has an problem here as it can be written in many different styles. OO style vs functional style is the obvious split here, but it gets more complicated when you talk about the 3 levels of Scala . You need to make sure that your team and any potential new hires can all follow the same style, and that the style is simple enough for you to actually be able to hire developers that can be effective (not everyone can hire the top 2%). Tooling support is also not quite there yet, although I expect this gap to be closed fairly quickly. You can also get support for the full Scala stack from the TypeSafe crowd. I think Scala will carve out its niche, but until the levels are actually built into the language/compiler/whatever, I see a maintenance headache coming down on teams after the initial 1-2 years of excited productivity. See this related answer for more details.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/77373", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/25019/" ] }
77,436
this page http://golang.org/doc/go_faq.html writes: although Go has static types the language attempts to make types feel lighter weight than in typical OO languages So my question is exactly is it safely typed with generics (like C#) or loosely typed (like javascript) or optional (like option strict in Vb.Net)
Type safety is not a black-or-white type-safe or not. It's more of a spectrum and some languages can be more type safe than others (and vice versa). However, I think what you're thinking of with C# vs. Javascript is likely static typing (where type-checking happens at compile-time) vs. dynamic typing (where type-checking happens at run-time) -- certainly, that's what the Go FAQ is talking about. Google Go is statically typed, but a number of features make it "appear" to be (at least somewhat) dynamically typed. For example, you do not need to explicitly mark your class as implementing any interfaces. If the method signatures of your class match up with those on the interface, then your class automatically implements that interface (a kind of duck-typing). This is useful for extending built-in classes and classes in third-party libraries, because you can just make up your interface to match the methods on the third-party class and it will automatically implement it. Type safety is actually a different "axis" of the type system. For example, C is a statically-typed language that is not type-safe -- pointers let you do pretty much anything you like, even things that will crash your program. Javascript is dynamically typed, but is also type-safe: you can't perform operations that will crash your program. C# is mostly type-safe, but you can explicitly mark areas of code that are unsafe and do things which are no longer type safe. Google Go is also type-safe in the sense that you can't mess around with types and crash the program (no direct access to pointers).
{ "source": [ "https://softwareengineering.stackexchange.com/questions/77436", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/24257/" ] }
77,450
Many of you out there work for large companies that ship well-known software. I was wondering, how much of original code (basically, code that was "v1.0" release) is left in modern massive applications, such as, say, Firefox, Photoshop, Windows, Linux, etc? I'd really prefer first-hand experience and real-world war stories. Thanks for satisfying my curiosity. EDIT Turns out there's a degree of misunderstanding. What I'm after is basically as follows: when you blame / annotate source code, are there any parts or even whole files untouched since the initial 1.0 release.
More than you'd expect and much older than you'd expect. Even with "total rewrites" and big refactors there are many modules that stay untouched. Péter suggest that for example you won't find old Netscape code in Firefox. Which is wrong, if you search through source code you'll find quite a few files with disclaimer like: * The Initial Developer of the Original Code is * Netscape Communications Corporation. * Portions created by the Initial Developer are Copyright (C) 1994-2000 For example in Linux kernel headers you can find things like: include/linux/if_ppp.h: * Copyright (c) 1989 Carnegie Mellon University. include/linux/quota.h: * Copyright (c) 1982, 1986 Regents of the University of California. include/linux/coda.h: Copyright (c) 1987-1999 Carnegie Mellon University include/linux/mc146818rtc.h: * derived from Data Sheet, Copyright Motorola 1984 (!). include/net/slhc_vj.h: * Copyright (c) 1989 Regents of the University of California.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/77450", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/4994/" ] }
77,529
Radix sort is stable and has a time complexity of O(n). It should be faster than algorithms like Quicksort and Mergesort, yet I hardly ever see it used.
That O(f(n)) really means "on the order of K*f(n)", where K is some constant. For radix sort this K happens to be quite big (at least on the order of the number of bits in the integers being sorted); quicksort, on the other hand, has one of the lowest K among all sorting algorithms and an average complexity of n*log(n). Thus, in real-life scenarios quicksort will very often be faster than radix sort.
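As a back-of-the-envelope illustration (the exact constants depend heavily on the implementation and the hardware): to sort n = 1,000,000 64-bit keys, an LSD radix sort that processes one byte per pass needs 8 passes, and each pass reads the whole array once for counting and again for redistribution into a temporary buffer, so very roughly 24 * n memory operations, many of them scattered writes, plus the extra buffer. Quicksort performs about 1.39 * n * log2(n), which is roughly 28 * n comparisons on average for this n, but the comparisons are cheap, the access pattern is mostly sequential and cache-friendly, and it sorts in place. Similar operation counts, very different constant factors in practice.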
{ "source": [ "https://softwareengineering.stackexchange.com/questions/77529", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/25668/" ] }
77,530
A co-worker and I are having a debate about what's best. Both approaches work, and work well, but is there a general consensus on not making a call to exit? What's better? To call exit within a procedure to avoid executing the rest of the code, as in ... if Staging.Current = nil then exit; DoSomethingA(FileNameA); DoSomethingB(FileNameB); Staging.DeleteCurrent; or to not call exit and instead wrap the code in a begin and end: if Staging.Current <> nil then begin DoSomethingA(FileNameA); DoSomethingB(FileNameB); Staging.DeleteCurrent; end; Both work. I prefer to use the exit statement since it results in fewer lines of code and looks cleaner to me, but is there any reason, or any consensus among programmers, to avoid using exit to leave a procedure?
This is a religious-war type of question. For reference, the definitive Stack Overflow discussion is here: https://stackoverflow.com/questions/36707/should-a-function-have-only-one-return-statement Many people object to methods that have multiple exit points. They would argue that it makes it harder to reason about a method's behaviour. On the other hand, others take the view that once a method has completed its work, it is reasonable for it to quit. Those holding this viewpoint would argue that a C return statement, e.g. return 42, is clear and reasonable anywhere in a function. I personally feel that there is a clear distinction between code that exits willy-nilly from many different points, and code as presented in your question, which is known as a guard clause. One of the great advantages of guard clauses shows up when you have multiple tests: code written without guard clauses accumulates significant indentation, which most people agree is to be considered harmful. It is my perception that the consensus opinion is that guard clauses are better than any alternative.
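The pattern looks essentially the same in any language; here is a quick sketch in Go (the type, field and helper names are invented to mirror the question):

```go
package main

type Staging struct {
	Current              *string
	FileNameA, FileNameB string
}

func (s *Staging) DeleteCurrent() { s.Current = nil }
func doSomethingA(name string)    {}
func doSomethingB(name string)    {}

// Nested version: every extra precondition adds another level of indentation.
func processNested(s *Staging) {
	if s.Current != nil {
		doSomethingA(s.FileNameA)
		doSomethingB(s.FileNameB)
		s.DeleteCurrent()
	}
}

// Guard-clause version: bail out early, keep the "happy path" flat.
func processGuarded(s *Staging) {
	if s.Current == nil {
		return
	}
	doSomethingA(s.FileNameA)
	doSomethingB(s.FileNameB)
	s.DeleteCurrent()
}

func main() {
	s := &Staging{FileNameA: "a.txt", FileNameB: "b.txt"}
	processGuarded(s) // Current is nil, so this returns early and does nothing
	processNested(s)  // same behaviour, one extra level of nesting
}
```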
{ "source": [ "https://softwareengineering.stackexchange.com/questions/77530", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/16532/" ] }
77,637
Possible Duplicate: What “version naming convention” do you use? I am currently debating between the traditional versioning convention [major].[minor].[revision] and my own, almost whimsical, [YYYY].[MM].[DD].[hh][mm] for a new project I am starting. I understand that [major].[minor].[revision] is probably the most popular versioning method on the planet and it is indeed pretty straightforward and reasonable, except that determining which changes merit the label "major", "minor" or even "revision" could be... subjective . A versioning system based on a timestamp is purely non-subjective and guarantees uniqueness. Which one would you choose for your project and why?
Why not combine them: [major].[minor].[YYYYMMDDHHMM]? That way you get an easy way of showing the version (the available feature set) and, additionally, a way of seeing when it was built. This method also allows you to have two different versions out in the field (Ver 1 / Ver 2) simultaneously.
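For what it's worth, generating such a string is trivial; a sketch in Go (the major/minor numbers are of course placeholders):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	major, minor := 1, 4
	// Go's reference time "2006-01-02 15:04" spells out YYYYMMDDHHMM here.
	stamp := time.Now().Format("200601021504")
	fmt.Printf("%d.%d.%s\n", major, minor, stamp) // e.g. 1.4.201106221830
}
```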
{ "source": [ "https://softwareengineering.stackexchange.com/questions/77637", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/25695/" ] }
77,757
Technically, is there a difference between these two words, or can we use them interchangeably? Both of them more or less describe the logical sequence of steps to follow in solving a problem, don't they? So why do we actually use two such words if they mean the same thing? Or, in case they aren't synonymous, what is it that differentiates them? In what contexts are we supposed to use the word "pseudocode" vs. the word "algorithm"?
Wikipedia's definition of an algorithm: "In mathematics and computer science, an algorithm is an effective method expressed as a finite list of well-defined instructions for calculating a function. Algorithms are used for calculation, data processing, and automated reasoning." Algorithms can be described in various ways, from pure mathematical formulas to complex graphs, more often than not without any pseudocode at all. Pseudocode describes how you would implement an algorithm without getting into syntactic details. So no, they're not really synonymous.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/77757", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/23010/" ] }
77,758
I thought Unicode was designed to get around the whole issue of having lots of different encodings, due to the small address space (8 bits) of most of the prior attempts (ASCII, etc.). Why then are there so many Unicode encodings? Even multiple versions of (essentially) the same one, like UTF-8, UTF-16, etc.
Because people don't want to spend 21 bits on each character. On all modern systems this would essentially mean using three (in practice four) bytes per character, which is three to four times more than what people were used to, so they were unwilling to adopt Unicode at all. Compromises had to be found: e.g. UTF-8 is great for English text because legacy ASCII files need not be converted at all, but it is less useful for European languages and of little use for Asian languages. So basically, yes, we could have defined a single universal encoding as well as a single universal character chart, but the market wouldn't have accepted it.
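To put some numbers on the trade-off, here is a small Go sketch comparing how many bytes a few sample strings take under each encoding:

```go
package main

import (
	"fmt"
	"unicode/utf16"
)

func sizes(s string) {
	runes := []rune(s)
	utf8Bytes := len(s)                        // Go strings are stored as UTF-8
	utf16Bytes := len(utf16.Encode(runes)) * 2 // 2 bytes per UTF-16 code unit
	utf32Bytes := len(runes) * 4               // fixed 4 bytes per code point
	fmt.Printf("%q  UTF-8: %2d  UTF-16: %2d  UTF-32: %2d\n",
		s, utf8Bytes, utf16Bytes, utf32Bytes)
}

func main() {
	sizes("hello")  // plain ASCII:        5 / 10 / 20
	sizes("héllo")  // one accented char:  6 / 10 / 20
	sizes("日本語") // CJK text:           9 /  6 / 12
}
```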
{ "source": [ "https://softwareengineering.stackexchange.com/questions/77758", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/1350/" ] }
77,897
During my first implementation extending the Java collection framework, I was quite surprised to see that the collection interface contains methods declared as optional. The implementer is expected to throw UnsupportedOperationExceptions if unsupported. This immediately struck me as a poor API design choice. After reading much of Joshua Bloch's excellent "Effective Java" book, and later learning he may be responsible for these decisions, it didn't seem to gel with the principles espoused in the book. I would think declaring two interfaces: Collection, and MutableCollection which extends Collection with the "optional" methods would have led to much more maintainable client code. There's an excellent summary of the issues here . Was there a good reason why optional methods were chosen instead of the two interfaces implementation?
The Java Collections API Design FAQ provides the answer. In short, the designers saw a potential combinatorial explosion of needed interfaces: modifiable, unmodifiable view, delete-only, add-only, fixed-length, immutable (for threading), and so on, for each possible set of supported optional methods.
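A rough sketch of why the interfaces multiply, written in Go only because interfaces compose tersely there; the names are invented, but the same combinatorics would apply to a hypothetical Java Collection/MutableCollection split:

```go
package main

// The "clean" split sounds like just two interfaces...
type Collection interface {
	Len() int
	Contains(v int) bool
}

// ...but real collections mix capabilities, and each mix wants its own interface:
type Growable interface { // add-only, e.g. an append-only log
	Collection
	Add(v int)
}

type Shrinkable interface { // delete-only, e.g. a drain-only queue
	Collection
	Remove(v int)
}

type FixedSizeMutable interface { // set-but-no-resize, like the list view of an array
	Collection
	Set(i, v int)
}

// A fully mutable collection needs Add, Remove and Set; an unmodifiable view
// needs none of them -- and that is before immutable and thread-safe variants
// join the party. The combinations multiply quickly.

func main() {}
```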
{ "source": [ "https://softwareengineering.stackexchange.com/questions/77897", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/7251/" ] }
77,904
I have read about and used many of the good LINQ to ___ providers. I have also read that some of the LINQ to ___ providers are not worth learning, as there are better techniques. I was just wondering which of the bunch I should concentrate on, and which have more suitable alternatives. Thanks
{ "source": [ "https://softwareengineering.stackexchange.com/questions/77904", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/19839/" ] }
78,016
A company with a solid open source project competing against a traditional closed-source product seems impossible to beat. I read this article wherein the author lays out this scenario: Suppose one could divide a software market—say network management—between two products. One did everything possible and cost $1 million, and the other only did 10% as much, but was free and open. The price tag of the commercial solution would automatically filter out a large number of users, and those people would have to turn to open source. But some users would be satisfied with the 10% functionality and choose it outright. For example, I have an original Macintosh computer on my desk. It runs a word processor called MacWrite. It does everything, with the exception of spell check, that I need a word processor to do. I can format paragraphs, choose fonts, make text bold or italic, and even paste in pictures and graphs. All in a "what you see is what you get" user interface. It takes up 76K of disk space. That's "K" as in "kilobyte." Compare that to Microsoft Word. I think the last time I installed just Word it was around 30MB, many times larger than MacWrite, but I don't use it for much more than I use MacWrite. Like me, many users are happy with basic functionality. They don’t need all the bells and whistles. But back to my analogy. In the beginning, the commercial company would probably ignore the open source project. It represents no threat to their revenue stream, so why should they pay attention to an upstart? If this project is healthy and sustainable, however, in a year or so perhaps it does 15%-20% of what the commercial product does. This should bleed a few more users from their business, and maybe now they start to pay attention. Most likely, this attention would take the form of marketing against the project. They would claim it is too small or too underpowered to take seriously. And in the short run this would probably work. But the mere fact that they acknowledged the project would pique interest. Some people would determine for themselves that it was neither too small nor too underpowered and would start using it. Another year or two goes by and now the project is up to 50% of the functionality of the commercial product. People start joining the project in droves. The commercial company now has to do something. What do they do? They add more features. Remember, the commercial product already did 100% of what people needed. So what kind of features could they add? Unnecessary ones. They might change the look of the user interface or add features outside of network management. In any case, this development will cost money, and that will start to eat into the company's margins. Finally, with a healthy community and this influx of new users, the open source project will eventually approach 80%-90% of what the commercial product does. Having exhausted all avenues of generating revenue, the commercial company still has one final option: put the screws to their remaining customers. Find ways to charge them more, to eek out what they can from their investment, which ultimately will drive their clients away. Farfetched? I don't think so. There are only two main requirements: First, find a market where open source provides a compelling alternative, such as network management. Second, build a sustainable community around the open source project. It seems very plausible. If you were the closed-source company, how would you compete??
Since you can't compete on price, then compete on all of the other selling points that the software has: features quality effectiveness integration with other software service support direct selling Basically, you do what every other company does when they're in price competition: keep pace, or change the game.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/78016", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/14969/" ] }
78,152
Do experienced programmers actually ever use debuggers, and if so, under what circumstances? Although in the answer to that question I said "months" ago, I probably meant "years" - I really don't use a debugger. So my specific, answerable question is: under which circumstances would you, as an experienced programmer, use a debugger?
I would say that not using a debugger is a sign of inexperience. Stepping through code line by line is the best way to trace the flow of execution.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/78152", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/26978/" ] }
78,225
The following SQL queries are the same: SELECT column1, column2 FROM table1, table2 WHERE table1.id = table2.id; SELECT column1, column2 FROM table1 JOIN table2 ON table1.id = table2.id; And they certainly result in the same query plans on every DBMS I've ever tried. But every so often, I read or hear the opinion that one is definitely better than the other. Naturally, these claims are never substantiated with an explanation. Where I work, the second version seems to be favored by the majority of other devs, so I also tend toward that style to minimize surprise. But in my heart, I really prefer the first one (since that's how I originally learned it). Is one of these forms objectively better than the other? If not, what would be the reasons to use one over the other?
I find that the second form is better. That may be because that is how I learned it, I'll admit, but I do have one concrete reason - separation of concerns. Putting the fields you are using to join the tables in the where clause can lead to difficulties in understanding queries. For example, take the following query: select * from table1, table2, table3, table4 where table1.id = table2.id and table2.id = table3.id and table3.id = table4.id and table1.column1 = 'Value 1' The above query has table-joining conditions and actual business-logic conditions all combined into a single space. With a large query, this can be very difficult to understand. However, now take this code: select * from table1 join table2 on table1.id = table2.id join table3 on table2.id = table3.id join table4 on table3.id = table4.id where table1.column1 = 'Value 1' In this case, anything having to do with the tables or how they relate is all isolated to the from clause, while the actual business logic for query restriction is in the where clause. I think that is just much more understandable, particularly for larger queries.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/78225", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/8656/" ] }
78,263
This week at work I got agiled yet again. Having gone through the standard agile, TDD, shared-ownership, ad hoc development methodology of never planning anything beyond a few user stories on a piece of card, verbally chewing the cud over the technicalities of a 3rd-party integration ad nauseam without ever doing any real thinking or due diligence, and architecturally coupling all production code to the first test that comes into anyone's head for the past few months, we reach the end of a release cycle and, lo and behold, the main externally visible feature that we have been developing is too slow to use, buggy, becoming labyrinthine in its complexity, and completely inflexible. During this process "spikes" were done but never documented, and not a single architectural design was ever produced (there was no FS, so what the hell, eh - if you don't know what you are developing, how can you plan or research it?). The project passed from pair to pair, each of whom only ever focused on a single user story at a time, and, well, the result was inevitable. To resolve this I went off the radar, went (the dreaded) waterfall, planned, coded, basically didn't swap off the pair, and tried as much as I could to work alone - focusing on solid architecture and specifications rather than unit tests, which will come later once everything is pinned down. The code is now much better and is actually totally usable, flexible and fast. Certain people seem to have really resented me doing this and have gone out of their way to sabotage my efforts (possibly unconsciously) because it goes against the holy process of agile. So how do you, as a developer, explain to the team that it is not "un-agile" to plan their work, and how do you fit planning into the agile process? (I'm not talking about the IPM; I'm talking about sitting down with a problem and sketching out an end-to-end design that says how the problem should be solved, in sufficient detail that anyone who works on it knows what architecture and patterns they should be using and where the new code should integrate into existing code.)
I think (and I may be going out on a limb here) that ALL projects should have a bit of classic waterfall: The initial analysis and specification phase is essential. You must know what you are doing, and you must have it in writing. Getting the requirements in writing is difficult and time-consuming, and easy to do badly. That's why so many skip it - any excuse will do: "Oh we do agile so we don't need to do that." Once upon a time, before agile, it was "oh I'm really clever and know how to solve this, so we don't need to do that." The words have changed a bit but the song is essentially the same. This is of course all bull: you have to know what you are to do - and a specification is the means by which developer and client can communicate what is intended. Once you know what you have to do - sketch out an architecture. This is the "get the big picture right" part. There is no magic solution here, no one right way, and no methodology that will help you. Architectures are the SYNTHESIS of a solution, and they come partly from inspired genius, and partly from hard-won knowledge. At each of these steps there will be iteration: you find things wrong or missing, and go fix 'em. That's debugging. It's just done before any code gets written. Some see these steps as boring, or not needed. In fact, these two steps are the most important of all in solving any problem - get these wrong and everything that follows will be wrong. These steps are like the foundations of a building: get them wrong and you have a Leaning Tower of Pisa. Once you have the WHAT (that's your spec) and the HOW (that's the architecture - which is a high-level design), then you have tasks. Usually lots of them. Bust the tasks up however you want, allocate them however you want. Use whatever methodology-of-the-week you like, or that works for you. And get those tasks done, knowing where you are heading and what you need to accomplish. Along the way there will be false trails, mistakes, and problems found with the spec and the architecture. This prompts things like: "Well, all that planning was a waste of time then." Which is also bull. It just means you have FEWER foul-ups to deal with later. As you find problems with the high-level, early-days stuff, FIX THEM. (And on a side issue here: there is a big temptation I've seen over and over to try to meet a spec which is expensive, difficult, or even impossible. The correct response is to ask: "Is my implementation broken, or is the spec broken?" Because if an issue can be sorted out quickly and cheaply by changing the spec, then that is what you should do. Sometimes this works with a client, sometimes it does not. But you won't know if you don't ask.) Finally - you must test. You can use TDD or anything else you like, but this is no guarantee that at the end you did what you said you would do. It helps, but it does not guarantee. So you need to do a final test. That's why things like Verification and Validation are still big items in most approaches to project management - be that development of software or making bulldozers. Summary: you need all the up-front boring stuff. Use things like Agile as a means of delivery, but you can't eliminate old-fashioned thinking, specifying, and architectural design. [Would you seriously expect to build a 25-story building by putting 1000 laborers on site and telling them to form teams to do a few jobs? Without plans. Without structural calculations. Without a design or vision of how the building should look. And with only knowing that it is a hotel.
No - didn't think so.]
{ "source": [ "https://softwareengineering.stackexchange.com/questions/78263", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/-1/" ] }
78,353
I'm wondering how far people should take the validation of e-mail address. My field is primarily web-development, but this applies anywhere. I've seen a few approaches: simply checking if there is an "@" present, which is dead simple but of course not that reliable. a more complex regex test for standard e-mail formats a full regex against RFC 2822 - the problem with this is that often an e-mail address might be valid but it is probably not what the user meant DNS validation SMTP validation As many people might know (but many don't), e-mail addresses can have a lot of strange variation that most people don't usually consider (see RFC 2822 3.4.1 ), but you have to think about the goals of your validation: are you simply trying to ensure that an e-mail message can be sent to an address, or that it is what the user probably meant to put in (which is unlikely in a lot of the more obscure cases of otherwise 'valid' addresses). An option I've considered is simply giving a warning with a more esoteric address but still allowing the request to go through, but this does add more complexity to a form and most users are likely to be confused. While DNS validation / SMTP validation seem like no-brainers, I foresee problems where the DNS server/SMTP server is temporarily down and a user is unable to register somewhere, or the user's SMTP server doesn't support the required features. How might some experienced developers out here handle this? Are there any other approaches than the ones I've listed? Edit: I completely forgot the most obvious of all, sending a confirmation e-mail! Thanks to answerers for pointing that one out. Yes, this one is pretty foolproof, but it does require extra hassle on the part of everyone involved. The user has to fetch some e-mail, and the developer needs to remember user data before they're even confirmed as valid.
There is no 100% reliable way of confirming a valid email address other than sending an email to the user and waiting for a response, which is what most forums do. I would go with the simple "@" validation rule and then email the user to confirm their email address. Although this is just my personal opinion... I await other suggestions.
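A minimal sketch of that approach in Go (the mailer and storage are stubbed out, and the confirm URL and helper names are invented for illustration):

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
	"strings"
)

// looksLikeEmail does only the cheap sanity check; the real proof comes from
// the confirmation round-trip, not from a clever regex.
func looksLikeEmail(addr string) bool {
	at := strings.Index(addr, "@")
	return at > 0 && at < len(addr)-1
}

func newToken() string {
	b := make([]byte, 16)
	_, _ = rand.Read(b)
	return hex.EncodeToString(b)
}

func register(addr string) error {
	if !looksLikeEmail(addr) {
		return fmt.Errorf("%q does not look like an email address", addr)
	}
	token := newToken()
	// Store (addr, token) as "unconfirmed", then send a link containing the
	// token; the address is only trusted once the link is actually visited.
	fmt.Printf("send to %s: https://example.com/confirm?token=%s\n", addr, token)
	return nil
}

func main() {
	_ = register("someone@example.com")
}
```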
{ "source": [ "https://softwareengineering.stackexchange.com/questions/78353", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/76460/" ] }
78,416
Too often, I can see that there are many viable programmers without college degrees in Computer Science, Informatics, etc. Now that I've been reading more articles about underperforming education and the insignificance of college degrees (especially as a programmer), will a college degree ever hurt my employability? (Also accounting for four years from now when I do graduate) P.S. I'm going to UC Irvine; would the school itself matter in the significance of the degree?
No. The reason it seems like quite a few self-taught programmers "make it big without a degree" is the same as the reason why it seems like all people who make it to 120 lived on cigarettes and bacon and drank a bottle of whiskey every day: exceptions draw a lot of attention . Good self-taught/self-made programmers are actually quite rare. I've inherited codebases in the past that were built by self-taught programmers. Needless to say, atrocities such as hash tables being used as arrays abounded. You don't hear about it because it's pretty much what can be expected - it's only when you see work in real life that was done by people without formal Computer Science knowledge that you can see how much they're missing. Of course, it's a sliding scale (in other words, look at it through a pair of 80/20 goggles), and individually some people can be great - but on the whole - everything else being the same - the smart money is on the person with a degree.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/78416", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/8020/" ] }
78,547
Recently I started my first job as a junior developer, and I have a more senior developer in charge of mentoring me in this small company. However, there have been several times when he has given me advice on things that I just couldn't agree with (it goes against what I learned in several good books on the topic written by experts, and questions I asked on some Q&A sites also agree with me), and given our busy schedule, we probably have no time for long debates. So far, I have been trying to avoid the issue by listening to him and raising a counterpoint based on what I've learned as current good practice. He raises his original point again (most of the time he will just say "best practice" or "more maintainable" without going any further), I take a note (since he didn't raise a new point to counter my counterpoint), think about it and research it at home, but don't make any changes (I'm still not convinced). But recently he approached me yet again, saw my code, and asked me why I haven't changed it to his suggestion. This is the third time in 2-3 weeks. As a junior developer, I know that I should respect him, but at the same time I just can't agree with some of his advice. Yet I'm being pressured to make changes that I think will make the project worse. Of course, as an inexperienced developer I could be wrong and his way might be better; it may be one of those exceptional cases. My question is: what can I do to better judge whether a senior developer's advice is good, bad, or perhaps good but outdated in today's context? And if it is bad or outdated, what tactics can I use to avoid implementing it his way, despite his 'pressure', while still showing that I respect him as a senior?
First, as a senior developer, I expect the juniors I lead on projects to bring their concerns to me in a straightforward and direct manner. If they disagree, that's perfectly alright with me. In some cases I will take action on their concerns. In most cases, their concerns are put aside with a short explanation of the reasoning, not out of disrespect for the developer him/herself, but because of some other reason such as: The junior doesn't have all of the information at hand to understand the decision as it's been made. In some cases, a little explanation can help the developer move past the concern and deal with an unideal situation. The junior has BAD information. Don't forget that you are, in fact, a junior. That's the equivalent of being a teenager in software terms. I'm sure you have a lot of great ideas, but it's just possible that you don't know everything. I find the most resistant junior developers are the ones who firmly believe they know what's best for the code, the company, the world. These developers are better served by acquiring humility. The decision to do something a particular way was made above the senior's head. The senior still works for someone else in the end. There may actually be a better way to do something, a more efficient way to write something, or better software/hardware to help do the job. The business still drives the decisions, though. Business managers, directors, VPs, etc. often make decisions that impact the development process. These are beyond the control of the senior, and when juniors complain about it, all they're doing is adding to the senior's stress. The senior just flat out doesn't have time to take it into account. There are deadlines, and sometimes changing patterns, practices, and behaviors midstream is costly to that deadline. Since it's his neck on the chopping block, it's often more important to get the product out functional and on time than "perfectly written". These are just the things I can think of off the top of my head. There are a ton of reasons why an idea, practice, or concept might be dismissed or discarded by someone higher up than you. Many of them are unpleasant, but they all boil down to the fact that we're all human and we all have opinions. His opinion just happens to be numerically superior to yours at the moment. Bearing those concepts in mind, you should continue to bring your concerns to the senior developer. Find another senior developer who may be able to fill in the blanks. Many senior developers are where they're at because they are better with software than with people. Some are where they're at because they knew whose butt to kiss when they were interns. Find one who actually understands what it means to mentor someone and get their honest opinion. They may disagree with you and fill in the blanks you don't have. They may agree with you and help to rally your cause or make your situation better. At no time should you mount any kind of insurrection. Even if you believe in your heart that your way is right, you have been given an instruction to follow and you should follow it (unless it's illegal, obviously). If you have trouble following these instructions, you may want to reason out why, because you're going to discover this pattern of behavior to be very prevalent in many companies that produce any kind of software. Your best option is to continue to do your job ethically and professionally.
Complete the software you're asked to build in an exemplary fashion, and escape the situation by being promoted out of it. If promotions don't come, you'll have plenty of references and experience to pursue opportunities in other departments or companies.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/78547", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/25920/" ] }
78,998
I've added a couple of user stories that address some technical debt to my Pivotal Tracker board. Should I consider them as features (keeping my velocity level) or as chores/bugs (lowering my velocity)? I understand it won't make any difference in the long run if I do one or the other consistently, but every time I add a technical-debt story I have to make the decision. Some thoughts: They aren't actually bugs; they don't break anything. The users haven't requested anything, as it is low-level implementation that doesn't affect them, but it will make long-term development easier. If you define features as stories that add value to the users, well, a) they don't, as the users won't see any direct benefit, but then b) they do, because they make future development/maintenance possible, which does add value, just not right now. I'm not deciding whether or not to actually do the work, or when to schedule it; I just want to know what I should call technical debt in my project-management tool, and why.
It is a feature: "As a [Developer], I want to [refactor the whizbang library] in order to [simplify maintenance and speed execution]." It is defined, scheduled, and tracked like any other feature. If implementing this feature is not sufficiently valuable (to the client or to you) for it to ever be scheduled, that's a different problem.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/78998", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/10022/" ] }
79,041
My first commit in my project resulted in the nightly build being broken and people are all over me as we are nearing the release. I want to send an apology email that should sound sincere and at the same time hinting that this was my first commit and this would not be repeated any more. Being a non-native English speaker, I have difficulties coming up with correct words. Can someone please help?
Don't apologize! Breaking the build once in a blue moon is not a big deal, and should never be a show-stopper. It's your manager's fault for not configuring continuous, automated builds. Also, I bet your team fails the 'Joel Test' and can't make a build in one step. If so, this would be another thing that you shouldn't apologize for. Indeed, it's a team anti-pattern.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/79041", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/7805/" ] }
79,111
I am working in a .Net, C# shop and I have a coworker that keeps insisting that we should use giant Switch statements in our code with lots of "Cases" rather than more object oriented approaches. His argument consistently goes back to the fact that a Switch statement compiles to a "cpu jump table" and is therefore the fastest option (even though in other things our team is told that we don't care about speed). I honestly don't have an argument against this...because I don't know what the heck he's talking about. Is he right? Is he just talking out his ass? Just trying to learn here.
He is probably an old C hacker, and yes, he is talking out of his ass. .NET is not C++; the .NET compiler keeps getting better, and most clever hacks are counter-productive, if not today then in the next .NET version. Small functions are preferable because .NET JITs each function once, before it is first used. So, if some cases never get hit during the lifetime of a program, no cost is incurred in JIT-compiling them. Anyhow, if speed is not an issue, there should not be optimizations. Write for the programmer first, for the compiler second. Your co-worker will not be easily convinced, so I would prove empirically that better-organized code is actually faster. I would pick one of his worst examples, rewrite it in a better way, and then make sure that your code is faster. Cherry-pick if you must. Then run it a few million times, profile it, and show him. That ought to teach him well. EDIT: Bill Wagner wrote, in Item 11: Understand the Attraction of Small Functions (Effective C#, Second Edition): "Remember that translating your C# code into machine-executable code is a two-step process. The C# compiler generates IL that gets delivered in assemblies. The JIT compiler generates machine code for each method (or group of methods, when inlining is involved), as needed. Small functions make it much easier for the JIT compiler to amortize that cost. Small functions are also more likely to be candidates for inlining. It's not just smallness: simpler control flow matters just as much. Fewer control branches inside functions make it easier for the JIT compiler to enregister variables. It's not just good practice to write clearer code; it's how you create more efficient code at runtime." EDIT 2: So ... apparently a switch statement is faster and better than a bunch of if/else statements, because one comparison is logarithmic and the other is linear. http://sequence-points.blogspot.com/2007/10/why-is-switch-statement-faster-than-if.html Well, my favorite approach to replacing a huge switch statement is a dictionary (or sometimes even an array, if I am switching on enums or small ints) mapping values to the functions that get called in response to them. Doing so forces one to remove a lot of nasty shared spaghetti state, but that is a good thing. A large switch statement is usually a maintenance nightmare. So ... with arrays and dictionaries the lookup takes constant time, and little extra memory is wasted. I am still not convinced that the switch statement is better.
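For what it's worth, the "dictionary of handlers" idea looks like this in Go (in C# a Dictionary&lt;string, Action&gt; plays the same role); the command names are made up for illustration:

```go
package main

import "fmt"

func main() {
	// Each case of the would-be switch becomes an entry in a map.
	handlers := map[string]func(){
		"start": func() { fmt.Println("starting") },
		"stop":  func() { fmt.Println("stopping") },
		"pause": func() { fmt.Println("pausing") },
	}

	cmd := "stop"
	if h, ok := handlers[cmd]; ok {
		h() // constant-time lookup, no giant switch to maintain
	} else {
		fmt.Println("unknown command:", cmd)
	}
}
```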
{ "source": [ "https://softwareengineering.stackexchange.com/questions/79111", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/7274/" ] }
79,134
I'm approaching the 1 year mark as a leader of a small development team (4 members, including myself) inside of a small software company. I'd like to give my team the opportunity to evaluate how I am doing as their team leader who is also a developer on the team. I find it's hard to get good feedback with an open ended 'How am I doing?' question, so what specific questions are the most important to ask? Ideally I'd like to be able to provide 3 simple questions that my team would be able to answer. Which are the most important parts that you would like to give your team leader feedback on? My initial thought was to allow my team to answer these questions anonymously? Is this a good idea?
Anonymously is best... However, I would take them out to lunch one at a time. I would ask them what they think could be improved at the company, and not talk about yourself (you are the company from their perspective). I think a lot of this depends on how you interact with them. Feedback is best given informally, and is best gathered by watching what they complain about over time. If someone stops talking, then I'm worried. My 2c.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/79134", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2830/" ] }
79,221
I have a problem reporting progress to my employer. I am a part-time programmer, handling a software project for my school's (non-technical) department. Contact people: 1. the staff member who actually uses the software and raises feature requests, and 2. my boss (a non-programmer), who is not a user of the software. The project's nature: it is ready-made software bought from a third party. I have to modify it or add features/functions in order to cater to the department's needs. The software needs to be used throughout the semester. Not all features need to be there at the beginning, hence we are using an agile model: when the staff needs a certain feature, they raise a request and I make the changes. By the end of the semester, I suppose, all the required features will have been raised and implemented. The problem: every time my boss asks me about progress, I can't answer, because I don't know how to. I don't have a complete list of all the required features. Even though I have completed the features that were raised last week, I still can't tell my boss I have "completed", because new features keep coming in too, and I don't know how many. I can't say "we are at X% completion" or "we are going to complete it by xxx". Sometimes, out of 3 requests, I manage to complete 2, so I tell my boss "I have completed 2, but there is one feature not complete yet". After a long period of time, it sounds like "I always have something unfinished, after so long". Being unable to report progress makes me look really bad. It's not about how much I've done; it's about how to let people know. If I were the manager and my staff kept failing to report progress to me for months, I would feel this person was incapable too. Do you have any idea how to report, or answer a question as simple as, "What is the status/progress of the software modification?" UPDATE: My boss isn't involved in development tasks directly, so she doesn't have a clue what I am doing or how the program works. We don't meet regularly, as she is busy, and I feel it would be a waste of time because she is not the main user and doesn't know the details of the program. I meet regularly with the staff member who uses and knows the software better. I find it hard to explain the progress to my boss.
This is a common problem when you're a programmer who works independently and you report to somebody who's not technical. Bosses like that mostly want to be able to figure out a few things: How happy are the users? Are the things the users want getting done? Is what you're doing worth the money you're being paid? An Agile burn-down chart or anything else like that would be a terrible idea! As you said, your boss is really busy, so they wouldn't have time to learn about it, and probably aren't interested in it anyway. So if I were you, I'd email them a report once a week containing: An "executive summary" at the start: "Finished 3 features this week, and got 2 new feature requests. At the start of this week there were 11 unfinished feature requests, and at the end there were 10." A feature status list, with a brief sentence each, in three groups: the features you got done during the week, the feature requests that came in during the week, and the other features in the "backlog". A brief discussion of anything that was complicated or unusual, preferably in non-technical language. If I were your boss and I hadn't been getting any reports, I'd be very happy to get that every week. And if I wanted something different, I'd ask you for it.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/79221", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/26222/" ] }
79,487
Just curious to hear from other people who may have been in similar situations. I work for a small startup (very small) where I am the main developer for a major part of the app they are building; the other dev they have works on a different area than I do, so he couldn't take over my part. I've been with the company 5 months or so, but I am looking at going to a more stable company soon, because it's just getting to be too much stress, overtime, and pressure for too little benefit, and I miss working with other developers who can help out on a project. The guy is happy with my work and I think I've helped them get pretty far, but I've realized I just don't like being this much "on the edge", as it's hard to tell what the direction of the company is going to be since it's so new. Also, even though I'm the main dev for the project, I would still only consider myself a mid-level dev and am selling myself as such in the new job search. Just to add more detail: I'm not a partner or anything in the company, and this was never discussed, so I just work on a W2 (with no benefits, of course). I work from home, so that makes it easier to leave, I guess, but I don't want to just screw the guy over, yet I also don't want to be tied in for too long. Obviously I would plan to give at least 2 weeks' notice, but should I give more? How should I bring up the subject, because I know it's going to be a touchy thing to bring up? Any advice is appreciated. UPDATE: Thanks everyone for posting on this. I have now completed the process of accepting an offer with a larger company and quitting the startup. I have given 2 weeks' notice and have offered to make myself available after that if needed; basically it's a really small company at this point, so it would only be one dev that I would have to deal with... Anyway, it looks like it may work out well as far as maintaining a good relationship with the founder for future work together; I made it out to be more of a personal/lifestyle issue than about their flaws/shortcomings, which definitely seems to help in leaving on a good note.
As harsh as it may sound, it's not your problem. The owner should be smart enough to have a contingency plan for key personnel leaving; if not, again, it is not your problem. Look at it the other way: if you have agreed to start at your new job on a particular date, you do not want to jeopardise it with delays from your old job. It sounds like you are going to do the right thing before leaving, e.g. keeping the documentation up to date, knowledge transfer, etc.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/79487", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/10087/" ] }
79,586
Where I work, developers are always telling me that "I added this just in case, for the future" or "I think it's a good idea to do this because they'll probably want it some day". I think it's great that they're proactive in trying to anticipate future changes, but I can't help thinking that it's unnecessary and risks writing code that may never be needed and is therefore unproductive (I also think that some developers just want to try out something new for the sake of it). Are the arguments for future-proofing invalid if you just write good, clean, organised code?
Well, first of all, there are some things that need clarification: future-proofing is not adding stuff. Future-proofing is making sure you can easily add code/features without breaking existing functionality. What this means is that writing "future-proof" code is ensuring that the code is written in a loosely coupled manner, sufficiently abstract, but also that it does not completely hide abstraction levels, so there's always a way to go down to the lower abstraction levels if necessary. Writing future-proof code is an art in itself and is tightly coupled with SOLID practices for component versioning, separation of concerns, layering, and abstractness of functionality. Future-proofing has nothing to do with adding features ahead of time, but with making sure you can add features in the future in a non-breaking manner, through the good design of existing code/libraries. My 2c
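A toy illustration of the "add without breaking" idea, sketched in Go (the names are invented; the point is the interface boundary, not the specifics):

```go
package main

import "fmt"

// Existing code depends only on this small abstraction...
type Notifier interface {
	Notify(user, msg string)
}

func OnSignup(n Notifier, user string) {
	n.Notify(user, "welcome!")
}

// ...so today's implementation:
type EmailNotifier struct{}

func (EmailNotifier) Notify(user, msg string) { fmt.Println("email to", user+":", msg) }

// ...and tomorrow's feature can be added without touching OnSignup at all:
type SMSNotifier struct{}

func (SMSNotifier) Notify(user, msg string) { fmt.Println("sms to", user+":", msg) }

func main() {
	OnSignup(EmailNotifier{}, "alice")
	OnSignup(SMSNotifier{}, "bob")
}
```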
{ "source": [ "https://softwareengineering.stackexchange.com/questions/79586", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/19799/" ] }
79,895
Is bubble sort the least efficient sorting algorithm in Big-O terms? If the answer is no, then what is the least efficient sorting algorithm?
I vote for BogoSort as the worst, if you are comparing based on worst-case performance only! Visit this wiki link to get a general idea of the run-time comparisons of different sorts. Sort performance is always highly dependent on your data and scenario; it's hard to say that any one algorithm is always the worst.
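For the curious, BogoSort really is just "shuffle until sorted"; a quick Go sketch (don't run it on anything bigger than a handful of elements):

```go
package main

import (
	"fmt"
	"math/rand"
	"sort"
)

// bogoSort shuffles until the slice happens to be sorted.
// Expected running time is O(n * n!), which is why nobody uses it.
func bogoSort(a []int) {
	for !sort.IntsAreSorted(a) {
		rand.Shuffle(len(a), func(i, j int) { a[i], a[j] = a[j], a[i] })
	}
}

func main() {
	a := []int{3, 1, 2}
	bogoSort(a)
	fmt.Println(a) // [1 2 3]
}
```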
{ "source": [ "https://softwareengineering.stackexchange.com/questions/79895", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/9095/" ] }
80,045
I was recently rejected from a college that had previously accepted me, on the grounds that I spent a year of high school in a foreign country and the college wasn't interested in recognizing education received in another nation. Because of this, a very generous scholarship has dried up, and financing an education is doubtful. I'm also hesitant to become part of a system that has demonstrated what I consider to be blatant xenophobia. What I want to do is say "screw college", strike out on my own, do something amazing, wow everyone, and become a self-made millionaire. The reality of the situation is that I'm two weeks out of high school, I have about the equivalent of an Intro to Programming course worth of self-taught experience (although I am driven to learn and improve), I still need to pay bills, and I have a sneaking suspicion that any employer is going to have a hard time taking me seriously. As I understand it, it's a fairly popular belief that you can make it without a degree, but how does someone like me do that? Would anyone take me seriously if I walked into their office and said, "I have no formal education and a minimum of skills, but I want to work and I want to learn. Please give me a job."?
I have been in the same position as you, and I chose that 'screw college' road you speak of. I had a love for software development, a C++ hobby on top of a basic HS programming course, and dreams. Now I am a professional developer, so I'll give you my experience. After going to college for 1 year (I had a full scholarship for technical theatre), I figured out that I liked software more than set building. Year 1 - I started my 'own thing' which consisted of desktop support to pay rent, and developing. Developing anything I could make, for anybody that wanted it, at a fraction of the price. Looking back I was probably doing $20,000 applications for $1,000. Starting out on your own really sucks because even if you did have the experience to know it's a $20K app, you don't have the credibility to ask for it. And worst of all, I have no idea what I don't know, and no other developers around me. I created applications that were maintenance nightmares. I had no skill in architecture or design patterns, so I basically made things that blew up and did network support to pay bills. Lots of Taco Bell, mixed with "well, at least I'm not working for the man". I've got some dreams of apps to write and get out to the world, but that has to come after bills, ya'know? Year 2 - Becoming slightly better developer by learning what not to do and watching things blow up in my face. Barely getting by on desktop support, learning servers and making web sites. It must be easier than this working for the man, but I have no real portfolio so, press on. Year 3 - Starting to get the hang of this. When I hit File > New Project, I have some vague idea of where I want to go and how to build things. Still choosing the wrong architectures, web services seem kinda cool, so why not build EVERYTHING with those? Need a calculator desktop app? I'll build a web service! Starting to pick up a few clients and being the IT guy and some software projects along the way. One thing I did do was create a Offsite Backup service using Web Services, so my dream was to be a 'Mozy' while everybody was still swapping tapes. Broadband was just becoming commonplace so I was ahead of the curve, and this was going to be my million-dollar idea. But the service had problems (due to my lack of architecture skills), I had no connections in the industry so no one ever heard about it except for the couple clients I signed up. Year 4 - Finally, a customer believes in me for a long-term project. I manage to do it without screwing up badly; the code isn't great but it works. Starting to get caught up on bills, I get to work with a few other developers (fake it till you make it, right?) and even answering a few Experts Exchange questions. Oh yeah. Year 5 - If you hadn't noticed by now, those dreams in Year 1 still aren't written, so that's starting to get a little depressing. I have a decent portfolio of stuff I've written successfully, got some decent momentum, and a respectable client base. Still don't really know what I don't know, and breaking even. Years 5 - 8 - I'll combine these since it's more of the same of "do a project, learn a little on each, bring that experience to the next one". Today is in the middle of Year 8, and it's only in the last year or two that I've become a good developer. Those dreams in Year 1 have already been invented many times by someone else. In case you hadn't guessed, I didn't create Mozy. Along the way I've had new dreams and new ideas, and some have been good, some have been horrible. 
I now have the skills to make them happen, and some of them are happening, and it's exciting. However, I have a feeling if I would have done things differently I could have shortened this journey quite a bit. I can't speak to how differently college changes this journey; I'll leave that to others on this thread. But the pieces of advice I will give: You need to work with other developers. I didn't realize how important this was. You don't know what you don't know until you look at someone else's code or get a horrible code review. Fail before you have major responsibilities. If you really want to go out on your own, try to do it before you get married, have a house payment, kids, etc. You will fail and you will fail many times. Get used to it and value it as it's the best experience ever. But when your killer app that you just spent all your time and money on doesn't have a single customer, it's a lot easier to recover when it's just you. There is absolutely nothing wrong with bootstrapping. If you've got network skills, go work in a Network Operations Center or help desk (something within the realm of IT), and work on becoming a better developer off-hours and on the weekends. Make connections with people at real jobs. You'll need them later. Be 125% sure that you LOVE software development. The passion for software comes before the 'millionaire' part, not the other way around. If you don't have a passion for this, or your heart doesn't start beating a little faster when you hit New Project, go do something else and keep this as a hobby. I'm sure I could go on, but the funny thing is I saw this question while working on one of those dreams and had to answer this one. :) Good luck.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/80045", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/26522/" ] }
80,069
I volunteered to instruct an after-school computer club at my son's middle school. There has been a lot of interest in computer viruses. I was thinking about showing them how to create a simple batch-file virus that will infect other batch files in the same directory, and also how creating a batch file with the same name as another program, but earlier in the PATH, can replace that program. It could also allow for a discussion of anti-virus techniques - recognizing viruses and virus-like behavior. I mentioned the idea to my wife and she thought it was a terrible idea; she compared it to giving them loaded weapons. I don't see it as dangerous, since this technique wouldn't be immediately applicable for any real mischief on any modern operating system. Am I being too cavalier, or is she being too concerned? This isn't a "settle this argument for me" question; I am just trying to get another opinion. Update: I don't plan to cover moving between systems (or even directories) or any malicious behavior. And lest anyone think I am revealing any deep dark secrets, here is a book from 1996 I found at the library that goes into a lot more detail than I planned to cover. If someone is motivated to be malicious, they will find a way.
I recently found a picture of me at 12 years old, reading a book about computer viruses. It was 1988. Like your students, I was fascinated by them. The next year I started high school and was accused of being the origin of a virus infection on all the computers in the school. Of course, it wasn't me. I was good with computers, so the teachers said it was me. Putting myself back in that time, I can tell you that since I was very well informed about the effects of those viruses, I would never have done such a thing. Why would I do that? Harm people? No way! Therefore I think that the more they are informed about the effects, the less likely they are to use them. But this statement applies to kids who were like me, in a good environment with strong rules and a good education. If you teach computer viruses to students with a history of doing bad things, who are not well educated, or who are troubled, they will certainly use them to do bad things. So it highly depends on the audience: your students.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/80069", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/433/" ] }
80,084
A colleague of mine today committed a class called ThreadLocalFormat , which basically moved instances of Java Format classes into a thread local, since they are not thread safe and "relatively expensive" to create. I wrote a quick test and calculated that I could create 200,000 instances a second, asked him was he creating that many, to which he answered "nowhere near that many". He's a great programmer and everyone on the team is highly skilled so we have no problem understanding the resulting code, but it was clearly a case of optimizing where there is no real need. He backed the code out at my request. What do you think? Is this a case of "premature optimization" and how bad is it really?
It's important to keep in mind the full quote (see below): We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%. What this means is that, in the absence of measured performance issues you shouldn't optimize because you think you will get a performance gain. There are obvious optimizations (like not doing string concatenation inside a tight loop) but anything that isn't a trivially clear optimization should be avoided until it can be measured. The biggest problems with "premature optimization" are that it can introduce unexpected bugs and can be a huge time waster. There is no doubt that the grail of efficiency leads to abuse. Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified. It is often a mistake to make a priori judgements about what parts of a program are really critical, since the universal experience of programmers who have been using measurement tools has been that their intuitive guesses fail. After working with such tools for seven years, I've become convinced that all compilers written from now on should be designed to provide all programmers with feedback indicating what parts of their programs are costing the most; indeed, this feedback should be supplied automatically unless it has been specifically turned off.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/80084", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/22363/" ] }
80,112
I'm having a rather Oatmealesque experience with a particular client's website. The latest 'feature' they have requested is that background music play automatically when the site loads. What should I say to gently convince them that this is a bad idea?
I would introduce them to A/B testing. Then A/B test this feature. If you're not familiar with it, https://www.google.com/analytics/siteopt/splash?hl=en can set it up for free. Alternately http://visualwebsiteoptimizer.com/ and http://www.optimizely.com/ are easier to use. Or you could learn the nuts and bolts of it, for example from the tutorial I did at OSCON a few years back, http://elem.com/~btilly/effective-ab-testing/ . Odds are good that the A/B test will tell them what you already know. If the A/B test doesn't tell them that, then they may be one of the small minority of websites where this feature actually makes sense.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/80112", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/14102/" ] }
80,136
When I build a simple website, e.g. a contact book where I can add, delete and update contacts, I create an index.php file where a user, if he's not logged in, is requested to enter a password and if he enters the right password, he's assigned a session and can do certain things with the contacts. I have two files: The first ( contacts.php ) is for the HTML code to be shown. Above the HTML code I include the second file and create the class. The second ( contacts_class.php ) contains all methods for adding, deleting and updating. I think that's ok, but when it comes to implement a big project, how should I do it? Do I have to create folders for every page and put files in them (like above, HTML and class), and how should I do it? What is a good and neat architecture for building large projects that every other programmer would understand perfectly?
You have raised a very interesting and fundamental question: the question of large-scale project architecture and of folder structure organization (which is secondary to the architecture). Today the most common approach to building a CMS framework architecture is the MVC pattern. There are some good articles about building your own MVC framework; one of them is Build an MVC Framework with PHP. MVC stands for Model, View, Controller. You may call these approaches whatever you like - MVC, HMVC, MVP. The essence is to isolate the individual components of your system. The "Controller" retrieves the data from the "Model" and sends it to the "View", which renders the final HTML. You have already implemented the "V" in your contacts.php and the "MC" in your contacts_class.php. So you have isolated the view from the model and controller. Now you can easily change your "View" while leaving the other parts intact. I am not suggesting that you blindly follow MVC, MVP or whatever other "MV" pattern; it is a matter of appropriateness, efficacy and taste. A common dynamic website application may include components such as: the entry point, say index.php; the helper libraries / classes; the request router; the modules, components or controllers; the template engine or maybe single views. A real web application may include other components like event handlers, event dispatchers and hooks, but these are in fact nuances. Let me present it the way I see it. The common framework operation routine is as follows: 1. The browser request is sent directly to the entry point executable / script (index.php). 2. The entry point script loads the helper libraries and classes and performs some further initialization of the programming environment. 3. The URL is passed to the request router instance; this step can be part of step 2. 4. The request router parses the URL and dispatches the operation to a particular component, module or controller. 5. The component (or controller) processes the routed request and sends the data to the view to be rendered. The corresponding project folder structure follows the same separation. I would suggest that you investigate how other frameworks are implemented. Recommended CMSes / frameworks to begin with are CodeIgniter, OpenCart, Joomla 1.5 and Tango CMS.
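To make the routing step concrete: a front controller is, at its core, just a map from paths to handlers. Here is a minimal sketch (names are illustrative; it is written in Java here, but the PHP version has exactly the same shape):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Minimal front-controller sketch: one entry point, a router that maps URLs to
// controllers, and controllers that return data for a "view" to render.
public class FrontController {

    private final Map<String, Function<Map<String, String>, String>> routes = new HashMap<>();

    public void register(String path, Function<Map<String, String>, String> controller) {
        routes.put(path, controller);
    }

    public String dispatch(String path, Map<String, String> params) {
        // Steps 3-5 in miniature: match the URL, call the controller, return the rendered view.
        return routes.getOrDefault(path, p -> "404 Not Found").apply(params);
    }

    public static void main(String[] args) {
        FrontController app = new FrontController();
        // The "controller" would normally fetch data from the model; here it is hard-coded.
        app.register("/contacts", params -> "<ul><li>Alice</li><li>Bob</li></ul>");
        System.out.println(app.dispatch("/contacts", new HashMap<>()));
    }
}
```

Everything else in a framework (templating, ORM, hooks) hangs off this basic dispatch loop.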
{ "source": [ "https://softwareengineering.stackexchange.com/questions/80136", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/4355/" ] }
80,225
Take this scenario: A programmer creates a language to solve some problem. He then releases this language to help others solve problems like it. Another programmer discovers it's actually much better for some different category of problems. By virtue of this new application, the language then becomes popular for that application primarily. Are there any instances of this actually occurring? Put another way, does the intended purpose of a language have any bearing on how it's actually used, or whether it becomes popular? Is it even important that a language have an advertised purpose?
Lisp. McCarthy originally specified Lisp in a paper to show that a few simple notations are enough to build a Turing-complete language. He was surprised to find that Lisp could be implemented in machine code (Steve Russell did the first Lisp interpreter implementation). Lisp is widely used for AI programming.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/80225", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2107/" ] }
80,439
I see that most of the apps that include heavy GUI content are developed in C++. Most games and browsers are coded in C++. Can't we develop better GUI apps with the latest dynamic languages? I know Java wouldn't be a great choice. But what about languages like Python, which are natively built on C? Aren't the latest languages supposed to be better than their ancestors? Why do we still prefer the age-old C++ over the latest languages? And I would also like to know: what is it in C++ that is responsible for its better GUI processing speed, and what is it that the newer languages lack?
I'm one of those people who write C++ GUI apps (mostly for windows). With Qt, to be precise. My reasons: I like C++. I'm a freelancer and usually I can choose my tools (lucky me!) In a managed environment you may have a hard time when you need to use some unmanaged code (long-winded WinAPI declarations in C#, anyone?) Fewer dependencies that are more easily deployed More control over everything. RAII (vs. GC). And even if I allocate with new , I rarely ever delete anything explicitly, because I use smart pointers or the QObject hierarchy. C++ is very exciting these days, I can't wait for a compiler to fully support the new standard. Speed (only at the end of the list. I know it's not that important for the GUI itself, but it tends to be speedier because C++ programs don't suffer from the overhead that runtimes, byte code JIT-compiling and similar technologies add to the program.) As you can see, these are mostly personal preferences. I find it important for my work to be enjoyable and C++ provides that to me.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/80439", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/23010/" ] }
80,529
I am an entry-level developer with 1 year of experience. I worked on a large-scale project in which I did around 80% of the work; those 5 months were terrible for me - late nights spent working, even Sundays. I worked on the whole process model, did some of my colleagues' work, the DB design, and handled client feedback, but the point is that some of my work was claimed by my team lead - which is why I say 80% of the work was done by me. Now the project is complete, and the client seems completely satisfied with the work. But the company hasn't given me any sort of encouragement or appreciation. Seniors who were not involved in the project were given praise, leave, bonuses, etc. I was also refused permission to attend an important family function, which makes me ask: what credit do I have now? I have been wondering: is being honest and dedicated to the job what led to this situation? I currently have 3 offers with good packages, and I've been thinking of moving on to one of those companies. What's your suggestion at this point?
Taking what you said at face value, and assuming that the seniors didn't spend their nights and weekends fixing or rewriting the code that you wrote ;-) ... ...there is no reason to stay where your work goes unrecognized and unrewarded. Caveat: do not take career advice from strangers on the Internet.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/80529", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/26678/" ] }
80,625
I understand the reasoning behind Joel Spolsky's article "Things You Should Never Do, Part I", but I always see it referenced in regard to situations where the end goal is the production of software. What if I'm a developer who maintains an ecommerce site? My job isn't writing a retail platform, but instead putting one to use. In fact, this wouldn't even be a rewrite, as such, but a big database and web design transition. The software our site is based on is written in classic ASP and is fundamentally missing many features that customers expect from a current shopping site. Rather than continue to add these features piecemeal, my gut feeling is that I should start to transition to a more modern platform. We would lose the customizations that we've made over the years, but frankly, many of these features already exist (and have almost certainly been implemented better!) in the package that I'd like to switch to. Am I falling victim to the spirit of Netscape, or am I right in thinking my time is better spent in places besides making our tools do what we need? To clarify, this is the equivalent of switching blogging platforms for us. Any "development" that I do is essentially rewriting the front end of our website, while the back end is out of my control. Suppose WordPress development had stopped years ago and it was missing "modern" features like commenting, static pages, clean permalinks, etc. Sure, I could write a plug-in to add those things, but after a while, wouldn't it be better to switch platforms to something that had all those (needed) features built in from the beginning?
Before You Upgrade... It could be a good idea if, and only if: it adds features that your customers requested; it doesn't lose any existing features of the current solution (some users will ask for them, and if you're a small company, losing customers is very costly - and might start a wave); it doesn't add invisible/hidden requirements and side-effects; it doesn't come with environmental constraints (or you account for them in your calculations); your new target middleware is actively developed and supported, and will be so for a (long) while; you carefully consider the costs (and see a relatively short-term benefit afterwards) of: development, deployment, training, maintenance & support, maintenance & support, maintenance & support. Make sure that it won't be harder to maintain than it is now. Plus, you do mention your current technology, but not what you'd want to move to; it's even more dangerous if you shift technologies. If You Decide to Upgrade... Be sure to: back up your data, and rehearse a business recovery plan: to revert to your current working status as quickly as possible if things go awry, and to communicate with customers about the (failed) transition and issues, and the status of their data. If you upgrade (and if possible), do it iteratively! Update your database or create a new one, copy over your data, update your backups regularly so you work on recent copies, and back up the old copies as well (safely encrypted, etc.). Develop in a separate environment, test internally a lot, and test with a focus group of some trusted customers. If a complete switch of middleware is necessary, then see if there are any success stories of migrations between these 2 software solutions, and read carefully what other people have done and what pitfalls and road bumps they encountered. Refactor and Monitor Quality If it's only (or almost entirely) about messy code, don't do it. Instead: read my answer on how to organize uncommented, dirty code (it applies to specific code snippets, but also to larger codebases, and the recommended material could help!); just refactor over time; use continuous integration and continuous inspection systems to monitor quality and the impact of the refactoring; and try to reach out to customers to clearly ask what features they are missing, and build a very sound business case around each of them to know whether they're worth small improvement projects. Development is costly, maintenance even more so, and building stuff for no particular reason might be nice to please users, but remember that you will need to maintain and support it. Also, if you're not a software company, do you have enough trained staff to support this task during development and after deployment? What if some of your staff leave in the middle of the task?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/80625", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/26698/" ] }
80,751
We’re roughly midway through our transition from waterfall to agile using scrum; we’ve changed from large teams in technology/discipline silos to smaller cross-functional teams. As expected, the change to agile doesn’t suit everyone. There are a handful of developers that are having a difficult time adjusting to agile. I really want to keep them engaged and challenged, and ultimately enjoying coming to work each day. These are smart, happy, motivated people that I respect on both a personal and a professional level. The basic issue is this: Some developers are primarily motivated by the joy of taking a piece of difficult work, thinking through a design, thinking through potential issues, then solving the problem piece by piece, with only minimal interaction with others, over an extended period of time. They generally complete work to a high level of quality and in a timely way; their work is maintainable and fits with the overall architecture. Transitioning to a cross-functional team that values interaction and shared responsibility for work, and delivery of working functionality within shorter intervals, the teams evolve such that the entire team knocks that difficult problem over. Many people find this to be a positive change; someone that loves to take a problem and own it independently from start to finish loses the opportunity for work like that. This is not an issue with people being open to change. Certainly we’ve seen a few people that don’t like change, but in the cases I’m concerned about, the individuals are good performers, genuinely open to change, they make an effort, they see how the rest of the team is changing and they want to fit in. It’s not a case of someone being difficult or obstructionist, or wanting to hoard the juiciest work. They just don’t find joy in work like they used to. I’m sure we can’t be the only place that hasn’t bumped up on this. How have others approached this? If you’re a developer that is motivated by personally owning a big chunk of work from end to end, and you’ve adjusted to a different way of working, what did it for you?
I will say that there are very few software shops that are fortunate enough to have the rare distinction where Agile truly doesn't make sense as a methodology. If your entire team consists of truly exceptional software developers with a deep understanding of the business and longevity with the company and each other, and if your business requirements and client needs are typically similar and rarely subject to change in the middle of a release, then you are fortunate to work in such a rare environment, where Agile can probably actually HURT. It is designed to be the most effective approach amidst chaotic and constantly changing business requirements and customer needs, evolving or changing project resources, and tight or shifting deadlines. Such an environment spells certain doom for typical waterfall development, as it is vulnerable to team changes mid-project, vulnerable to changing requirements, and extremely vulnerable to a changing date. I feel for your valuable team members who don't find joy in their work anymore. They may very well be exceptionally talented people who engross themselves in their work, but in the end, a number of factors outside their control can still kill the project. The best way to approach feature development is for them to let go of the individual attitude and expression and think in terms of the team approach. If you find that this won't work for them, then you can find special uses for them. If they are exceptionally talented and experienced, see if they would be interested in an architectural role: laying out high-level designs and approaches, experimenting with new technologies, and evolving best practices. Have these people control and review design documentation. If this still doesn't suit them, then perhaps have them work separately on extremely complex technical refactorings on a separate branch, hugely involved prototypes and proofs of concept, or other trailblazing work that sometimes needs to be done but doesn't fit well in the scope of a single project or release.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/80751", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/24488/" ] }
80,826
Given that the current employer knows and has been given appropriate notice ahead of time, how can a programmer make a clean job transition from his old job? What things should the programmer consider? How should one tie up unfinished projects? For example, should one make a list of places they have password access, or a master password list to hand over? When I say "clean transition", I mean one where you leave the company without leaving any messes, drama, or headaches, still enabling the company to hire someone to replace you and continue the work without problems. (For example, suddenly disappearing and not returning is not a clean transition, nor is encrypting all one's code into types of food).
It is advisable to organize your work the entire time in such a fashion that your sudden demise would not wreak havoc on the company's operations. One should always keep this in mind. Keep things clearly and logically structured, report your progress regularly and check things in promptly. What you could do specifically in your last days: prepare a list of credentials that are presumably unknown to those who will replace you; prepare a report concerning your projects and their status; document any open and outstanding issues that you intended to take care of at some future point in time; ask your superiors if there is something specific they want you to document; ask your superiors if there is any tutoring they want you to give to anybody involved. If you were involved in dealing with customers and external parties, it might make sense to notify them of their upcoming contact change. But ask for permission first; sometimes your superiors don't want the customers to be notified of people leaving. That pretty much covers it.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/80826", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/11340/" ] }
80,944
I graduated with a BS in compsci last September, and I've been trying (unsuccessfully) to find a job as a project manager ever since. I fell in love with software engineering (the formal practice behind it all, not just coding) in school, and I've dedicated the last 3-4 years of my life to learning everything I can about project management and gaining experience. I've managed several projects (with teams around 12 people) while in school, and I worked with my university's software engineering research lab. My résumé is also decent - I worked as a programmer before I went to school (I'm 27 now), and I did Google Summer of Code for 3 summers. I also have general "people management" experience via working as the photo editor for my university's newspaper for 2 years. My first problem with the job hunt is not getting enough interviews. I use careers.stackoverflow.com , which is awesome because I usually get contacted by non-HR people who know what they're talking about, but there's just not enough companies using it for me to get interviews on a regular basis. I've also tried sites like monster.com, and in a fit of desperation, I sent out no less than 60 applications to project management positions. I've gotten 3 automated rejection letters and that's it. At least careers.stackoverflow gets me a phone interview with 8/10 places I apply to. But the main (and extremely frustrating) problem is the matter of experience. I've successfully managed projects from start to finish (in my software engineering classes we had real customers come in with a real software need and we built it for them), but I've never had to deal with budgets and money (I know this is why HR people immediately turn me away). Most of these positions require 5+ years PM experience, and I've seen absurd things like 12+ years required. Interviews are also maddening. I've had so many places who absolutely loved me and I made it to the final round of interviews, and I left thinking things went extremely well and they'd consider me. However, when I check in with them a week later, they tell me "We really liked you and your qualifications are excellent, but we're hoping to find someone with more experience." The bad interviews I can understand - like the PM position that would have had me managing developers both locally and overseas - I had 3 interviews with them and the ENTIRE interview process was them asking me CS brainteasers and having me waste time on things like writing quicksort on paper or writing binary search trees. Even when I tried steering the discussion towards more relevant PM stuff, they gave me some vague generic replies and went back to the "We want to be Google/MS" crap. But when I have a GOOD interview, they say my "qualifications are excellent" but they want "more experience"...that makes me want to tear my hair out. What else can I DO? While I'm aiming for technically-involved PM positions (not just crunching budget numbers), I really don't want a straight development job because I like creating software from the very high-level vs. spending a lot of time debugging memory leaks. In fact, I can't even GET development positions that I'm qualified for because I make the mistake of telling them that my future career goals are as PM (which usually results in them saying something like "Well we already have PMs and this position isn't really set up to get you there." 
- which I take to mean "No, that's my job, stay away.") My apologies on the long rant, but I'm seriously hellbent on getting hired as a PM since it's both my career goal and the passion that keeps me awake at night. Any suggestions on what the heck else I can do? I'm currently writing a blog where I talk about my philosophies about software engineering, and I'm writing up specs for an iOS app which I will design, code, and show employers, but this takes an awful lot of time that I don't have.
Experience is (Often) Key Unfortunately, while this may be very frustrating, your skills and the knowledge of project management that you acquired at university or during your previous projects appear insufficient to a lot of people; I know I would be cautious. I can understand the frustration, but there's always the danger that, while you seem like a potentially good project manager for a "perfect situation", there's no way to assess how you'd react when facing difficult times - let alone a crisis. And that's a doubt that, unfortunately, is only vanquished by the reassurance that you've had enough years to break your teeth on these problems. Your biggest problem is, quite simply and paradoxically, that you have never failed yet!! We cannot evaluate what your limits are, what makes you run, what makes you tick, what throws you off. It's Not You: It's Everybody Programmers looking for graduate jobs actually have the same problem, and it applies to many other professions as well: our industry is not that special. Newly graduated programmers will have very good skills: in some areas, even a lot sharper than many of their seniors. Similarly, you may well be more up to date with modern management techniques than older managers. Then again, those new programmers don't have the right gut feelings, the habit of listening to the whooshing sound of deadlines flying by (D. Adams) faster than expected, or the right overview. And in your case, there would probably be situations that you would have a hard time dealing with. Or, at least, that's what people assume. It's frustrating, but there's not much you can do. Interviewing Now the worrying part is more about interviewing. You say you had phone interviews, but how many face-to-face interviews? If you haven't had enough interviews, then that's a much bigger red flag. It means there's really something going wrong during the phone interviews, and you need to reflect on that. You seem to assume you already know what went wrong and to dismiss those interviews. Did you ask for feedback? Did the recruiters have any recommendations? They're sharks who tick checkboxes (to HR people reading this: not you. You're awesome. We love you. Keep it up...), but they do that for a living. They may not understand what you do, but they know what the companies want (or think they want...). Ask for their help. About getting turned down during interviews because you mention that you want to become a PM... you should keep saying that! Maybe you'll get turned down, but you won't end up in a place where you don't want to be. On the other hand, if you can settle for another role for a while, maybe you should bite the bullet and get some more experience there. Before becoming a PM, being a development lead wouldn't be so bad. But ultimately, even from dev lead to PM, the switch is pretty difficult. But First... Getting Those Interviews! Also, if you didn't get that many phone interviews via channels other than Careers.SO, then here's what I'd recommend. Certifications and Training Get a certification for a big, old-fashioned, boring methodology (I can hear angry comments coming in by waves... /me bracing for impact). For instance, look at governmental positions in your area and the kind of certifications they require, and take a course and get certified for those. Maybe you don't like them, but they'll get you more respect from those who have them or want them. Take training courses on other development methodologies, for instance in Agile development. Become a SCRUM Master, maybe.
Take training courses more oriented towards actual business management, involving managing budgets and team resources. It's a bit (very?) scary that you'd think you can handle that part of the job if you've never done it before. You will be responsible for loads of money, and for people's positions and lives. How their resumes will look in the future partly depends on you as well. Do not take that lightly. Find a Good Ramp to Jump-Start your Career: Startups Find a startup or get some experience in a venture or something. A lot of them are less demanding about experience, because they are often started by young technologists and have limited budgets, so they'll be OK with settling. Do examine it carefully before you join, though, because nothing's worse for a young project manager than a first project that tanked. Even though that's common with startups, you want to be able to show that, after only a few years there, you have demonstrated your leadership skills. That it was your strategic guidance that was one of the keys to success. Do you have any acquaintances who work for a local, relatively small not-for-profit organization or charity? These can be good starting points. Recruitment Agencies, Job Sites and Alternatives Use different agencies. Online websites like Monster are fine, but your CV is harvested by everybody, or you apply to positions directly where 400 other people already applied. Use other channels. Did you try to: use Google Search/Bing/other search engines to land on their "jobs" or "careers" pages? Use Google Maps and appropriate keywords to find companies of interest and approach them directly? Target different industries. Some industries have more targeted recruitment techniques. For instance, the IT-for-finance market has, for some reason, agencies that are very active. They tend to work for business people with deep pockets, so they try their best to really find the best. They follow up with you, they prepare you for interviews, they tear your resume apart and help you rebuild it if they think you're worth it. You may dislike what they suggest, but for their industry, consider that they know best. And for other industries, you'll have picked up a few tricks. Get Some Exposure and Contacts Network!! Use your Facebook, your Twitter, your carrier pigeon. Everything!! Carry resumes around. Attend events, conferences. Attend job fairs. Build up a portfolio. Get recommended on LinkedIn. Be ultra-active on Project Management.SE! Fine-tune and focus your resumes and cover letters for each job. Prepare a visual presentation (PowerPoint, video, funky website) of your projects and achievements, maybe even a fictional project to present. If you're applying for a location where you're not already... That's even harder. You want to be a manager, which means you need to be a people person. You need to communicate, drive points through, and be authoritative but also a good listener. I cannot evaluate that over the phone. So, if you cannot go where your future money should be so you can be on-site for interviews, networking is even more important. You need recommendations. Consider Other Positions/Roles Consider being a managing assistant first. You'll get first-hand mentoring, some connections, build up your experience and portfolio, and have the opportunity to both take initiative and make mistakes, with a lesser degree of responsibility and liability. Consider Dev Lead or QA Lead positions. Consider a marketing-related position. You will be more in touch with business aspects.
Basically, consider that any interview you get for a position, even if it's not the right one for you, gets you some contacts at recruitment agencies AND inside some companies. I often recommended candidates who weren't right for the role we were trying to fill to other departments, if I had a hunch they would be good candidates. Don't Give Up 60 applications are nothing if you don't spend a lot of time preparing them. Research and prepare each one. Follow up with the recruiters/employers on LinkedIn, Xing, Viadeo or other professional networks. Be creative! Harsh Possibility: It Might Just NOT Happen (Right Now) You are still very young and just graduated. That year is one of the first things people will see, right after noticing that you have almost 0 professional experience. Sell any previous PM-related experience as well as you can, but don't embellish: be prepared to defend everything you state on your resume. I have to tell you, at the risk of demotivating you, that I think (because of the way you write and some of the things you wrote) that you don't have a clear vision of what PM really involves. Or maybe you do, but unfortunately that's not how you come across. You sound unaware and quite idealistic. And if that's also how you come across during the phone interviews, that's a big question mark that your interviewers write with a big red marker in the margin of your resume. Sorry, I don't mean to sound judgmental or anything, but the recruitment dance is hard from both sides, and I know I would be very reluctant to hire someone with little experience for a PM position. And even worse, actually: I'd be afraid that the people who would then work under you would question my choice and question your abilities when, at the first meeting or the first coffee break, you start introducing your previous projects as only summer or university projects (because it seems that's all you have). You even say in a comment to another answer that: [you have leadership experience] both through project management coursework (undergrad and grad level), and managing ~20 newspaper photographers (not SEng, but still management). [You] do realize that doesn't compare to doing it for 5 years at a tech company, but it's not as though [you] have nothing whatsoever under [your] belt. Yet it does sound like you don't have that much under your belt. There's nothing that really makes you stand out. Coursework is not experience: it's coursework. And you know what coursework is in most universities (and even some good ones)? Horse-doodoo. That's pretty much it. It's a big fat pile of pointers and references to (often outdated) material, assembled by teachers who may not even be that knowledgeable in the first place or have real experience with the things they pretend to teach you. Managing 20 photographers at a newspaper is something, alright. But what did YOU do? What were your responsibilities and tasks? What was your mission, your appointment? What successes did you achieve? You don't present your case very well here, so it might be that you present it in the same (diffuse) light when you advertise yourself. Did you (or a group you were part of) start the paper, or did you take over from a previous team? Did you set up new processes? Train people? Decide the assignments for each of them? Don't get me wrong. Those experiences are valuable: if you didn't have them on your resume, you probably wouldn't be considered at all. But they don't qualify you for that sort of position.
As mentioned above: there might be hundreds of people trying to apply for the PM positions you look at. You may well be in the upper half of the pile, and the ones without any sort of experience who are just trying their luck are in the lower half. Now you get to really sell your way through to the top 5 for a face-to-face interview, make a damn good pitch, and show that not only are you good at what you do, but that you'll get better and won't be surprised by the things you don't even know yet.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/80944", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/26789/" ] }
80,962
For a distributed team that uses Git and GitHub as version control, should images also be stored in the git repository? The images in question are small/medium-sized web-friendly images. For the most part, the images won't be changed. The folder containing them will only grow in size as images are added. A concern is that the image folder may grow to a large size over time by combination of large images or a lot of images. Is this considered a best practice? What other alternatives are there to sharing binary files needed in projects that a distributed team can easily access?
Are your images original work or can they be recovered (guaranteed?) from elsewhere? Are they needed to ship a software unit built from source? If they are original, they need backing up. Put them in your revision control, if they never change, the space penalty is the same as a backup, and they are where you need them. Can they be edited to change the appearance of the software, accidentally or intentionally? Yes - then they MUST be revision controlled somehow, why use another way when you have a perfect solution already. Why introduce "copy and rename" version control from the dark ages? I have seen an entire project's original artwork go "poof" when the graphics designer's MacBook hard drive died, all because someone, with infinite wisdom, decided that "binaries don't belong in rev control", and graphics designers (at least this one) don't tend to be good with backups. Same applies to any and all binary files that fit the above criteria. The only reason not to is disk space. I am afraid at $100/terabyte, that excuse is wearing a bit thin.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/80962", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/14/" ] }
81,003
I am a fairly good programmer, and my boss is also a fairly good programmer. However, he seems to underestimate some tasks, such as multi-threading and how difficult it can be (I find anything more than running a few threads, waiting for all of them to finish, and then returning results very difficult). The moment you have to start worrying about deadlocks and race conditions, I find it very difficult, but my boss doesn't seem to appreciate this - I don't think he has ever come across it. "Just slap a lock on it" is pretty much the attitude. So how can I introduce him to, or explain why he might be underestimating, the complexities of concurrency, parallelism, and multi-threading? Or maybe I am wrong? Edit: Just a bit on what he has done - loop through a list and, for each item in that list, create a thread that executes a database update command based on the information in that item. I'm not sure how he controlled how many threads executed at once; I guess he must have added them to a queue if there were too many running (he wouldn't have used a semaphore).
If you can count on any mathematical experience, illustrate how a normal execution flow that is essentially deterministic becomes not just nondeterministic with several threads, but exponentially complex, because you have to make sure every possible interleaving of machine instructions will still do the right thing. A simple example of a lost update or dirty read situation is often an eye-opener. "Slap a lock on it" is the trivial solution... it solves all your problems if you're not concerned about performance. Try to illustrate how much of a performance hit it would be if, for instance, Amazon had to lock the entire east coast whenever someone in Atlanta orders one book!
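A minimal lost-update demonstration you could show him (a hypothetical example; any shared mutable counter will do):

```java
// Two threads each add 1,000,000 to a shared counter without synchronization.
// Because counter++ is a read-modify-write, interleavings silently lose updates,
// and the printed total is almost never 2,000,000.
public class LostUpdateDemo {

    static int counter = 0; // shared, unsynchronized state

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 1_000_000; i++) {
                counter++;
            }
        };
        Thread a = new Thread(work);
        Thread b = new Thread(work);
        a.start();
        b.start();
        a.join();
        b.join();
        System.out.println(counter); // typically far below 2,000,000
    }
}
```

The "just slap a lock on it" fix (synchronizing every increment) restores correctness, which then sets up the Amazon-style discussion about what that lock costs under contention.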
{ "source": [ "https://softwareengineering.stackexchange.com/questions/81003", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/6470/" ] }
81,040
Here's my predicament. One of several programs I've recently inherited is built with a horrible database on the backend. The esteemed creators of it apparently did not appreciate relational concepts. A table for each and every client, named as a unique client ID. Eighty-three cryptically named fields. The code is all procedural with dozens of concatenated inline SQL statements. As we weren't provided with an important ancillary application that runs off the same database, I've been tasked with recreating it from scratch. I'm a sole developer, which isn't even my primary responsibility as at least half of my time is taken up by operations stuff. There's an unavoidable deadline set for 30 days from now. Despite my inexperience, I'm certain I could have designed this database and existing application much better than they were, but I don't really think it's realistic for me to alter the database, adjust the existing application, and be sure I didn't break anything while needing to create the additional application this quickly. So let's assume I am stuck with the terrible database. Needing to work with such a bad structure, would anything I write that conforms to it just add to the heaping pile of technical debt to be shelved away until something completely breaks or new functionality is needed? How could I approach this situation and get something good out of it besides a hopefully functional application? edit: In case anyone's interested, we ended up scrapping this horrible database and the application that ran on it. We outsourced the creation of the ancillary application (I wasn't involved in setting this up) to ultimately two different contractors who both ended up falling through on us, accomplishing nothing. I ended up having to rush out a horrific, partially functional hack of a fix in three days that's still in use today.
There's hope, but it's an uphill battle, especially if nobody realizes the database design is horrible. You can try to abstract the nastiness away with abstraction layers, but chances are it won't be worth the battle. My advice would be to create enough abstractions over the database that the application itself is clean and properly designed; that way if you can ever fix the database, the application won't be affected since it doesn't care how the database was designed. This is the approach I normally use when dealing with a database that is in place and, more often than not, designed with zero thought. A few choice applications of the Repository or Gateway patterns, with some service layers to talk to the gateway/repository, should help to quarantine the poor design.
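A rough sketch of the quarantine idea (all names here are hypothetical; the point is that only one class knows about the one-table-per-client layout and the cryptic column names):

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// The application codes against this interface and never sees the legacy schema.
interface ClientOrderRepository {
    List<Map<String, Object>> ordersFor(String clientId) throws SQLException;
}

// One implementation hides the "table per client, 83 cryptic fields" mess.
class LegacyClientOrderRepository implements ClientOrderRepository {

    private final Connection conn;

    LegacyClientOrderRepository(Connection conn) {
        this.conn = conn;
    }

    @Override
    public List<Map<String, Object>> ordersFor(String clientId) throws SQLException {
        // In the legacy schema the table name *is* the client id; keep that fact here
        // and nowhere else, and whitelist it since identifiers can't be parameterized.
        if (!clientId.matches("[A-Za-z0-9_]+")) {
            throw new IllegalArgumentException("Unexpected client id: " + clientId);
        }
        String sql = "SELECT f07, f23 FROM " + clientId; // hypothetical cryptic columns
        try (Statement st = conn.createStatement(); ResultSet rs = st.executeQuery(sql)) {
            List<Map<String, Object>> rows = new ArrayList<>();
            while (rs.next()) {
                Map<String, Object> row = new HashMap<>();
                row.put("orderDate", rs.getObject("f07")); // translated to sane names here
                row.put("total", rs.getObject("f23"));
                rows.add(row);
            }
            return rows;
        }
    }
}
```

If the schema is ever fixed later, only the implementation behind the interface has to change.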
{ "source": [ "https://softwareengineering.stackexchange.com/questions/81040", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/5747/" ] }
81,083
I'm planning to work/start on a few personal projects that could end up as my daily job. It made me think, which way should I start? Just prototype—write just working basic code that could cost me tons of time optimizing and refactoring for easy expansion. Write clean, optimized and documented code from the very beginning, keeping in mind that if after some time it won't be cost-effective, it will be dropped. Update: Combining YAGNI with sunpech and M.Sameer answers makes perfect sense to me :) thank you everyone for help.
There is a third option ... write clean code via test-driven development to implement the requirements that are needed today, because YAGNI. The temptation to write code that isn't necessary at the moment but might be in the future suffers from several disadvantages ... from You ain't gonna need it : The time spent is taken from adding, testing or improving necessary functionality. The new features must be debugged, documented, and supported. Any new feature imposes constraints on what can be done in the future, so an unnecessary feature now may prevent implementing a necessary feature later. Until the feature is actually needed, it is difficult to fully define what it should do and to test it. If the new feature is not properly defined and tested, it may not work correctly, even if it eventually is needed. It leads to code bloat; the software becomes larger and more complicated. Unless there are specifications and some kind of revision control, the feature may not be known to programmers who could make use of it. Adding the new feature may suggest other new features. If these new features are implemented as well, this may result in a snowball effect towards creeping featurism. As a result, you should not just prototype ... nor should you write clean, optimized and documented code from the very beginning while keeping in mind that it will be dropped if it turns out not to be cost-effective. Write the code that you need now, knowing that you are then best able to meet the needs of today and tomorrow.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/81083", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/12128/" ] }
81,197
I have been reading The Early History of Smalltalk and there are a few mentions of "assignment" which make me question my understanding of its meaning: Though OOP came from many motivations, two were central. The large scale one was to find a better module scheme for complex systems involving hiding of details, and the small scale one was to find a more flexible version of assignment, and then to try to eliminate it altogether. (from 1960-66--Early OOP and other formative ideas of the sixties , Section I) What I got from Simula was that you could now replace bindings and assignment with goals . The last thing you wanted any programmer to do is mess with internal state even if presented figuratively. Instead, the objects should be presented as site of higher level behaviors more appropriate for use as dynamic components . (...) It is unfortunate that much of what is called "object-oriented programming" today is simply old style programming with fancier constructs. Many programs are loaded with "assignment-style" operations now done by more expensive attached procedures. (from "Object-oriented" Style , Section IV) Am I correct in interpreting the intent as being that objects are meant to be façades and any method (or "message") whose purpose is to set an instance variable on an object (i.e. an "assignment") is defeating the purpose? This interpretation appears to be supported by two later statements in Section IV: Four techniques used together--persistent state, polymorphism, instantiation, and methods-as-goals for the object--account for much of the power. None of these require an "object-oriented language" to be employed--ALGOL 68 can almost be turned to this style--and OOPL merely focuses the designer's mind in a particular fruitful direction. However, doing encapsulation right is a commitment not just to abstraction of state, but to eliminate state oriented metaphors from programming. ...and: Assignment statements--even abstract ones--express very low-level goals, and more of them will be needed to get anything done. Generally, we don't want the programmer to be messing around with state, whether simulated or not. Would it be fair to say that opaque, immutable instances are being encouraged here? Or is it simply direct state changes that are discouraged? For example, if I have a BankAccount class, it's OK to have GetBalance , Deposit and Withdraw instance methods/messages; just make sure there isn't a SetBalance instance method/message?
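To make that last point concrete, this is the sort of shape I have in mind (sketched in Java rather than Smalltalk, purely for illustration; the invariants are my own example):

```java
import java.math.BigDecimal;

// Illustrative only: the balance can change over time, but only through
// behaviors that preserve the account's invariants. There is deliberately
// no setBalance() message.
public final class BankAccount {

    private BigDecimal balance = BigDecimal.ZERO;

    public BigDecimal getBalance() {
        return balance;
    }

    public void deposit(BigDecimal amount) {
        requirePositive(amount);
        balance = balance.add(amount);
    }

    public void withdraw(BigDecimal amount) {
        requirePositive(amount);
        if (balance.compareTo(amount) < 0) {
            throw new IllegalStateException("insufficient funds");
        }
        balance = balance.subtract(amount);
    }

    private static void requirePositive(BigDecimal amount) {
        if (amount.signum() <= 0) {
            throw new IllegalArgumentException("amount must be positive");
        }
    }
}
```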
The basic idea (influenced by Sketchpad) is that most variables/values are in dynamic -relationships- with each other (maintained by the interior of the object), so being able to directly reset a value from the outside is dangerous. Because (in Smalltalk anyway) there is at least a setter method required, this allows the possibility of an outside setting action to be mediated by the internal method to maintain the desired interrelationships. But most people who use setters simply use them to simulate direct assignments to interior variables, and this violates the spirit and intent of real OOP. But objects do have "world lines" of changes in time. This can be thought of as a -history- of versions of the object in which the -relationships- are in accord. There are no race conditions in this scheme ... an object is only visible when it is stable and no longer computing. This is like a two-phase clock in HW. (Idea from Strachey, and in a different form by McCarthy, and influenced by Lucid.) Best wishes, Alan Kay
{ "source": [ "https://softwareengineering.stackexchange.com/questions/81197", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/17631/" ] }
81,202
One of the things that makes Ruby shine is its ability to create Domain Specific Languages, like Sinatra, RSpec, Rake, and Ruby on Rails' ActiveRecord. Though one can duplicate these libraries in LISP through macros, I think Ruby's implementation is more elegant. Nonetheless, I think there are cases where LISP's macros can be better than Ruby, though I could not think of one. So, in what areas are LISP's macros better than Ruby's "ability" to create DSLs, if any? Update: I've asked this because modern programming languages are approaching the LISP singularity: C got a macro expansion preprocessor, though it is very primitive and prone to error; C# has attributes, though these are read-only and exposed through reflection; Python added decorators, which can modify the behavior of functions (and of classes as of 3.0), though they feel quite limited; Ruby's TMTOWTDI makes for elegant DSLs, if care is applied, but in the Ruby way. I was wondering whether LISP's macros are only applicable to special cases, and whether the other languages' features are powerful enough to raise the level of abstraction to meet today's challenges in software development.
Read On Lisp and then decide for yourself. My summary is that Ruby is better at providing convenient syntax. But Lisp wins, hands down, at the ability to create new abstractions, and then to layer abstraction on abstraction. But you need to see Lisp in practice to understand that point. Hence the book recommendation.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/81202", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/12773/" ] }
81,221
Context: I am looking for work as a contractor in my field of expertise and would prefer to earn less money and have fewer benefits working as a short-term contractor. The problem: Companies say I need to be an in-house employee for intellectual property reasons. The question: Wouldn't it be easy for them to just have me sign a contract saying all my work is owned by them? If I work as a contractor, they can let me go when my value diminishes and save lots of money. What am I missing here?
It's cheaper. Hiring people is generally much cheaper than paying normal contractor rates; not many companies will come right out and say this, though, so they state a number of "non-reasons".
{ "source": [ "https://softwareengineering.stackexchange.com/questions/81221", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/26883/" ] }
81,233
Even as a student I am asked to review the code of programmers who have (not) passed a test (create a list of Fibonacci numbers on Android). While I am very strict on coding style, I just read about the "block" style someone used (read the comments!). In my position I would recommend not hiring a guy using this kind of style. The code is completely the opposite of the coding style used in my firm. While searching for coding style and how to deal with a lack of it, I am curious about one thing: Should I hire a guy that will have serious trouble adapting to the coding style used in the firm? Please: This should not be a discussion about coding style in general and which is better. It's about the importance of coding style for the decision to hire someone! More information: I am not the guy that makes the decision; I just give my opinion based on code. The guy has to pass an interview where our head of whatever checks the soft skills. If he passes that, he has to pass our little skill test, and that is where I am sometimes asked to review the written code. I am not in the position to say yes or no. I just want to know how important coding style should be for my review...
How do you know that (s)he will have trouble adapting? Just because they use a different coding style? That's pretty presumptuous. I have been a contractor for a long time, and no matter what coding style is used, you adapt. It may take some time, but the habits form pretty quickly. I do hope that by coding style you do not just mean indentation and layout of the code. That is easily dealt with using a code formatter and integrating that into your version control system. Taking coding style to mean things like naming, general ordering, unit separation, and everything else that deals with readability and maintainability, the most important thing about coding style is that you have one. Not which one. Not having a coding style is a definite red flag. The second most important thing about whatever coding style someone uses, is that they use it consistently. When someone seems to use a coding style, but frequently "sins" against it, that is another definite red flag.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/81233", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/7846/" ] }
81,266
This might be a philosophical kind of question, but I believe that there is an objective answer to it. If you read the wikipedia article about Haskell, you can find the following: The language is rooted in the observations of Haskell Curry and his intellectual descendants, that "a proof is a program; the formula it proves is a type for the program" Now, what I'm asking is: doesn't this really apply to pretty much all the programming languages? What feature (or set of features) of Haskell makes it compliant with this statement? In other words, what are the noticeable ways in which this statement affected the design of the language?
The essential concept applies universally in some fashion, yes, but rarely in a useful manner. To start with, from the type theory perspective this assumes, "dynamic" languages are best regarded as having a single type, which contains (among other things) metadata about the nature of the value the programmer sees, including what these dynamic languages would call a "type" themselves (which is not the same thing, conceptually). Any such proofs are likely to be uninteresting, so this concept is mostly relevant to languages with static type systems. Additionally, many languages that allegedly have a "static type system" must be regarded as dynamic in practice, in this context, because they permit inspection and conversion of types at run-time. In particular, this means any language with built-in, by-default support for "reflection" or such. C#, for instance. Haskell is unusual in how much information it expects a type to provide--in particular, functions cannot depend on any value other than the ones specified as its arguments. In a language with mutable global variables, on the other hand, any function can (potentially) inspect those values and change behavior accordingly. So a Haskell function with type A -> B can be regarded as a miniature program proving that A implies B ; an equivalent function in many other languages would only tell us that A and whatever global state is in scope combined imply B . Note that while Haskell does have support for things like reflection and dynamic types, use of such features must be indicated in the type signature of a function; likewise for use of global state. Neither is available by default. There are ways to break things in Haskell as well, e.g. by allowing runtime exceptions, or using non-standard primitive operations provided by the compiler, but those come with a strong expectation that they will only be used with full understanding in ways that won't damage the meaning of external code. In theory the same could be said of other languages, but in practice with most other languages it is both more difficult to accomplish things without "cheating", and less frowned-upon to "cheat". And of course in true "dynamic" languages the whole thing remains irrelevant. The concept can be taken much further than it is in Haskell, as well.
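To make the proofs-as-programs reading concrete, here is the smallest worked example I know of, written out as a proof (in Lean, for illustration); the corresponding Haskell definition compose f g = \a -> g (f a) has type (a -> b) -> (b -> c) -> a -> c and is the same term:

```lean
-- Function composition read as the hypothetical syllogism:
-- given programs (proofs) of A → B and B → C, we can construct one of A → C.
theorem compose {A B C : Prop} (f : A → B) (g : B → C) : A → C :=
  fun a => g (f a)
```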
{ "source": [ "https://softwareengineering.stackexchange.com/questions/81266", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/-1/" ] }
81,276
I usually read a lot about programming-related stuff. When the topic I read about is not directly related to my work (say, some algorithms which I don't use in everyday work), I forget it. One way to reinforce what has been learnt is to write blogs. I am a beginner at writing blogs. When I started writing, I figured out that it is very tough. Even simply reproducing the content takes about two hours. Writing a well-thought-out blog post often takes a whole day or sometimes a weekend. Is this normal? Any tips for writing technical tutorials/technical blogs?
Writing is hard; good writing is even harder. The thing about trying to explain something is that it requires more knowledge than just "kinda knowing it". I find that by blogging I 1. find out about related topics that I need to understand, and 2. identify where my understanding is shallow. Read through these slides: http://www.ai.uga.edu/mc/WriteThinkLearn.pdf . They explain a lot about writing (and even compare it to programming).
{ "source": [ "https://softwareengineering.stackexchange.com/questions/81276", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/17887/" ] }
81,293
How can I encourage my coworkers to track the time they spend resolving issues and implementing features? We have software to do this, but they just don't enter the numbers. I want the team to get better at providing project estimates by comparing our past estimates to actual time spent. I suspect that my coworkers don't see the personal benefit, since they're not often involved in project scheduling.
I suspect that my coworkers don't see the personal benefit, since they're not often involved in project scheduling. That's fixable. Make them involved in scheduling.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/81293", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/17261/" ] }
81,355
Sony was recently hacked via SQL injection, and their users' passwords were stored in plain text. These are rookie mistakes. In such a large company, how does this pass QA? How do their teams not know better than this? The sheer size of the company that was hacked makes this different. It affects all of us, because we may all one day find ourselves on a team that is responsible for something like this, and then we get the ax. So what are the factors that lead to this, and how do we prevent them?
The first thing that comes to mind: because they're big enough to have grown a few layers of bureaucracy. This means, among other things, that you no longer have really smart coders in charge of the hiring process, which means they lose the ability to weed out incompetent candidates for programming and QA positions. Which leads to bad code getting written and making it into production, and we all know what happens next...
{ "source": [ "https://softwareengineering.stackexchange.com/questions/81355", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/14969/" ] }
81,370
You are in the process of looking for candidates for a software development position: all the resumes are reviewed, and you have made a couple of interview invites. Now the folks show up in the conference room onsite, and you begin the back-and-forth, talking about past experience, reviewing the resume, personal development interests, etc. In your experience of hiring, what were the responses (and questions) that you wish you had picked up on at the time, ones that might have stopped you from hiring a poor candidate? I am looking for some red flags to watch out for, and hoping to be discerning enough.
The only thing I know for sure is that there's a correlation between obfuscating, avoidant, yet overly confident answers and my desire to not hire the candidate. This is my personal "red flag". Some candidates don't fully answer questions in a satisfactory way; instead they will verbally dance around a pseudo-answer. Above all, the goal of these candidates is to never say I don't know . They'll use buzzwords, but they'll also use other strategies to try to appear intelligent and knowledgeable. They'll also refer to some project they were on in the past but can't describe very well exactly what it was or how it worked, though they'll emphasize how difficult it was. They'll appear to have a very confident demeanor despite an inability or lack of desire to dive into the technical details. They'll be really good at getting the managers excited about hiring them, but the devs have a hard time making heads or tails of them. They will never use the phrase "I don't know". They're good at not admitting they don't know something, so I can never say for sure they're 100% bad, but I never feel comfortable recommending someone unless I feel I learned something about that person and their work. I usually have a very strong positive reaction or a grumbling "maybe!?!", and I've just learned not to recommend the "maybes".
{ "source": [ "https://softwareengineering.stackexchange.com/questions/81370", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/26899/" ] }
81,427
There are few academic papers addressing the relationship between lean software development and the practices covered by ISO 9001. Most articles say that the divergence between these approaches is big, but some also point out that the concepts can be complementary and that the gains are much higher when using both approaches. Academically it all looks very beautiful, but is it in practice? So here's the question: do you work, or have you worked, at companies applying both Agile and ISO 9001? What is your perception? What is really good and what is inappropriate?
ISO9001 gets a bit of rough press because most companies try to get audited and fail the first time, then make the mistake of increasing their process documentation. But that's never the point of ISO9001. I have, in a previous life, been an internal ISO9001 auditor. While I leave that off my CV nowadays, what they do is pretty simple: ask someone what they're doing; ask them how they know that is what they should be doing; this should lead to some documentation, which should match. This should be easy in Agile. You should have your processes documented on a wiki, but they should be very simple and lightweight. That should be enough for an auditor. Related anecdote: Back in the day, I was with a company that was trying to get ISO9001. They paid for my accreditation. After several failed attempts, the way we did it was to rip up the 19 ring-bound folders of process documentation (I kid you not, it was 2 whole shelves, in which none of us could find anything when challenged) and bring it all down to one less-than-full folder of useful docs. ISO9001 doesn't insist on masses of process, just that you have enough and that the processes you have are followed.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/81427", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/25059/" ] }
81,504
I use MS Visio for most of my design/architecting work when I need to be able to save the diagram somewhere and edit it later. I'm not the biggest fan of Visio, but it gets the job done (and it's free at work). I was wondering if there were any good alternatives to the fairly expensive Visio software, maybe something even better, that you guys have used in the past and were comfortable with. I'd certainly like to have that program in my toolbox!
I use yEd. It's freely available for all major platforms and has neat tools for automatic diagram layouts. yEd is a free-of-charge general-purpose diagramming program with a multi-document interface. It is a cross-platform application written in Java that runs on Windows, Linux, Mac OS, and other platforms that support the JVM. yEd can be used to draw many different types of diagrams, including flowcharts, network diagrams, UML diagrams, BPMN diagrams, mind maps, organization charts, and Entity Relationship diagrams. yEd also allows the use of custom vector and raster graphics as diagram elements. yEd loads and saves diagrams from/to GraphML, an XML-based format. The application can print diagrams, including very large diagrams that span multiple pages...
{ "source": [ "https://softwareengineering.stackexchange.com/questions/81504", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/25883/" ] }
81,624
This is almost embarrassing to ask... I have a degree in Computer Science (and a second one in progress). I've worked as a full-time .NET Developer for nearly five years. I generally seem competent at what I do. But I Don't Know How Computers Work! Please, bear with me for a second. A quick Google of 'How a Computer Works' will yield lots and lots of results, but I struggled to find one that really answered what I'm looking for. I realize this is a huge, huge question, so really, if you can just give me some keywords or some direction. I know there are components....the power supply, the motherboard, ram, CPU, etc...and I get the 'general idea' of what they do. But I really don't understand how you go from a line of code like Console.Readline() in .NET (or Java or C++) and have it actually do stuff. Sure, I'm vaguely aware of MSIL (in the case of .NET), and that some magic happens with the JIT compiler and it turns into native code (I think). I'm told Java is similar, and C++ cuts out the middle step. I've done some mainframe assembly; it was a few years back now. I remember there were some instructions and some CPU registers, and I wrote code....and then some magic happened....and my program would work (or crash). From what I understand, an 'Emulator' would simulate what happens when you call an instruction and it would update the CPU registers; but what makes those instructions work the way they do? Does this turn into an Electronics question and not a 'Computer' question? I'm guessing there isn't any practical reason for me to understand this, but I feel like I should be able to. (Yes, this is what happens when you spend a day with a small child. It takes them about 10 minutes and five iterations of asking 'Why?' for you to realize how much you don't know)
I will start from the lowest level that might be relevant (I could start even lower, but it would probably be irrelevant), going from atoms, to electricity, to transistors, to logic gates, to integrated circuits (chip/CPU), and finishing at assembly (I'll assume you are familiar with the higher levels).

In the Beginning

Atoms

An atom is a structure composed of electrons, protons, and neutrons (which are themselves composed of elementary particles). The most interesting parts of the atom for computers and electronics are the electrons, because electrons are mobile (i.e. they can move around relatively easily, unlike protons and neutrons, which are more difficult to move) and they can free-float by themselves without being held inside an atom.

Usually, each atom has an equal number of protons and electrons; we call this the "neutral" state. As it happens, it is possible for an atom to lose or gain extra electrons. Atoms in this unbalanced state are said to be "positively charged" (more protons than electrons) or "negatively charged" (more electrons than protons) respectively.

Electrons are unconstructible and indestructible (not so in quantum mechanics, but that's irrelevant for our purpose); so if an atom loses an electron, some other atom nearby has to receive the extra electron, or the electron has to be released as a free-floating electron. Conversely, since electrons are unconstructible, to gain an extra electron an atom has to sap it off nearby atoms or capture a free-floating electron. The mechanics of electrons is such that if there is a negatively-charged atom near a positively-charged atom, some electrons will migrate until both atoms have the same charge.

Electricity

Electricity is just a flow of electrons from an area with a very high number of negatively-charged atoms to an area with a very high number of positively-charged atoms. Certain chemical reactions can create a situation where we have one node with lots of negatively-charged atoms (called the "anode") and another node with lots of positively-charged atoms (called the "cathode"). If we connect two oppositely charged nodes with a wire, masses of electrons will flow from the anode to the cathode, and this flow is what we call "electric current".

Not all wires can transmit electrons equally easily: electrons flow much more easily in "conducting" materials than in "resistant" materials. A "conducting" material has low electrical resistance (e.g. the copper wires in cables) and a "resistant" material has high electrical resistance (e.g. rubber cable insulation). Some interesting materials are called semiconductors (e.g. silicon), because they can alter their resistance easily: under certain conditions a semiconductor might act as a conductor, and under other conditions it might turn into a resistor.

Electricity always prefers to flow through the material with the least resistance, so if a cathode and anode are connected with two wires, one having very high resistance and the other very low resistance, the majority of electrons will flow through the low-resistance cable and nearly none through the high-resistance material.

The Middle Age

Switches and Transistors

Switches/flip-flops are like your regular light switches: a switch can be placed between two pieces of wire to cut off and/or restore the flow of electricity.
Transistors work exactly the same way as a light switch, except that instead of physically connecting and disconnecting wires, a transistor connects/disconnects the flow of electricity by altering its resistance depending on whether there is electricity in the base node. And, as you might have already guessed, transistors are made from semiconductors, because we can alter a semiconductor to become either a resistor or a conductor to connect or disconnect electric currents.

One common type of transistor, the NPN Bipolar Junction Transistor (BJT), has three nodes: "base", "collector", and "emitter". In an NPN BJT, electricity can flow from the "emitter" node to the "collector" node only when the "base" node is charged. When the base node is not charged, practically no electrons can flow through; when the base node is charged, electrons can flow between the emitter and the collector.

The behavior of a transistor

(I highly suggest you read through this before continuing, as it can explain things better than I can, with interactive graphics.)

Let's say we have a transistor connected to an electric source at its base and collector, and then we wire up an output cable near its collector (see Figure 3 in http://www.spsu.edu/cs/faculty/bbrown/web_lectures/transistors/ ).

When we apply electricity to neither base nor collector, no electricity can flow at all, since there is no electricity to talk about:

B C | E O
0 0 | 0 0

When we apply electricity to the collector but not the base, electricity cannot flow to the emitter, since the base becomes a high-resistance material, so the electricity escapes to the output wire:

B C | E O
0 1 | 0 1

When we apply electricity to the base but not the collector, again no electricity can flow, since there is no charge difference between the collector and the emitter:

B C | E O
1 0 | 0 0

When we apply electricity to both base and collector, we get electricity flowing through the transistor, but since the transistor now has lower resistance than the output wire, nearly no electricity flows through the output wire:

B C | E O
1 1 | 1 0

Logic Gates

When we connect the emitter of one transistor (E1) to the collector of another transistor (C2), and then we connect an output near the base of the first transistor (O) (see Figure 4 in http://www.spsu.edu/cs/faculty/bbrown/web_lectures/transistors/ ), then something interesting happens. Let's also say we always apply electricity to the collector of the first transistor (C1), so we only play around with the base nodes of the transistors (B1, B2):

B1 B2 C1 E1/C2 | E2 O
---------------+-----
 0  0  1   0   |  0 1
 0  1  1   0   |  0 1
 1  0  1   0   |  0 1
 1  1  1   1   |  1 0

Let's summarize the table so we only see B1, B2, and O:

B1 B2 | O
------+---
 0  0 | 1
 0  1 | 1
 1  0 | 1
 1  1 | 0

Lo and behold: if you're familiar with Boolean logic and/or logic gates, you should notice that this is precisely the NAND gate. And you might also know that NAND (as well as NOR) is functionally complete, i.e. using NAND only, you can construct all the other logic gates and the rest of the truth tables. In other words, you can design a whole computer chip using NAND gates alone. In fact, most CPUs are (or used to be?) designed using NAND only, since it is cheaper to manufacture than using a combination of NAND, NOR, AND, OR, etc.

Deriving the other boolean operators from NAND

I will not describe how to make all the boolean operators, only the NOT and AND gates; you can find the rest somewhere else.
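Before moving on to those derivations, here is a toy model of the two-transistor circuit above, written as a few lines of Haskell. It is an illustrative assumption on my part (a handful of boolean functions, not a circuit simulator), but printing its truth table reproduces the summary we just arrived at:

    -- An NPN transistor, reduced to a boolean toy: current reaches the
    -- emitter only while both the base and the collector are charged.
    transistor :: Bool -> Bool -> Bool   -- base -> collector -> emitter
    transistor base collector = base && collector

    -- Keep C1 powered, feed E1 into C2, and read the output wire O:
    -- current escapes to O unless both transistors conduct.
    circuit :: Bool -> Bool -> Bool      -- B1 -> B2 -> O
    circuit b1 b2 = not (transistor b2 (transistor b1 True))

    -- Prints (B1, B2, O) for all four input combinations: a NAND gate.
    main :: IO ()
    main = mapM_ print [ (b1, b2, circuit b1 b2)
                       | b1 <- [False, True], b2 <- [False, True] ]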
Given a NAND operator, we can construct a NOT gate:

Given one input B
O = NAND(B, B)
Output O

Given the NAND and NOT operators, we can construct an AND gate:

Given two inputs B1, B2
C = NAND(B1, B2)
O = NOT(C) // or NAND(C, C)
Output O

We can construct other logic gates in a similar way. Since the NAND gate is functionally complete, it is also possible to construct logic gates with more than 2 inputs and more than 1 output; I'm not going to discuss how to construct such logic gates here.

Enlightenment Age

Building a Turing Machine from Boolean Gates

A CPU is just a more complicated version of a Turing Machine. The CPU registers are the Turing Machine's internal state, and the RAM is the Turing Machine's tape. A Turing Machine (CPU) can do the following things:

read a 0 or 1 from the tape (read a cell of memory from RAM)
change its internal state (change its registers)
move left or right (read multiple positions from the RAM)
write a 0 or 1 to the tape (write to a cell of memory in RAM)

For our purpose, we're building Wolfram's 2-state 3-symbol Turing Machine using combinatorial logic (modern CPUs would use microcode, but they're more complex than is necessary for our purpose). The state table of Wolfram's (2,3) Turing Machine is as follows:

       A        B
0   P1,R,B   P2,L,A
1   P2,L,A   P2,R,B
2   P1,L,A   P0,R,A

We want to re-encode the state table above as a truth table:

Let I1,I2 be the input from the tape reader (0 = (0,0), 1 = (0,1), 2 = (1,0))
Let O1,O2 be the tape writer (symbol encoding same as I1,I2)
Let M be connected to the machine's motor (0 = move left, 1 = move right)
Let R be the machine's internal state (A = 0, B = 1)
(R(t) is the machine's internal state at timestep t, R(t+1) at timestep t+1)
(Note that we use two inputs and two outputs since this is a 3-symbol Turing machine.)

  R        0            1
I1,I2
(0,0)  (0,1),1,1    (1,0),0,0
(0,1)  (1,0),0,0    (1,0),1,1
(1,0)  (0,1),0,0    (0,0),1,0

The truth table for the state table above:

I1 I2 R(t) | O1 O2 M R(t+1)
-----------+---------------
 0  0  0   |  0  1  1  1
 0  0  1   |  1  0  0  0
 0  1  0   |  1  0  0  0
 0  1  1   |  1  0  1  1
 1  0  0   |  0  1  0  0
 1  0  1   |  0  0  1  0

I'm not really going to construct such a logic gate here (I'm not sure how to draw it in SE, and it would probably be quite huge), but since we know that the NAND gate is functionally complete, we have a way to find a series of NAND gates that will implement this truth table. (These derivations and the state table are transcribed into a short runnable sketch a little further down.)

An important property of a Turing Machine is that it is possible to emulate a stored-program computer using a Turing machine that has only a fixed state table. Therefore, a Universal Turing Machine can read its program from the tape (RAM) instead of having its instructions hardcoded into the internal state table. In other words, our (2,3) Turing Machine can read its instructions from the I1,I2 pins (as software) instead of having them hardcoded in the logic gate implementation (as hardware).

Microcode

Due to the increasing complexity of modern CPUs, it becomes prohibitively difficult to use combinatorial logic alone to design a whole CPU. A modern CPU is usually designed as an interpreter of microcode instructions; a microcode is a small program embedded in the CPU that the CPU uses to interpret the actual machine code. This microcode interpreter itself is generally designed using combinatorial logic.

Registers, Cache, and RAM

We have forgotten something above. How do we remember anything? How do we implement the tape and RAM? The answer is an electronic component called the capacitor.
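As promised above, here is the NOT/AND derivation and the (2,3) state table transcribed by hand into Haskell. This is an illustrative sketch only: the names (nandG, step, and so on) are my own, and nandG is simply taken as the primitive rather than being built from transistors as in the previous example.

    -- Gates derived from NAND alone, mirroring the derivation above.
    nandG, andG, orG :: Bool -> Bool -> Bool
    nandG a b = not (a && b)           -- the primitive in this sketch

    notG :: Bool -> Bool
    notG a = nandG a a                 -- NOT(B) = NAND(B, B)

    andG a b = notG (nandG a b)        -- AND = NOT(NAND(B1, B2))
    orG  a b = nandG (notG a) (notG b) -- OR, by De Morgan: NAND(NOT a, NOT b)

    -- One step of Wolfram's (2,3) machine, read straight off the state
    -- table: given (state, symbol) it returns (symbol to print, move, next state).
    data St = A | B deriving Show
    data Mv = MoveL | MoveR deriving Show

    step :: St -> Int -> (Int, Mv, St)
    step A 0 = (1, MoveR, B)
    step A 1 = (2, MoveL, A)
    step A 2 = (1, MoveL, A)
    step B 0 = (2, MoveL, A)
    step B 1 = (2, MoveR, B)
    step B 2 = (0, MoveR, A)
    step _ _ = error "no such symbol"

Feeding step the current state and the symbol under the head gives back exactly the print/move/next-state triples in the table. With that aside done, back to the question of how the machine remembers anything.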
A capacitor is like a rechargeable battery: if a capacitor is charged, it will retain extra electrons, and it can also return electrons to the circuitry. To write to a capacitor, we fill the capacitor with electrons (write 1) or drain all the electrons in the capacitor until it's empty (write 0). To read the value of a capacitor, we try to discharge it. If, when we try to discharge it, there is no electricity flowing, then the capacitor is empty (read 0); but if we detect electricity, then the capacitor must be charged (read 1). You might notice that reading a capacitor drains its electron store; modern RAM has circuitry to periodically recharge the capacitors so they can retain their memory as long as there is electricity.

There are multiple types of capacitors used in a CPU. The CPU registers and the higher-level CPU caches are made using very high-speed "capacitors" that are actually built from transistors (therefore there is almost no "lag" when reading/writing from them); these are called static RAM (SRAM). The main memory is made using lower-power, but slower and much cheaper capacitors; this is called dynamic RAM (DRAM).

Clock

A very important component of a CPU is the clock. A clock is a component that "ticks" regularly to synchronize processing. A clock typically contains quartz or another material with a well-known and relatively constant oscillation period, and the clock circuitry maintains and measures this oscillation to maintain its sense of time. CPU operations are done between clock ticks, and reads/writes are done on the ticks, to ensure that all components move synchronously and don't trample each other while in intermediate states. In our (2,3) Turing Machine, between clock ticks electricity passes through the logic gates to calculate the output from the input (I1, I2, R(t)); and on the clock ticks, the tape writer writes O1,O2 to the tape, the motor moves depending on the value of M, and the internal register is written from the value of R(t+1); then the tape reader reads the current tape and puts charge into I1,I2, and the internal register is read back into R(t).

Talking with Peripherals

Note how the (2,3) Turing Machine interfaces with its motor. That is a very simplified view of how a CPU may interface with arbitrary hardware. Arbitrary hardware can listen or write to a specific wire for inputs/outputs. In the case of the (2,3) Turing Machine, its interface with the motor is just a single wire that instructs the motor to turn clockwise or counterclockwise. What is left unsaid in this machine is that the motor has to have another "clock" that runs in synchrony with the machine's internal "clock" to know when to start and stop running, so this is an example of synchronous data transmission. The other commonly used alternative, asynchronous transmission, uses another wire, called the interrupt line, to communicate synchronization points between the CPU and the asynchronous device.

Digital Age

Machine code and Assembly

Assembly language is a human-readable mnemonic for machine code. In the simplest case, there is a one-to-one mapping between assembly and machine code, although in modern assembly languages some instructions may map to multiple opcodes.

Programming Language

We are all familiar with this, aren't we?

Phew, finally finished. I typed all this in just 4 hours, so I'm sure there is a mistake somewhere (I'm primarily a programmer, not an electrical engineer or physicist, so there might be several things that are blatantly wrong).
Please, if you find a mistake, don't hesitate to give a yell, fix it yourself if you have the rep, or create a complementary answer.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/81624", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/24493/" ] }
81,705
I have found a GPL library (no dual license), which does exactly what I need. Unfortunately, the GPL license on the library is incompatible with the license of a different library I use. I have therefore decided to rewrite the GPL library, so the license can be changed. My question is: How extensive do the changes need to be to the library in order to be able to change the license? In other words, what is the cheapest way to do this?
I'm not a lawyer, but AFAIK if you have seen the GPLed library's code, any emulation library you write would be tainted and may be declared a derived work by a judge if he considers it too similar. So the process would be to write a functional spec and have someone who hasn't seen the GPLed code write the library. Edit: Note that with the way you formulate your question, "How extensive do the changes need to be to the library in order to be able to change the license?", the answer is AFAIK clear: whatever you do, if you just modify the library you must respect the terms of the license which allowed you to modify it in the first place.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/81705", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/27048/" ] }
81,743
I will not bother you with the details of my discussion, so I will present it as a short example. A Java guy has been following the articles and publications of a famous programmer (a kind of Martin Fowler of my country). He says that this programmer is sharing some secrets which other famous programmers don't share. I don't believe there are secrets, like wizardry, in the programming field. But some programmers who are not yet good in this area think that famous programmers are successful because they know some secrets that we don't. I totally disagree with this, and I discussed it with someone who finally said to me: you have 2 years in this area and he (the Java guy) has been a professional programmer for 20 years, so he knows better than you. I wanted to be sure that I am not wrong. That's why I wanted to know this.
I would almost say it's the opposite.... I've worked with people who liked being tricky for whatever reason. Granted, they actually were pretty good programmers - when taken in a vacuum - but the code they produced was often quite obtuse and difficult to maintain by others. There is no point doing something clever that saves a few keystrokes, when two years later someone maintaining the code is going to waste a day when they get stumped by the trick. In fact, if I had to nominate the one most important thing that I've learned in my ten years of commercial experience as a programmer - it's that maintainability is important . It reigns supreme far above knowing some obscure hacks and tricks which might come in handy in rare situations, but which will almost certainly make the codebase more difficult to maintain in the long term. To be honest, I would go as far as to say that all coding should be done such that any new graduate with relatively basic core knowledge in the given language/platform should be able to pick it up and work with it. If it's so tricky and obscure that you need someone with 20 years experience in the language/platform who knows every little internal trick, then the project is in dire technical debt.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/81743", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/3664/" ] }
81,899
I am an individual developer working, largely, on web projects (W/LAMP) and, at times, on C/C++ (non-GUI) projects of about average scale. I often struggle with structuring my source-code tree. In fact, I usually don't complete a project without dumping the entire tree and rearranging the pieces three or four times, which really takes a lot of effort, and moreover the end result still seems like a compromise. Sometimes I end up with over-classification of the source - a very long tree of folders and sub-folders. At other times, I simply end up concentrating all files in a particular folder based on the larger purpose they serve, leading to 'chaotic' folders in the source. I would want to ask: Are there any principles/logic/best practices that can help me structure my source tree better? Are there any graphical/diagrammatic techniques (e.g. DFD in the case of data flow) that can help me visualize my source tree beforehand, based on an analysis of the project? What strategy should I adopt to structure the multimedia file tree associated with the project? About the bounty : I appreciate existing answers with the members sharing their own practices, however, I'd like to encourage more general and instructive answers (or resources) and more responses from the members.
I can't really give you much advice related to web projects, but here's how I structure my tree in a programming project (mainly from a C/C++ perspective):

/
  src — Source files written by myself
  ext — Contains third-party libraries
    libname-1.2.8
      include — Headers
      lib — Compiled lib files
      Download.txt — Contains link to download the version used
  ide — I store project files in here
    vc10 — I arrange project files depending on the IDE
  bin — Compiled exe goes here
  build — The compiler's build files
  doc — Documentation of any kind
  README
  INSTALL
  COPYING

A few notes: If I'm writing a library (and I'm using C/C++) I'm going to organize my source files first in two folders called "include" and "src", and then by module. If it's an application, then I'm going to organize them just by module (headers and sources will go in the same folder). Files and directories that I listed above in italics I won't add to the code repository.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/81899", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/22932/" ] }