source_id | question | response | metadata
---|---|---|---|
9,200 | I'm interested in learning Scala, but due to its relative newness, I can't seem to find a whole lot of books about it. Is there a book out there that's the de-facto standard for describing best practices, design methodologies, and other helpful information on Scala? What makes that book special? | I can recommend Programming in Scala because it's by the creator of the Scala language, Martin Odersky. He describes most of Scala's features in great detail and explains why he chose each feature over the alternatives, so the reader gets a deep insight into Scala. The book is fantastic. It is one of the best programming books I've ever read. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/9200",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/18/"
]
} |
9,219 | I've heard it argued that design patterns are the best thing since sliced bread. I've also heard it argued that design patterns tend to exacerbate "Second System Syndrome," that they are massively overused, and that they make their users think they're better designers than they really are. I tend to fall closer to the former camp, but recently I've been seeing designs where nearly every single interaction is replaced with an observer relationship, and everything's a singleton. So, considering the benefits and problems, are design patterns generally good or bad, and why? | Design patterns are a language, not advice on how to write programs, nor a contract. Their primary use is an a posteriori explanation of how a component or a system was (or is going to be) implemented. Instead of going into too much detail, you can just say a couple of words that describe the implementation well enough for the listener to understand how it works and what was important in it. Alex: Hey, how are the config files created? Bob: They're generated by a factory, which resides in config.h. Now Alex knows that creating the config files involves non-trivial preparation, because otherwise their creation wouldn't be wrapped in a factory. However, if Bob were a pattern-headed phony who just used patterns here and there, Alex couldn't tell anything about config creation, because Bob used factories everywhere. This would also lead to excessive complexity in the program. So, program first, then spot patterns in your code, not vice versa. That's how they're used effectively. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/9219",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6/"
]
} |
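
The factory exchange between Alex and Bob above can be made concrete with a minimal Java sketch (the original mentions a C++ config.h; ConfigFactory, createConfig and the preparation steps here are hypothetical names invented purely for illustration):

import java.nio.file.Path;
import java.util.Properties;

// Hypothetical factory: callers never see the non-trivial preparation steps,
// they just ask the factory for a ready-to-use config object.
final class ConfigFactory {

    static Properties createConfig(Path template, String environment) {
        Properties config = new Properties();
        // Non-trivial preparation that justifies wrapping creation in a factory:
        // load defaults, apply environment-specific overrides, validate values.
        config.setProperty("env", environment);
        config.setProperty("template", template.toString());
        // ... load and merge the real settings here ...
        return config;
    }
}

The point is the vocabulary: saying "it's built by a factory" tells a colleague that construction involves more than a bare constructor call.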
9,320 | Lisp obviously is an advantage for the AI stuff, but it doesn't appear to me that Lisp is any faster than Java, C#, or even C. I am not a master of Lisp, but I find it incredibly difficult to understand the advantage one would get in writing business software in Lisp. Yet it is considered a hacker's language. Why does Paul Graham advocate Lisp? Why did ITA Software choose Lisp over other high-level languages? What value does it have over these languages? | There are a few reasons I am working on becoming competent with Common Lisp. Homoiconic code: this allows structured self-modifying code. Syntax-aware macros: they allow rewriting of boilerplate code. Pragmatism: Common Lisp is designed to get stuff done by working professionals. Most functional languages aren't, as a rule. Flexibility: it can do a lot of different things, all at reasonable speeds. Wartiness: the real world is messy. Pragmatic coding winds up having to either use or invent messy constructs, and Common Lisp has sufficient wartiness that it can get stuff done. Arguably, the only real reason to choose against Common Lisp is that its standard libraries are dated. I will go out on a limb and say that in the general case, syntax should not be an issue for a professional software worker. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/9320",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/672/"
]
} |
9,481 | Why does Software Engineering not have union representation like other professional occupations, such as teaching? Are there any unions for software developers that exist and are successful? | Unions are useful when one person can pretty much do the same job as anybody else with little or no training. By allowing employees to negotiate as a whole, you don't run the risk of employers simply finding the person who'll work the cheapest and driving wages down. (At least, that's the theory.) In professional fields, that doesn't hold: employees require particular skills, and you simply can't replace one engineer with someone else at no "penalty". As an engineer, you have much more power to negotiate wages and working conditions on your own, based on your own skills and knowledge. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/9481",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4459/"
]
} |
9,498 | I'm looking at evaluating ORMs. I've used SubSonic, Linq-to-SQL and Entity Framework. I've got a team of developers ranging from juniors to seniors. What are the criteria for evaluating an ORM for .NET? | It's a loaded question. There are lots of very good ORMs approaching the subject with different philosophies. None are perfect, though, and all tend to become complex as soon as you stray from their golden path (and sometimes even when you stick to it). What you should ask yourself when selecting an ORM: What does it need to do for you? If you already have a set of requirements for your application, then you should select the ORM that best matches these rather than a hypothetical 'best'. Is your data shared or just local? A lot of the hairiness in ORMs is caused by how they handle concurrency and changes to the data in the database when multiple users are holding versions of the same data. If your datastore is for a single user, then most ORMs will do a good job. However, ask yourself some hard questions in a multi-user scenario: how is locking handled? What happens when I delete an object? How does it affect other related objects? Is the ORM working close to the metal of the backend, or is it caching a lot of data (improving performance at the expense of increasing the risk of staleness)? Is the ORM well adapted for your type of application? A particular ORM may be hard to work with (lots of performance overhead, hard to code) if it's used in a service or sitting inside a web app. It may, on the contrary, be great for desktop apps. Do you have to give up database-specific enhancements? ORMs tend to use the lowest-common-denominator set of SQL to ensure they work with lots of different database backends. All ORMs will compromise on available features (unless they specifically target a single backend), but some will allow you to implement additional behaviours to exploit specific enhancements available in your chosen backend. A typical db-specific enhancement is full-text search capability, for instance; make sure your ORM provides you with a way to access these features if you need them. How does the ORM manage changes in the data model? Some can update the DB automatically within a certain measure, others don't do anything and you'll have to do the dirty work yourself; others provide a framework for handling change that lets you control database updates. Do you mind coupling your application to the ORM's objects, or do you prefer to handle POCOs and use an adapter for persistence? The former is usually simpler to handle but creates dependencies on your ORM-specific data objects everywhere; the latter is more flexible, at the cost of a bit more code. Will you ever need to transfer your objects remotely? Not all ORMs are equal when it comes to fetching objects from a remote server; look closely at what is possible or impossible to do. Some are efficient, others not. Is there someone you can turn to for help? Is there good commercial support? How big and active is the community around the project? What are the issues existing users are having with the product? Do they get quick solutions? A few ORMs that I looked at: XPO, from Developer Express, is small and simple, code-centric. They use it for their application framework eXpressApp. NHibernate is free, but the learning curve is rather steep. Lots of goodies, but it's sometimes hard to find what is really relevant in all the fragmented documentation. 
LLBLGen Pro is a very mature project; not the simplest, but a lot of thought has been put into it. Entity Framework is getting there. The last releases are pretty good and MS is listening, although it's still a bit young compared to other more mature ORMs. DataObjects.Net looks promising, but it is also a bit too new to risk an important project on, IMHO. Quite active, though. There are many others, of course. You can have a look at the controversial site ORM Battle, which lists some performance benchmarks, although you have to be aware that raw speed is not necessarily the most important factor for your project and that the producer of the website is the maker of DataObjects.Net. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/9498",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4219/"
]
} |
9,521 | Imagine you were elected coroner of the IEEE or some such governing body and you had to pronounce a programming language dead. What signs would you look for? Are there any zombie languages out there that don't know they're already dead? | Computer languages never die; they only turn from overhyped to underused. Someone will always re-discover an old language and learn it, just for the fun of it. Addendum: People who like an older language sometimes write a new language inspired by it. So even if the original language is dead by some people's terms, its spirit continues to live on in its descendants. Some examples: B and BCPL inspired C; SNOBOL inspired Icon; Algol inspired too many languages to count. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/9521",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1973/"
]
} |
9,576 | One thing that I've heard a lot over the years is that those working in the IT world generally don't make lifetime careers out of it, but tend to "burn out" and start a new career doing something else unrelated (e.g. going from software development to being an accountant). Have you found this to be generally true in your experience and, if so, what is the general impression of how long people work as developers before starting a new career? | I've been in software development all my working life, from junior developer through senior developer to team lead/manager, and now back developing (though hoping to get back into management sooner rather than later). My working life is now nearly 40 years, and in that time I've changed domains and technologies as the companies I've worked for have changed. I've then used that new experience to find new positions when I've had to, which has in turn led to other new domains and technologies. All that time I've known developers as old as or older than me. I think "burn out" happens if you try to do too much - working 12+ hour days and/or weekends for extended periods - and it happens in any industry, not just computing. I know that if I had to do that I'd be looking for something less stressful to do. If you find a working style that fits your temperament then there's no reason why you can't continue working until you retire at 65 (or whenever). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/9576",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2471/"
]
} |
9,598 | Do you think that programming practice alone will improve your logical programming skills, or do you also train your brain with puzzle games, imagining how the universe works, playing instruments, and so on? By devoting more time to programming, will you build logical programming skills faster? | I think full-time programming exercises my logical skills quite enough, and they need a rest after work. Doing something else, such as practicing motor skills by playing musical instruments, is good for the brain. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/9598",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/58922/"
]
} |
9,605 | There's a quotation by Alan J. Perlis that says: There are two ways to write error-free programs; only the third one works. I recently heard this quote from my friend, and was unable to understand the deeper meaning behind it. What is Perlis talking about here? | There is no third way; that's the point - there is no way to write error-free programs. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/9605",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4522/"
]
} |
9,730 | I've heard a lot of talk about using functional languages such as Haskell as of late. What are some of the big differences, pros and cons of functional programming vs. object-oriented programming? | I would say that it is more Functional Programming vs Imperative Programming. The biggest difference is that Imperative programming is about Control flow while Functional programming is about Data flow. Another way to say it is that functional programming only uses expressions, while in imperative programming both expressions and statements are used. For example, in imperative programming variables and loops are common when handling state, while in functional programming the state is handled via parameter passing, which avoids side-effects and assignments. Imperative pseudo-code for a function that calculates the sum of a list (the sum is kept in a variable): int sumList(List<int> list) {
int sum = 0;
for(int n = 0; n < list.size(); n++) {
sum = sum + list.get(n);
}
return sum;
} Functional pseudo-code for the same function (the sum is passed as a parameter): fun sumList([], sum) = sum
| sumList(v::lst, sum) = sumList(lst, v+sum) I recommend the presentation Taming Effects with Functional Programming by Simon Peyton-Jones for a good introduction to functional concepts. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/9730",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/190/"
]
} |
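
To complement the language-neutral pseudo-code in the answer above, here is a small compilable Java sketch of the same contrast - an explicit loop mutating an accumulator versus a stream reduction with no mutable state (assuming Java 9+ for List.of):

import java.util.List;

class SumStyles {
    // Imperative: control flow plus a mutable accumulator variable.
    static int sumImperative(List<Integer> list) {
        int sum = 0;
        for (int n : list) {
            sum += n;
        }
        return sum;
    }

    // Functional: a fold/reduction expression; state is passed along, not mutated.
    static int sumFunctional(List<Integer> list) {
        return list.stream().reduce(0, Integer::sum);
    }

    public static void main(String[] args) {
        List<Integer> numbers = List.of(1, 2, 3, 4);
        System.out.println(sumImperative(numbers)); // 10
        System.out.println(sumFunctional(numbers)); // 10
    }
}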
9,813 | I find myself lagging behind on new skills, techniques, language features, etc, and I am finding the time to do so is lacking. Between work, professional, personal and family obligations, I'm lucky to find a few stray hours here and there to focus on any new technologies or reading that I want or need to do. I do make it out to relevant local user groups, but even those are sometimes hard to get to? So I thought I would ask the community, where, or when, do you find the time to focus on honing your skills or learning new ones? Do you schedule time? Forgo some sleep during the week or weekend? Insomnia? Something else? EDIT I want to thank all of you who took the time to answer and offer up your advice. There were some things I knew or had thought about, and others I hadn't considered as an option until you shed new light on it. EDIT 2: In trying to find which answer among the many great ones to accept, I went with @Paddyslacker's since it is the one I feel is best suited for my current situation, although everyone had some very good nuggets of wisdom, such as @Elisha @Martin or @dash-tom-bang. | Let me start by saying I know where you are coming from. I work for a small company with lots of stuff to do and I am a family man with two kids under the age of five. I have no intention of being an absent father or husband, or doing a poor job for my employers, so it is extremely difficult to find the time for new stuff. I think the trick is to make your goals extremely small and achievable and go from there. Earlier in the year I wanted to write some blog posts. Instead of setting a generic "write some blog posts" goal, I made a micro target of "write for two hours a week; don't worry about whether it's good enough to post." I didn't worry about the larger goal, but instead made sure I achieved the micro goal that ultimately meant I achieved what I wanted. Right now, I'm interested in BDD frameworks, but rather than have a goal of learning them, my goal is simply to spend 30 minutes a day on them. Even if the 30 minutes is simply downloading and installing cucumber, I'm okay with that. Ultimately, I will achieve my goal. Someone much wiser than me once said that the only way to eat an elephant is one bite at a time; similarly the only way to hone your skills is in small chunks of time. Rather than stressing about some larger goal, if you focus on freeing up just two hours a week, over the next year that's 100 hours you've dedicated to something new. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/9813",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/146/"
]
} |
9,881 | I'm looking for an informative class documentation format for my Entity, Business Logic and Data Access classes. I found the following two formats here Format 1 ///-----------------------------------------------------------------
/// Namespace: <Class Namespace>
/// Class: <Class Name>
/// Description: <Description>
/// Author: <Author> Date: <DateTime>
/// Notes: <Notes>
/// Revision History:
/// Name: Date: Description:
///----------------------------------------------------------------- Format 2 // ===============================
// AUTHOR :
// CREATE DATE :
// PURPOSE :
// SPECIAL NOTES:
// ===============================
// Change History:
//
//================================== I feel the following are the basic elements: Author, Created Date, Description, Revision History, since the Namespace and Class name will be there anyway. Please let me know your thoughts, which format is recommended, and whether there is a standard way of writing revision history. | Most of the information you've suggested there would be found in the source repository. The only thing you really need is the purpose section, which says what the class is there for. Would it be tedious to look in the repository every time you want to know the other information? I'd say no. How often do you care who the original author was? Or when the file was first created? Plugins (such as Ankh SVN for Visual Studio) often allow you to right click within your current file and view the repository log for the file, so it's not that much of a hassle to actually see this information. Additionally, if you store the version history in a comment, this comment needs to be maintained. So over time there's a chance it could be lying to you. The source code repository automatically keeps this historical data, so it doesn't need that maintenance, and will be accurate. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/9881",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/493/"
]
} |
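
As a concrete illustration of the answer above (keep only the purpose; let version control carry author, dates and history), a class header can shrink to something like this hypothetical comment - shown as Javadoc, though the same idea applies to the C#-style /// headers in the question:

/**
 * Loads and validates application configuration for the billing module.
 * (Author, creation date and revision history live in the version control system.)
 */
public class BillingConfigurationLoader {
    // ...
}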
9,885 | How much technical (for future developers) documentation is enough? Is there a ratio between hours coding and hours documenting that's appropriate? Papadimoulis argues that you should produce the least amount of documentation needed to facilitate the most understanding. Is that a good guideline, or are there specific things I should be creating? | How about some hallway usability testing? Show the code and documentation to a developer unfamiliar with the project. When you can do that without an overwhelming urge to explain something while watching them review the code, you have enough. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/9885",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4422/"
]
} |
9,948 | A question with absolutely no relevance to myself: How would a programmer with a stellar SO reputation (like 30k+) market himself or herself to potential employers/investors who have never heard of SO?
In other words, how can one describe SO in a few sentences that will make a high reputation sound impressive at an interview? | In an interview, you wait for the right question. Something about "how do you keep current with technology today", or possibly "would you describe yourself as active in the developer community" (a question you are much more likely to be asked if it says "active in the developer community" on your resume) and then you say something about SO and how it's "a question and answer site where other users award you reputation points for good answers and good questions" and then give your score in really round numbers and then translate into English like "which puts me in the top 1% of users on the site. I'm happy to be recognized as helpful in the technologies I use a lot." On your resume, you could simply include your SO handle in the contact section, along with your Twitter handle and link to your blog, assuming they're technically relevant. People who recognize it will go check your rep. People who've never heard of it won't learn anything from a simple sentence on the resume. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/9948",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1444/"
]
} |
9,960 | When you have no clue about an interview question and do not know the answer at all, how do you answer or act? Telling the truth is the obvious part, but how could you try to transform this weakness into a strength? | "I don't know how to do that, but if I ran into that problem in a project, here's how I'd go about figuring out how to make it work..." | {
"source": [
"https://softwareengineering.stackexchange.com/questions/9960",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/389/"
]
} |
9,965 | I don't see any use for case sensitivity in a programming language, apart from obfuscating code. Why implement this in a programming language? Update: It looks like someone you know made a statement on this. | While case folding is fairly trivial in English, it's much less so in some other languages.
If a German programmer uses ß in a variable name, what are you going to consider the upper-case equivalent? Just FYI, "ß" is only ever used in lower case. OTOH, "ss" is equivalent -- would you consider a compiler obliged to match them? When you get into Unicode, you get even more interesting problems, such as characters with pre-composed diacritical marks versus separate combining diacriticals. Then you get to some Arabic scripts, with three separate forms of many letters, rather than just two. In the dark ages most programming languages were case-insensitive almost out of necessity. For example, Pascal started out on Control Data mainframes, which used only six bits per character (64 codes, total). Most such machines used the "CDC Scientific" character set, which only contained upper-case characters. You could switch to other character sets, but most had either upper-case or lower-case, but not both -- and used the same codes for both. The same was true of the ancient Baudot codes and such, considered standard in the beginning days of COBOL, FORTRAN, BASIC, etc. By the time more capable hardware was widely available, their being case-insensitive was so thoroughly ingrained that changing it was impossible. Over time, the real difficulty of case-insensitivity has become more apparent, and language designers have mostly decided ("realized" would probably be a more accurate term) that when/if people really want case insensitivity, it's better handled by ancillary tools than in the language itself. At least IMO, a compiler should take input exactly as presented, not decide that "you wrote this, but I'm going to assume you really meant something else." If you want translations to happen, you're better off doing them separately, with tools built to handle that well. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/9965",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/19/"
]
} |
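
The ß point in the answer above is easy to verify; this small Java sketch (illustration only) shows why naive case folding breaks down: uppercasing ß yields two characters, and lowercasing is locale-sensitive, as with the Turkish dotted/dotless i:

import java.util.Locale;

class CaseFoldingDemo {
    public static void main(String[] args) {
        // German sharp s has no single-character upper-case form: it maps to "SS".
        System.out.println("ß".toUpperCase(Locale.ROOT));            // SS
        System.out.println("ß".toUpperCase(Locale.ROOT).length());   // 2

        // Case mapping is locale-sensitive: Turkish distinguishes dotted and dotless i.
        System.out.println("I".toLowerCase(Locale.ROOT));            // i
        System.out.println("I".toLowerCase(new Locale("tr", "TR"))); // ı (dotless i)
    }
}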
10,032 | Do there exist studies done on the effectiveness of statically vs dynamically typed languages? In particular: Measurements of programmer productivity Defect Rate Also including the effects of whether or not unit testing is employed. I've seen lots of discussion of the merits of either side but I'm wondering whether anyone has done a study on it. | Some suggested reading: Developers Shift to Dynamic Languages (PDF) On the Revival of Dynamic Languages (PDF) Static typing where possible, dynamic typing when needed: The end of the cold war between programming languages (PDF) The Security of Static Typing with Dynamic Linking (PDF) Combining Static and Dynamic Reasoning for Bug Detection (PDF) Dynamic Typing in a Statically Typed Language (PDF) Turning Dynamic Typing into Static Typing by Program Specialization (PDF) Hybrid Type Checking (PDF) Not exactly on static typing, but related: Securing web application code by static analysis and runtime protection (PDF) Some interesting articles or essays on the subject or on static analysis of programs in general: Pluggable Type Systems (PDF) Strong Typing vs Strong Testing Linux Kernel Developer Responses to Static Analysis Bug Reports (PDF) Is Weak Typing Strong Enough? Correlation Exploitation in Error Ranking Improving Software Quality w/ Static Analysis And for the ones who would be wondering what this is all about: Introduction to Static and Dynamic Typing However, I doubt any of these will give you a direct answer, as they don't do exactly the study you're looking for. They will be interesting reads though. Personally, I firmly believe that static typing facilitates bug detection more than dynamic typing does. I spend way too much time looking for typos and minor mistakes like these in JavaScript or even Ruby code. And when it comes to the view that Dynamic Typing gives you a boost in productivity, I think that mostly comes down to tooling. If statically typed languages have the right tools to allow for background recompilation and provide a REPL interface, then you get the benefits of both worlds. Scala provides this, for instance, which makes it very easy to learn and prototype away in the interactive console, but gives you the benefits of static typing (and of a stronger type system than a lot of other languages, ML languages aside). Similarly, I don't think I have a loss of productivity by using Java or C++ (because of the static typing), as long as I use an IDE that helps me along. When I revert to coding only with simple configurations (editor + compiler/interpreter), then it feels more cumbersome and dynamic languages seem easier to use. But you still hunt for bugs. I guess people would say that the tooling issue is a reversible argument, since if tooling were better for dynamic languages, then most bugs and typos would be pointed out at coding time, but that reflects the flaw in the system in my opinion. Still, I usually prototype in JRuby and later code most of what I do in Java. WARNING: Some of these links are unreliable, and some go through portals of various computing societies using fee-based access for members. Sorry about that; I tried to find multiple links for each of these, but it's not as good as I'd like it to be. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/10032",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1343/"
]
} |
10,166 | 20% time is the culture of an employer allowing its employees to spend 20% of their time working on projects that they find interesting - it may be inventing a new app, or improving an existing process, etc. Some people may know this as skunk work; however, that term may not mean anything (or something entirely different) to you. There are many documented cases of great products being born out of 20%/skunk work at a company. It seems like a win/win situation; the company potentially gains a great new product or application, and the developer has the opportunity to flex his/her creative muscles and innovate. I tried on numerous occasions to introduce some form of 20%/skunk working at my previous employer with no success. How can I better justify it to management? What's the "right" way to approach this kind of work arrangement? | The main reason for 20% time is to keep capacity utilization at 80% rather than at 100%. You can think of a software development organization as a system that turns feature requests into developed features. You can model its behaviour using queueing theory. THEORY If requests arrive faster than the system can service them, they queue up. When arrivals are slower, the queue size decreases. Because the arrival and service processes are random, the queue size changes randomly with time. The mathematically inclined can ask about this "randomness": there must be some probability distribution, so what will the queue size be on average? Math (queueing theory) has an answer to that: if both arrival and service processes are Markov, then N = rho^2 / (1-rho). (Where rho is the utilization coefficient, equal to the ratio of arrival and service rates. If the processes are non-Markov, the math is more complicated, but doesn't change the conclusions.) If you plot this function, you can see that the average queue length remains low while utilization is up to 0.8, then rises sharply and goes to infinity. You can understand this intuitively by thinking about your computer's CPU: when its utilization approaches 100%, the computer becomes unresponsive. PRACTICE The economics of software development is such that software companies incur big costs when their queues are in high-queue states. This includes missed market opportunities, obsolete products, late projects, and waste caused by building features in anticipation of demand. The 20% time is thus the scientific answer to the problem of optimizing economic outcomes: avoid high-queue states by avoiding the utilization ratios that cause them. It is essentially the slack that keeps the system responsive. Several practical conclusions follow immediately: if you're considering 20% time and doing cost accounting (developers' time costs X, but/and the company can/cannot afford it), you're doing it wrong. If you're allocating 20% to a Friday every week, you're doing it wrong. If you're setting up a 20% time project proposal submission/review/approval system, you're doing it wrong. If you're filling out timesheets, you're doing it wrong. If you're using innovation as a motivator for 20% time, you're doing it wrong. While new products have come out of 20% projects, they were not the point. If your company cannot innovate during its core hours, that's a problem! 20% time is not about creativity. Don't say you'll unleash your creativity with 20% time; ask why you're not creative enough already during your core hours. ANSWERS TO QUESTIONS IN THE COMMENTS Dan, you got that right and accurately described the mistake made by many. 
You cannot choose your utilization percentage, because it's an output variable. It is a ratio of characteristics of two processes, so it is what it is because the processes are the way they are. An organization does have influence over both processes; matching capability and demand is one of the hard problems addressed by the lean software development body of knowledge. Utilization is one of the indicators of how well this problem is solved in an organization. Slack emerges as your lean initiative progresses and you remove waste from the value stream. But if you mandate 20% time, you end up in the same utilization trap with less available capacity. Kim, it is still partially a culture thing. The closest cultural reference I can think of is the synergistic level of the so-called Marshall model of organizational change. It emerges at the end of successful lean transformations or is present in organizations built lean from the start. (Here's a link to Bob Marshall's white paper (PDF).) REFERENCES The above logic is well supported in the software engineering literature. Mary and Tom Poppendieck hinted at it in their 2006 book Implementing Lean Software Development. Donald Reinertsen, in his 2009 book Principles of Product Development Flow (Chapter 3), gives a thorough treatment of this subject, with formulas and graphs. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/10166",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3890/"
]
} |
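
The formula quoted in the answer above, N = rho^2 / (1 - rho) for an M/M/1 queue, is easy to tabulate; this small Java sketch (utilization values chosen arbitrarily) just evaluates it to show the sharp blow-up past roughly 80% utilization:

class QueueLengthDemo {
    // Average number of requests waiting in an M/M/1 queue at utilization rho.
    static double averageQueueLength(double rho) {
        return (rho * rho) / (1.0 - rho);
    }

    public static void main(String[] args) {
        double[] utilizations = {0.5, 0.7, 0.8, 0.9, 0.95, 0.99};
        for (double rho : utilizations) {
            System.out.printf("utilization %.2f -> average queue length %.1f%n",
                    rho, averageQueueLength(rho));
        }
        // Climbs slowly up to ~0.8, then explodes: roughly 0.5, 1.6, 3.2, 8.1, 18, 98.
    }
}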
10,208 | As someone who is now finding himself on the other end of the interview table, I'm wondering how useful the classic "greatest strength/greatest weakness" questions are from an employer's perspective. Some of my coworkers think they're good because you can see "how they respond," but I'm not convinced it tells you anything useful, for several reasons: It's not a very comfortable question and can lead people to twist their answers, even if not on purpose. People may not fully know their greatest strengths or weaknesses (i.e. judge them by their peers). Explaining what a strength is isn't as good as showing it. I still don't know any more about the candidate afterwards. The rationale of my coworkers is that it can help weed out people that give ridiculous responses, like one guy that said his greatest strength was "his intellect," or people that try to turn the weakness question into a strength like "I work too hard." But I think there are more effective ways to determine such things. If you want to see if someone's smart, ask them technical questions. If you want to see if someone is productive, look at their work history. If you want to see how someone reacts under stress or change, ask them how they've dealt with it and ask for concrete examples. What are people's thoughts on these questions, from the perspective of an interviewer? What do they really tell you about a candidate, and what are better alternatives? | Not very. Any question for which a good percentage of candidates will have a canned response is of limited value, since you're often not getting the real them. Everyone and their cousin has heard of "what is your greatest weakness." The answer encourages lying: honest people will describe a fault and end up looking bad, while less honest people will spin a strength as a fault and look good. The question is so dated that, in my opinion, it reflects poorly on your company to ask it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/10208",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1360/"
]
} |
10,340 | Is it still worth it to protect our software against piracy? Are there reasonably effective ways to prevent piracy, or at least make it difficult? | Not really. Any copy protection has to be 100% perfect (which we all know is impossible), or else all it will take is for one person anywhere in the world to come up with a working crack and post it on the Web. If you want people to pay money for your product, copy protection is not the answer. It never has worked and never will. The answer lies in Economics 101: people will pay money for your product if they perceive its value to them as being greater than the price you are asking for it. Otherwise, they won't. Period. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/10340",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4809/"
]
} |
10,672 | Ever since my very first programming class in high school, I've been hearing that string operations are slower — i.e. more costly — than the mythical "average operation." What makes them so slow? (This question is left intentionally broad.) | "The average operation" takes place on primitives. But even in languages where strings are treated as primitives, they're still arrays under the hood, and doing anything involving the whole string takes O(N) time, where N is the length of the string. For example, adding two numbers generally takes 2-4 ASM instructions. Concatenating ("adding") two strings requires a new memory allocation and either one or two string copies, involving the entire string. Certain language factors can make it worse. In C, for example, a string is simply a pointer to a null-terminated array of characters. This means that you don't know how long it is, so there's no way to optimize a string-copying loop with fast move operations; you need to copy one character at a time so you can test each byte for the null terminator. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/10672",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/94/"
]
} |
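
A small Java sketch of the cost described above: repeated concatenation copies the whole string each time (quadratic total work), while an explicit buffer appends in roughly linear time. The sizes are arbitrary and this is an illustration, not a rigorous benchmark:

class ConcatCostDemo {
    // O(n^2) overall: each += allocates a new String and copies everything so far.
    static String buildByConcatenation(int n) {
        String s = "";
        for (int i = 0; i < n; i++) {
            s += "x";
        }
        return s;
    }

    // Roughly O(n) overall: StringBuilder appends into a growable buffer.
    static String buildWithBuilder(int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) {
            sb.append("x");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        int n = 100_000;
        long t0 = System.nanoTime();
        buildByConcatenation(n);
        long t1 = System.nanoTime();
        buildWithBuilder(n);
        long t2 = System.nanoTime();
        System.out.println("concatenation: " + (t1 - t0) / 1_000_000 + " ms");
        System.out.println("StringBuilder: " + (t2 - t1) / 1_000_000 + " ms");
    }
}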
10,675 | What do you think? What is the ideal programming language learning sequence which will cover most of the heavily used languages and paradigms today as well as help to grasp common programming basics, ideas and practices?
You can even suggest a learning sequence for paradigms rather than languages. N.B.: This is a port of a question I asked on Stack Overflow that was closed for being subjective and argumentative. | Python, Lisp, C, Haskell. Assuming the question was about an ideal learning sequence for newcomers to programming (since old hands at programming will have had their own (likely accidental) learning sequence), I'd suggest reading Norvig's essay on how to learn programming in 10 years, then: Python: Start with a dynamic, high-level, OO & functional language, because it's really important for beginners to feel productive ASAP and not be turned off by alien syntax, lack of libraries, lack of multi-platform support, lack of learning resources, and lack of community. Python is highly readable, has tons of good libraries (esp. scientific libraries - a modern scientist/engineer must know how to program), is easily run from most OSes, has tons of tutorials and entire free books, and is generally user-friendly — all while still being powerful enough to grow with you as you become an advanced programmer working on large problems. It's also important to reinforce important and useful conventions for a beginner: code readability counts a LOT, and you should be writing code for yourself and others to readily understand. Lisp: In particular, at least skim The Structure and Interpretation of Computer Programs or watch the SICP videos, and have one's eyes opened very wide by seeing the foundations and expressive power of programming - one can do so much with so little. Learn how Lisp can express not just the functional paradigm, but OO, logical/declarative, and more - like the creation of new domain-specific languages. Read Norvig's PAIP to witness what a master can do with the language. Then check out Clojure, a modern Lisp dialect that could be one of the Next Big Things. C: Only because it's the lingua franca of computing. :) Possibly optional these days if one is primarily a programmer in a particular non-software domain. I find it ugly but worth knowing about to get an appreciation for the underlying hardware. Go with K&R, of course. Haskell: Pure functional power. Where current Com.Sci. theory and practical expressive power meet. See Real World Haskell. After the above tour, one would be very adept at tackling problems and expressing solutions in code, and be totally comfortable with the paradigms covered here.
"source": [
"https://softwareengineering.stackexchange.com/questions/10675",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/963/"
]
} |
10,736 | I am currently planning to develop a J2EE website and wish to bring in 1 developer and 1 web designer to assist me. The project is a financial app within a niche market. I plan to keep the source closed. However, I fear that my would-be employees could easily copy the codebase and use it or sell it to a third party. The app development will take 4-6 months, perhaps more, and I may bring in additional employees after the app goes live. But how do I keep the source to myself? Are there techniques companies use to guard their source? I foresee disabling USB drives and DVD writers on my development machines, but uploading data or attaching the code in email would still be possible. My question is incomplete. But programmers who have been in my situation, please advise. How should I go about this? Building a team, maintaining code secrecy, etc. I am also prepared to sign a secrecy contract with the employees if needed.
(Please add relevant tags) Update Thank you for all the answers. I certainly won't be disabling all USB ports and DVD writers now. But I think I should be logging activity (how exactly should I do that?). I am wary of scalpers who would join and then run off with the existing code. I haven't met any, but I have been advised to be wary of them. I would include a secrecy clause, but given this is a startup with almost no funding and in a highly competitive business niche with bigger players in the field, I doubt I would be able to detect or pursue any scalpers. How do I hire people I trust, when I don't know them personally? Their resume will be helpful, but otherwise trust will develop only with time. But finally, even if they do run away with the code, it is service that matters after the sale is made. So I am not really worried for the long term. | You need to trust your developers. Virtually all professional developers won't steal your source. It's understood that if you work for somebody else, it's the employer that owns the code that you write. Developers might copy code for reference purposes, but it's highly unlikely they will offer it for sale to anyone else. If they did offer it for sale to a new employer then the likely outcome is them being shown the door and possibly even arrested (as Bob Murphy points out in his comment). Getting caught isn't worth the risk. More importantly, distrust breeds distrust. Disabling USB ports and DVD writers will engender a feeling of distrust which will, paradoxically, make it more likely that the developers will copy the code. By all means add a secrecy clause to your contract, but it's probably unnecessary to highlight it as the most important part of the contract. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/10736",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4429/"
]
} |
10,793 | I've heard in several places "Don't make large commits," but I've never actually understood what a "large" commit is. Is it large if you work on a bunch of files, even if they're related? How many parts of a project should you be working on at once? To me, I have trouble trying to make "small commits" since I forget or create something that creates something else that creates something else. You then end up with stuff like this: Made custom outgoing queue
Bot
-New field msgQueue which is nothing more than a SingleThreadExecutor
-sendMsg blocks until message is sent, and adds wait between when messages get
sent
-adminExist calls updated (see controller)
-Removed calles to sendMessage
Controller
-New field msgWait denotes time to wait between messages
-Starting of service plugins moved to reloadPlugins
-adminExists moved from Server because of Global admins. Checks at the channel,
server, and global level
Admin
-New methods getServer and getChannel that get the appropiate object Admin
belongs to
BotEvent
-toString() also show's extra and extra1
Channel
-channel field renamed to name
-Fixed typo in channel(int)
Server
-Moved adminExists to Controller
PluginExecutor
-Minor testing added, will be removed later
JS Plugins
-Updated to framework changes
-Replaced InstanceTracker.getController() with Controller.instance
-VLC talk now in own file
Various NB project updates and changes
---
Affected files
Modify /trunk/Quackbot-Core/dist/Quackbot-Core.jar
Modify /trunk/Quackbot-Core/dist/README.TXT
Modify /trunk/Quackbot-Core/nbproject/private/private.properties
Modify /trunk/Quackbot-Core/nbproject/private/private.xml
Modify /trunk/Quackbot-Core/src/Quackbot/Bot.java
Modify /trunk/Quackbot-Core/src/Quackbot/Controller.java
Modify /trunk/Quackbot-Core/src/Quackbot/PluginExecutor.java
Modify /trunk/Quackbot-Core/src/Quackbot/info/Admin.java
Modify /trunk/Quackbot-Core/src/Quackbot/info/BotEvent.java
Modify /trunk/Quackbot-Core/src/Quackbot/info/Channel.java
Modify /trunk/Quackbot-Core/src/Quackbot/info/Server.java
Modify /trunk/Quackbot-GUI/dist/Quackbot-GUI.jar
Modify /trunk/Quackbot-GUI/dist/README.TXT
Modify /trunk/Quackbot-GUI/dist/lib/Quackbot-Core.jar
Modify /trunk/Quackbot-GUI/nbproject/private/private.properties
Modify /trunk/Quackbot-GUI/nbproject/private/private.xml
Modify /trunk/Quackbot-GUI/src/Quackbot/GUI.java
Modify /trunk/Quackbot-GUI/src/Quackbot/log/ControlAppender.java
Delete /trunk/Quackbot-GUI/src/Quackbot/log/WriteOutput.java
Modify /trunk/Quackbot-Impl/dist/Quackbot-Impl.jar
Modify /trunk/Quackbot-Impl/dist/README.TXT
Modify /trunk/Quackbot-Impl/dist/lib/Quackbot-Core.jar
Modify /trunk/Quackbot-Impl/dist/lib/Quackbot-GUI.jar
Modify /trunk/Quackbot-Impl/dist/lib/Quackbot-Plugins.jar
Modify /trunk/Quackbot-Impl/lib/javarebel.stats
Add /trunk/Quackbot-Impl/lib/jrebel.info
Modify /trunk/Quackbot-Impl/nbproject/private/private.properties
Modify /trunk/Quackbot-Impl/nbproject/private/private.xml
Modify /trunk/Quackbot-Impl/nbproject/project.properties
Modify /trunk/Quackbot-Impl/plugins/CMDs/Admin/reload.js
Add /trunk/Quackbot-Impl/plugins/CMDs/Operator/hostBan
Modify /trunk/Quackbot-Impl/plugins/CMDs/Operator/mute.js
Modify /trunk/Quackbot-Impl/plugins/CMDs/lyokofreak/curPlaying.js
Modify /trunk/Quackbot-Impl/plugins/CMDs/lyokofreak/lfautomode.js
Modify /trunk/Quackbot-Impl/plugins/listeners/onJoin.js
Modify /trunk/Quackbot-Impl/plugins/listeners/onQuit.js
Modify /trunk/Quackbot-Impl/plugins/testCase.js
Add /trunk/Quackbot-Impl/plugins/utils/whatsPlaying.js
Modify /trunk/Quackbot-Impl/src/Quackbot/impl/SandBox.java
Add /trunk/Quackbot-Impl/vlc_http
Add /trunk/Quackbot-Impl/vlc_http/current.html
Modify /trunk/Quackbot-Plugins/dist/Quackbot-Plugins.jar
Modify /trunk/Quackbot-Plugins/dist/README.TXT
Modify /trunk/Quackbot-Plugins/dist/lib/Quackbot-Core.jar
Modify /trunk/Quackbot-Plugins/nbproject/private/private.properties
Modify /trunk/Quackbot-Plugins/nbproject/private/private.xml
Modify /trunk/Quackbot-Plugins/src/Quackbot/plugins/JSPlugin.java
Add /trunk/Quackbot-Plugins/vlc_http
Add /trunk/global-lib/jrebel.jar Yea.... So for questions: What are some factors for when a commit becomes too large (non-obvious stuff)? How can you prevent such commits? Please give specifics. What about when you're in the semi-early stages of development when things are moving quickly? Are huge commits still okay? | To me, I have trouble trying to make "small commits" since I forget or create something that creates something else that creates something else. That is a problem. It sounds like you need to learn to break down your work into smaller, more manageable chunks. The problems with large commits are: In a multi-person project, a greater chance that your commits will cause conflicts for other developers to resolve. It is harder to accurately describe what has been done in log messages. It is harder to track the order that changes were made, and hence to understand the cause of problems. It increases the probability of losing a lot of uncommitted work. Sometimes large commits are unavoidable; e.g. if you have to change a major API. But that's not normally the case. And if you do find yourself in this situation, it is probably a good idea to create a branch and do your work in there ... with lots of small commits ... and reintegrate when you are finished. (Another case is when you do an initial import, but that's NOT problematical from the perspective of the issues listed above.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/10793",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/66/"
]
} |
10,816 | I have just completed a course on computability and logic, which was an interesting course. The lecturer recommended a few books on his slides, which include "Gödel, Escher, Bach". I can see the book is quite famous, and looks very interesting. But I have a few questions to ask regarding its content. Is the content still valid today? I guess most theoretical stuff doesn't change overnight, but are there any major points which no longer hold today that I should be aware of? I assume we actually HAVE made some progress in the last 30 years or so. Can any of you recommend a book on the subject which includes this progress (logic, AI, computability)? Another question: Do I have to know about Escher and Bach? | Is the content still valid today? I guess most theoretical stuff doesn't change overnight, but are there any major points which no longer hold today that I should be aware of? The content is logic and math. It doesn't change in any substantial way, not just overnight. It will be valid forever. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/10816",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4954/"
]
} |
10,849 | I'm working with a new team that has historically not done ANY unit testing. My goal is for the team to eventually employ TDD (Test Driven Development) as their natural process. But since TDD is such a radical mind shift for a non-unit-testing team, I thought I would just start off with writing unit tests after coding. Has anyone been in a similar situation? What's an effective way to get a team to be comfortable with TDD when they've not done any unit testing? Does it make sense to do this in a couple of steps? Or should we dive right in and face all the growing pains at once? EDIT Just for clarification, there is no one on the team (other than myself) who has ANY unit testing exposure/experience. And we are planning on using the unit testing functionality built into Visual Studio. | Practice on existing bugs/defects. This is a really tough situation. I've never gone all the way to TDD from nothing before, but in my experience, getting a team to go from no unit tests to proactively writing them has required a very "one step at a time" approach. First, get them comfortable writing unit tests and really knowing what they are and their benefits. For my teams, it's been best to write unit tests for existing bugs. Current bugs in systems have two things that you need to teach people to write unit tests well:
"source": [
"https://softwareengineering.stackexchange.com/questions/10849",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/101/"
]
} |
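
A hedged sketch of the "write a failing test for an existing bug first" step described above, using Java and JUnit purely for illustration (the team in the question would use Visual Studio's test framework instead, and OrderTotals/applyDiscount are hypothetical names):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Hypothetical production code with a reported bug: a 10% discount is
// applied twice for orders over 100.
class OrderTotals {
    static double applyDiscount(double amount) {
        double discounted = amount > 100 ? amount * 0.9 : amount;
        return discounted > 100 ? discounted * 0.9 : discounted; // the bug
    }
}

public class OrderTotalsTest {
    // Written BEFORE the fix: it pins down the expected behaviour and fails,
    // reproducing the bug. Once the duplicate discount is removed, it passes.
    @Test
    public void discountIsAppliedExactlyOnce() {
        assertEquals(180.0, OrderTotals.applyDiscount(200.0), 0.001);
    }
}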
10,989 | In a debate with Andrew Tanenbaum over microkernel vs. monolithic operating system architecture, Linus Torvalds said, Portability is for people who cannot write new programs. What did he mean by that? | As Linus writes in the debate, it's with tongue in cheek (i.e. not to be taken too seriously). Then, he goes on to explain that while portability is a good thing, it's also a trade-off; unportable code can be much simpler. That is, instead of making the code perfectly portable, just make it simple and portable enough ("adhere to a portable API"), and then if it needs to be ported, rewrite it as needed. Making code perfectly portable can also be seen as a form of premature optimization - often doing more harm than good. Of course that's not possible if you can't write new programs and have to stick with the original one :) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/10989",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4522/"
]
} |
11,007 | Most of my fellow students that I've talked to claim that aiming for good grades is useless, as companies don't care about them when hiring programmers. To them, it's enough to have simply attended the courses that may be important, and that's that. Is this true? Are university grades useless when leaving campus, or do employers ask to see them for an interview? | Incorrect. Grades are important, especially if you have little or no professional programming experience. They're the bulk of your resume until you have professional experience. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/11007",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2210/"
]
} |
11,120 | The company I am leaving has asked that I make myself available to answer questions and/or debug programs occasionally, should the need arise. I'm not opposed to this. After searching Google for some kind of standard contract for this sort of thing, I didn't see any. Is there a standard contract for this sort of thing that you use? Are there any other steps I should take to ensure this kind of arrangement works smoothly? | You are in a good position here, as your old company has asked you for help. Take the following steps: Get the agreement of your new employer. Decide on how much time you are prepared to spend on this and when you want to spend that time. Pick a sensible hourly rate - ask a recruitment agency in your area what the average is and charge that. Agree on how much notice your old employer must give you for a request for work. Agree when and how your old employer can contact you. You don't want them ringing you at your new employer, so e-mail conversations are probably best. Be prepared to negotiate - while you are in a good position, if you ask for too high a rate (for example) they might suddenly find that they have the skills in-house after all. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/11120",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5040/"
]
} |
11,121 | Often some element of a path is variable, but the path as a whole needs to be documented in some manner for later programmers. Especially when using *nix systems, almost any character is a valid symbol for the path. Given that, I would like to delimit the variable portions of my path to prevent misunderstanding, but in a way that also survives best across different display environments (especially the browser). Methods I have seen include (example path in users home directory): /home/<username>/foo - needs special escape for web browser context /home/your_username/foo - unfortunately the variable element tends to be overlooked /home/{username}/foo /home/:username/foo Which have you seen most often or had the most success with and why? If a double delimiter method (which seems to be the most common/successful), what lead your choice of delimiters? | | {
"source": [
"https://softwareengineering.stackexchange.com/questions/11121",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5038/"
]
} |
11,188 | I believe that an agile approach is best for projects where the requirements are fuzzy and a lot of interaction is required to help shape the end user's ideas. However... In my professional work, I keep ending up at companies where an "agile" approach is used as an excuse as to why no effort was put into an up front design; when the requirements are well understood. I can't help but thinking that if the agile approach wasn't around, I'd be sitting here with a nice high-level specification and not having to revisit the same screen and functionality every second day when something else crops up or so and so hadn't thought of that. Are the benefits of agile methodologies really enough to outweigh the excuse for being lame it gives to cowboy technical leads? Update: Ironically I'm now a certified Scrum Master. One of the papers presented on the Scrum course observed that the best development process was one where there was a single expert or guru making the design decisions, however that has obvious weaknesses. Scrum shifts the responsibility for producing quality software to the "Team" which means a sub-standard team can get away with churning out spaghetti which I guess is no different to other Agile and non-Agile development processes. | I believe if you're using Agile development as an excuse for cowboy-style programming, then you're not really following Agile development. Cowboys will always be cowboys, no matter what process you give them. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/11188",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5066/"
]
} |
11,312 | I've been tasked with creating a fun and relaxing environment, one thing I know that I want is ergonomic mice and keyboards, others have suggested exercise balls and bands. What is it that every programmer needs while working? What might not be necessary but would be nice to have anyway? Note: this question was asked previously, but has been recommended to be posted here. See this link for the previous responses: https://stackoverflow.com/questions/3911911/stuff-every-programmer-needs-while-working-closed | Dual monitors | {
"source": [
"https://softwareengineering.stackexchange.com/questions/11312",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5143/"
]
} |
11,512 | I am curious if anyone knows of any methodologies that are significantly different (not a recombination) and I would especially appreciate anyone who brought forward any experience with alternatives. | Wikipedia lists these as methodologies/development processes : Agile - based on iterative and incremental development, where requirements and solutions evolve through collaboration between self-organizing, cross-functional teams. Cleanroom - the focus of the Cleanroom process is on defect prevention, rather than defect removal. Iterative - a cyclic software development process developed in response to the weaknesses of the waterfall model. It starts with an initial planning and ends with deployment with the cyclic interactions in between. RAD - uses minimal planning in favor of rapid prototyping. The "planning" of software developed using RAD is interleaved with writing the software itself. RUP - The Rational Unified Process (RUP) is an adaptable iterative software development process framework, intended to be tailored by selecting the elements of the process that are appropriate. Spiral - combining elements of both design and prototyping-in-stages, in an effort to combine advantages of top-down and bottom-up concepts. This model of development combines the features of the prototyping model and the waterfall model. Waterfall - sequential through the phases of Conception, Initiation, Analysis, Design, Construction, Testing and Maintenance. Lean - a translation of Lean manufacturing and Lean IT principles and practices to the software development domain; everything not adding value to the customer is considered to be waste. V-Model - Instead of moving down in a linear way, the process steps are bent upwards after the coding phase, to form the typical V shape. The V-Model demonstrates the relationships between each phase of the development life cycle and its associated phase of testing. TDD - relies on the repetition of a very short development cycle: first the developer writes a failing automated test case that defines a desired improvement or new function, then produces code to pass that test and finally refactors the new code to acceptable standards. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/11512",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3792/"
]
} |
11,523 | I often have to explain technical things and technical decisions to my extremely non technical manager and I'm pretty awful at it. What are good ways to essential dumb things down for the rest of the world who don't have a passion for programming? Example questions I've been asked: Why are you using Django instead of Java (Didn't accept that it was cheaper either) Asking me to rephrase things in non technical words, my sentence was "Certain HTML tags are not allowed". How can I possibly dumb that down? Other stuff that makes perfect sense to me, but is just so basic I don't know how to explain it Why this, why that, why everything! Also, how do I tell my manager to look the basic stuff up on Google, like "What is Pylons?" | I tend to use analogies. Take whatever the topic is, and think of something completely non-technical that they would understand, and explain it to them that way. Best example I can think of offhand is if I need to explain object orientation, I'll explain it using a deck of cards. Or, when I was trying to explain the idea of wireless internet to my great aunt (who's never used a computer), I used cordless phones to explain it. I've yet to come across any topic I can't dumb-down this way. Update I see this continues to get upvoted, so here's some of how I'd explain OOP with a deck of cards: A card is essentially a copy of the same object, a piece of stiff paper. Each card has a set of properties (value [A-K], suit, face up/down), which may or may not be unique. A card can be used in many different ways, without altering anything about the card (held in a hand, put in a deck, played on the field, etc.) If you want to get into interfaces: A card must conform to certain standards, such as size and shape. If you do something to one card, that doesn't affect any other card. A deck is a "container" object, which holds <= 52 card instances. The deck can have various operations done on it, such as shuffle, show the top card, draw 5, etc. The deck doesn't need to know or care about a card's value/suit, only that it is a card. A hand is another object, with a certain number of cards, and its own set of operations (play, add, remove, sort) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/11523",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3382/"
]
} |
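The card analogy in the answer above maps directly onto a few small classes. The Java sketch below is only an illustration of that mapping — the class, field, and method names are assumptions chosen to mirror the analogy, not anything from the original answer.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Each card is an instance of the same "stiff paper" template.
class Card {
    enum Suit { CLUBS, DIAMONDS, HEARTS, SPADES }

    private final Suit suit;   // property: suit
    private final int value;   // property: 1 (ace) .. 13 (king)
    private boolean faceUp;    // property: face up / face down

    Card(Suit suit, int value) {
        this.suit = suit;
        this.value = value;
        this.faceUp = false;
    }

    // Doing something to one card does not affect any other card.
    void flip() { faceUp = !faceUp; }

    @Override
    public String toString() {
        return value + " of " + suit + (faceUp ? " (face up)" : " (face down)");
    }
}

// The deck is a container object; it doesn't care about a card's value or suit,
// only that it holds cards and can shuffle, draw, and count them.
class Deck {
    private final List<Card> cards = new ArrayList<>();

    Deck() {
        for (Card.Suit s : Card.Suit.values()) {
            for (int v = 1; v <= 13; v++) {
                cards.add(new Card(s, v));
            }
        }
    }

    void shuffle() { Collections.shuffle(cards); }

    Card draw() { return cards.remove(cards.size() - 1); }  // take the top card

    int size() { return cards.size(); }
}

public class CardDemo {
    public static void main(String[] args) {
        Deck deck = new Deck();
        deck.shuffle();
        Card top = deck.draw();
        top.flip();
        System.out.println("Drew: " + top + ", " + deck.size() + " cards left.");
    }
}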
11,846 | I’m a long time developer (I’m 49) but rather new to object oriented development. I’ve been reading about OO since Bertrand Meyer’s Eiffel, but have done really little OO programming. The point is every book on OO design starts with an example of a boat, car or whatever common object we use very often, and they start adding attributes and methods, and explaining how they model the state of the object and what can be done with it. So they usually go something like "the better the model the better it represents the object in the application and the better it all comes out". So far so good, but, on the other hand, I’ve found several authors that give recipes such as “a class should fit in just a single page” (I would add “on what monitor size?" now that we try not to print code!). Take for example a PurchaseOrder class, that has a finite state machine controlling its behavior and a collection of PurchaseOrderItem , one of the arguments here at work is that we should use a PurchaseOrder simple class, with some methods (little more than a data class), and have a PurchaseOrderFSM “expert class” that handles the finite state machine for the PurchaseOrder . I would say that falls in the “Feature Envy” or “Inappropriate Intimacy” classification of Jeff Atwood's Code Smells post on Coding Horror. I’d just call it common sense. If I can issue, approve or cancel my real purchase order, then the PurchaseOrder class should have issuePO , approvePO and cancelPO methods. Doesn’t that goes with the “maximize cohesion” and “minimize coupling” age old principles that I understand as cornerstones of OO? Besides, doesn’t that helps toward the maintainability of the class? | A class should use the Single Responsibility Principle. Most very large classes I have seen do to many things which is why they are too large. Look at each method and code decide should it be in this class or separate, duplicate code is a hint. You might have an issuePO method but does it contain 10 lines of data access code for example? That code probably shouldn't be there. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/11846",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5332/"
]
} |
11,856 | I was involved in a programming discussion today where I made some statements that basically assumed axiomatically that circular references (between modules, classes, whatever) are generally bad. Once I got through with my pitch, my coworker asked, "what's wrong with circular references?" I've got strong feelings on this, but it's hard for me to verbalize concisely and concretely. Any explanation that I may come up with tends to rely on other items that I too consider axioms ("can't use in isolation, so can't test", "unknown/undefined behavior as state mutates in the participating objects", etc.), but I'd love to hear a concise reason for why circular references are bad that don't take the kinds of leaps of faith that my own brain does, having spent many hours over the years untangling them to understand, fix, and extend various bits of code. Edit: I am not asking about homogenous circular references, like those in a doubly-linked list or pointer-to-parent. This question is really asking about "larger scope" circular references, like libA calling libB which calls back to libA. Substitute 'module' for 'lib' if you like. Thanks for all of the answers so far! | There are a great many things wrong with circular references: Circular class references create high coupling ; both classes must be recompiled every time either of them is changed. Circular assembly references prevent static linking , because B depends on A but A cannot be assembled until B is complete. Circular object references can crash naïve recursive algorithms (such as serializers, visitors and pretty-printers) with stack overflows. The more advanced algorithms will have cycle detection and will merely fail with a more descriptive exception/error message. Circular object references also make dependency injection impossible , significantly reducing the testability of your system. Objects with a very large number of circular references are often God Objects . Even if they are not, they have a tendency to lead to Spaghetti Code . Circular entity references (especially in databases, but also in domain models) prevent the use of non-nullability constraints , which may eventually lead to data corruption or at least inconsistency. Circular references in general are simply confusing and drastically increase the cognitive load when attempting to understand how a program functions. Please, think of the children; avoid circular references whenever you can. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/11856",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2129/"
]
} |
12,070 | If applying for a new job, would you put your Stack Overflow profile link on your resume? This would show the employer you're an active member of the development community and also provide an insight into your knowledge + how well you convey your ideas. However it would feel a bit gimmicky to me? | It Depends When I was looking for a job a month ago, I didn't put a link to SO on my resume, but I did mention that I participate on SO and added a link to my blog that contains the SO "flair" on the About page. At that point I had about 3000 rep. I wouldn't try to leverage rep, but I would leverage intelligent participation. If you act like a moron on SO and draw attention to it, that's obviously a bad move. But if you say "hey, I participate on this dedicated Q&A site" and you have been giving good answers and asking smart questions, it can work in your favour as it shows passion for your work and fellow developers. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/12070",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4630/"
]
} |
12,182 | I've seen a lot of other framework/library developers throw the phrase 'we write opinionated software' around, but in practical terms, what does that really mean? Does it mean that the author of the 'Opinionated Framework X' says that because they write code a certain way, you should be writing the same type of code that they write? Isn't that a bit pretentious? | The framework imposes a certain way of working on you. Put another way, there's clearly one right way of using the framework which is nice and easy, and any other way of using the framework makes your life difficult. I'm no Rails expert, but I'm told that it's opinionated because it's awesome for simple CRUD stuff, but when you try deviate from the "Rails way" things get tough. (This isn't necessarily a bad thing; I don't mean it as criticism.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/12182",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4385/"
]
} |
12,189 | I am looking into learning Python for web development. Assuming I already have some basic web development experience with Java (JSP/Servlets), I'm already familiar with web design (HTML, CSS, JS), basic programming concepts and that I am completely new to Python, how do I go about learning Python in a structured manner that will eventually lead me to web development with Python and Django? I'm not in a hurry to make web applications in Python so I really want to learn it thoroughly so as not to leave any gaps in my knowledge of the technologies involving web development in Python. Are there any books, resource or techniques to help me in my endeavor? In what order should I do/read them? UPDATE: When I say learning in a structured manner, I mean starting out from the basics then learning the advanced stuff without leaving some of the important details/features that Python has to offer. I want to know how to apply the things that I already know in programming to Python. | First learn Python well Here are some online resources for learning Python The Python Tutorial Wiki-Book Byte of Python Building Skills in Python Version 2.5 Python Free Online Ebooks Python Bibliotheca Think Python Data Structures and Algorithms in Python How to Think Like a Computer Scientist: Learning with Python Python for Fun Invent Your Own Computer Games With Python Learn Python The Hard Way Thinking in Python Snake Wrangling For Kids For Django you can refer The Django book What I suggest is The Python Tutorial Wiki-Book The Django Book Also check out this video | {
"source": [
"https://softwareengineering.stackexchange.com/questions/12189",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/500/"
]
} |
12,279 | How necessary or important is it? I try to keep a running list of blogs or sites to follow, but a lot of the time I pull up someone's profile and notice there isn't anything there . Is it really important? I understand are different levels of programming (from C/C++ system programmers to Rails and even Haskell and J) and not everyone works in a language easily worked with for web based applications. Not everything is web-centric, however with the advent of many popular and sometimes free services I don't think it's unreasonable to expect a majority of programmers to have a personal site. | To what end? Why would you want one? Some people use it as a way to sell themselves to employers. Others do it to help enrich the community and share information. Maybe both. I think you need to ask why you want a website before you determine how important it is. If you want to sell yourself to employers, then yes a site or blog can definitely be of help. If you want to share something with the community then yes, a blog is a great way to do that. But you should only contribute if you want to contribute. Don't feel compelled to have a blog just because all the other "good developers" have one. If you like to write and have something good to say, go for it. Otherwise don't bother. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/12279",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2654/"
]
} |
12,401 | [Disclaimer: this question is subjective, but I would prefer getting answers backed by facts and/or reflexions] I think everyone knows about the Robustness Principle , usually summed up by Postel's Law: Be conservative in what you send; be liberal in what you accept. I would agree that for the design of a widespread communication protocol this may make sense (with the goal of allowing easy extension), however I have always thought that its application to HTML / CSS was a total failure, each browser implementing its own silent tweak detection / behavior, making it near impossible to obtain a consistent rendering across multiple browsers. I do notice though that there the RFC of the TCP protocol deems "Silent Failure" acceptable unless otherwise specified... which is an interesting behavior, to say the least. There are other examples of the application of this principle throughout the software trade that regularly pop up because they have bitten developpers, from the top off my head: Javascript semi-colon insertion C (silent) builtin conversions (which would not be so bad if it did not truncated...) and there are tools to help implement "smart" behavior: name matching phonetic algorithms ( Double Metaphone ) string distances algorithms ( Levenshtein distance ) However I find that this approach, while it may be helpful when dealing with non-technical users or to help users in the process of error recovery, has some drawbacks when applied to the design of library/classes interface: it is somewhat subjective whether the algorithm guesses "right", and thus it may go against the Principle of Least Astonishment it makes the implementation more difficult, thus more chances to introduce bugs (violation of YAGNI ?) it makes the behavior more susceptible to change, as any modification of the "guess" routine may break old programs, nearly excluding refactoring possibilities... from the start! And this is what led me to the following question: When designing an interface (library, class, message), do you lean toward the robustness principle or not ? I myself tend to be quite strict, using extensive input validation on my interfaces, and I was wondering if I was perhaps too strict. | I would say robustness when it doesn't introduce ambiguities . For example:
When parsing a comma separated list, whether or not there's a space before/after the comma doesn't change the semantic meaning. When parsing a string guid it should accept any number of the common formats (with or without dashes, with or without surrounding curly braces). Most programming languages are robust with white space usage. Specifically everywhere that it doesn't affect the meaning of code. Even in Python where whitespace is relevant, it's still flexible when you're inside of a list or dictionary declaration. I definitely agree that if something can be interpreted multiple ways or if it's not 100% clear what was meant then too much robustness can end up being a pain though, but there's much room for robustness without being ambiguous. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/12401",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4853/"
]
} |
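The GUID example in the answer above is easy to make concrete: accept the common decorations (dashes, surrounding braces) that carry no meaning, but stay strict about the 32 hex digits that do. The Java sketch below is one possible, assumed implementation; the class and method names are not from the original answer.

import java.util.UUID;

public class LenientGuid {

    // Accepts "{xxxxxxxx-xxxx-...}", "xxxxxxxx-xxxx-...", or 32 bare hex digits.
    // Anything else is rejected with an exception rather than guessed at.
    public static UUID parse(String input) {
        String s = input.trim();

        // Be liberal about the decorations that don't change the meaning...
        if (s.startsWith("{") && s.endsWith("}")) {
            s = s.substring(1, s.length() - 1);
        }
        String hex = s.replace("-", "");

        // ...but strict about the part that does.
        if (!hex.matches("[0-9a-fA-F]{32}")) {
            throw new IllegalArgumentException("Not a GUID: " + input);
        }

        // Re-insert the canonical dashes so UUID.fromString can do the rest.
        String canonical = hex.substring(0, 8) + "-" + hex.substring(8, 12) + "-"
                + hex.substring(12, 16) + "-" + hex.substring(16, 20) + "-"
                + hex.substring(20);
        return UUID.fromString(canonical);
    }

    public static void main(String[] args) {
        System.out.println(parse("{123e4567-e89b-12d3-a456-426614174000}"));
        System.out.println(parse("123e4567e89b12d3a456426614174000"));
    }
}

The design choice mirrors the answer: robustness only where no ambiguity is introduced, and a hard failure everywhere else.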
12,423 | I have heard this time and again and I am trying to understand and validate the idea that FP and OO are orthogonal. First of all, what does it mean for 2 concepts to be orthogonal ? FP encourages immutability and purity as much as possible.
and OO seems like something that is built for state and mutation(a slightly organized version of imperative programming?). And I do realize that objects can be immutable. But OO seems to imply state/change to me. They seem like opposites. Does that meant they are orthogonal ? A language like Scala makes it easy to do OO and FP both, does this affect the orthogonality of the 2 methods ? | The term "orthogonal" comes from maths, where it has a synonym: "perpendicular". In this context, you could understand it as "the two things have nothing to do with each other." When people compare FP and OO they often confuse two separate axes. On the one hand you have functional programming versus imperative programming. Jonas gives a good comparison of the two. The one-sentence version says that "data flow versus control flow". The other axis is data abstraction. Languages like Haskell use abstract data types to, well, abstract data. Smalltalk uses objects, which fuse the data and operations on that data into a single unit. William Cook explains better than I could in his paper On Understanding Data Abstraction, Revisited . It's perfectly understandable that most people end up thinking that FP and OO are opposites: most OO languages are imperative, so if you compare, say, Haskell and Java, you have data flow + ADT versus control flow + object. But there are other possibilities! Matthias Felleisen explains how to happily marry FP and OO in his talk Functional Objects . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/12423",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2966/"
]
} |
12,439 | It's been generally accepted in the OO community that one should "favor composition over inheritance". On the other hand, inheritance does provide both polymorphism and a straightforward, terse way of delegating everything to a base class unless explicitly overridden and is therefore extremely convenient and useful. Delegation can often (though not always) be verbose and brittle. The most obvious and IMHO surest sign of inheritance abuse is violation of the Liskov Substitution Principle . What are some other signs that inheritance is The Wrong Tool for the Job even if it seems convenient? | When inheriting just to get functionality, you're probably abusing inheritance. This leads to the creation of a God Object . Inheritance itself is not a problem as long as you see a real relation between the classes (like the classic examples, such as Dog extends Animal) and you're not putting methods on the parent class that doesn't make sense on some of it's children (for example, adding a Bark() method in the Animal class wouldn't make any sense in a Fish class that extends Animal). If a class needs functionality, use composition (perhaps injecting the functionality provider into the constructor?). If a class needs TO BE like other, then use inheritance. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/12439",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1468/"
]
} |
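The last two sentences of the answer above can be sketched in a few lines: a true "is-a" relationship points to inheritance, while merely needing functionality points to composition with the provider injected through the constructor. The Java below is illustrative only; the class names are assumptions.

// "Is-a": a Dog really is an Animal, so inheritance fits.
abstract class Animal {
    abstract String makeSound();
}

class Dog extends Animal {
    @Override
    String makeSound() { return "Woof"; }
}

// "Needs": a ReportService needs logging, but it is not a Logger,
// so the functionality is composed in rather than inherited.
interface Logger {
    void log(String message);
}

class ConsoleLogger implements Logger {
    @Override
    public void log(String message) { System.out.println("[log] " + message); }
}

class ReportService {
    private final Logger logger;   // functionality provider injected via the constructor

    ReportService(Logger logger) { this.logger = logger; }

    void run() { logger.log("report generated"); }
}

public class CompositionDemo {
    public static void main(String[] args) {
        System.out.println(new Dog().makeSound());
        new ReportService(new ConsoleLogger()).run();
    }
}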
12,589 | When choosing what we want to study, and do with our careers and lives, we all have some expectations of what it is going to be like. Now that I've been in the industry for almost a decade, I've been reflecting a bit on what I thought (back when I was studying Computer Science) programming working life was going to be like, and how it's actually turning out to be. My two biggest shocks (or should I say, broken expectations) by far are the sheer amount of maintenance work involved in software, and the overall lack of professionalism: Maintenance : At uni, we were all told that the majority of software work is maintenance of existing systems. So I knew to expect this in the abstract. But I never imagined exactly how overwhelming this would turn out to be. Perhaps it's something I mentally glazed over, and hoped I'd be building cool new stuff from scratch a lot more. But it really is the case that most jobs are overwhelmingly maintenance, bug fixing, and support oriented. Lack of professionalism : At uni, I always had the impression that commercial software work is very process-oriented and stringently engineered. I had images of ISO processes, reams of technical documentation, every feature and bug being strictly documented, and a generally professional environment. It came as a huge shock to realise that most software companies operate no differently to a team of students working on a large semester-long project. And I've worked in both the small agile hack shop, and the medium sized corporate enterprise. While I wouldn't say that it's always been outright "unprofessional", it definitely feels like the software industry (on the whole) is far from the strong engineering discipline that I expected it to be. Has anyone else had similar experiences to this? What are the ways in which your expectations of what our profession would be like were different to the reality? | I feel you man. I just graduated little over a year ago in fact, jumped on the first job offer that came my way and got the biggest shock of my life. Things I didn't expect: School stress and Work stress aren't the same - The stress of working on a school project with friends, or working solo, even with that looming thesis deadline or special project defense does not compare to the stress of seemingly unreasonable work deadlines, communication problems, (a little of office politics) and crunch times. Lack of Best Practices - Same as your experience on professionalism. Before taking my first job and during my training period, I rushed off reviewing and reading about best practices in both programming and software engineering. These aren't followed as well as they should for impractical and, to be fair, practical reasons. And sometimes, your knowledge counts very little against others who are merely afraid of the unknown and treat these practices with disdain. What they taught in school was just the tip of the iceberg - Thinking that what I learned self-studying and from classes was enough to get me through, I was shocked to say the least as I stared dumbfounded at the first piece of code I was supposed to maintain. A lot of the skills I use now were learned on the job or during my job that I keep on wondering if I could've made it without a college degree at all. XD The Importance of Communication - Made me realize what all those English classes were for. 
Before the real world, I could not see the relevance of having three to four different English classes in college when it's been taught since we were in the first grade. You're no use in your job when you can talk to a computer but fail to talk to people. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/12589",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5064/"
]
} |
12,777 | I've heard it said that the inclusion of null references in programming languages is the "billion dollar mistake". But why? Sure, they can cause NullReferenceExceptions, but so what? Any element of the language can be a source of errors if used improperly. And what's the alternative? I suppose instead of saying this: Customer c = Customer.GetByLastName("Goodman"); // returns null if not found
if (c != null)
{
Console.WriteLine(c.FirstName + " " + c.LastName + " is awesome!");
}
else { Console.WriteLine("There was no customer named Goodman. How lame!"); } You could say this: if (Customer.ExistsWithLastName("Goodman"))
{
Customer c = Customer.GetByLastName("Goodman"); // throws error if not found
Console.WriteLine(c.FirstName + " " + c.LastName + " is awesome!");
}
else { Console.WriteLine("There was no customer named Goodman. How lame!"); } But how is that better? Either way, if you forget to check that the customer exists, you get an exception. I suppose that a CustomerNotFoundException is a bit easier to debug than a NullReferenceException by virtue of being more descriptive. Is that all there is to it? | null is evil There is a presentation on InfoQ on this topic: Null References: The Billion Dollar Mistake by Tony Hoare Option type The alternative from functional programming is using an Option type , that can contain SOME value or NONE . A good article The “Option” Pattern that discuss the Option type and provide an implementation of it for Java. I have also found a bug-report for Java about this issue: Add Nice Option types to Java to prevent NullPointerExceptions . The requested feature was introduced in Java 8. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/12777",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/151/"
]
} |
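Since the answer above points at the Option pattern and notes that Java 8 ships it as java.util.Optional, a short sketch of the question's customer lookup may help. The Customer and repository classes here are assumptions made up for illustration, not part of the original answer.

import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

class Customer {
    final String firstName;
    final String lastName;

    Customer(String firstName, String lastName) {
        this.firstName = firstName;
        this.lastName = lastName;
    }
}

class CustomerRepository {
    private final Map<String, Customer> byLastName = new HashMap<>();

    void add(Customer c) { byLastName.put(c.lastName, c); }

    // The return type itself says "there may be no such customer";
    // the caller cannot forget that case by accident.
    Optional<Customer> findByLastName(String lastName) {
        return Optional.ofNullable(byLastName.get(lastName));
    }
}

public class OptionalDemo {
    public static void main(String[] args) {
        CustomerRepository repo = new CustomerRepository();
        repo.add(new Customer("John", "Goodman"));

        String message = repo.findByLastName("Goodman")
                .map(c -> c.firstName + " " + c.lastName + " is awesome!")
                .orElse("There was no customer named Goodman. How lame!");
        System.out.println(message);
    }
}

The difference from the nullable version in the question is that the "not found" branch is part of the type, not something the compiler lets you skip.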
12,807 | There is a colleague of mine who constantly writes: if (someBool == true) It drives me up the wall! Should I make a big deal of it or just drop it? | It's only redundant code, not life or death. However.... If it's happening a lot, it could be a problem with how someBool is being named. A good name can go a long way towards eliminating the need for the ==true if(IsSomeCondition) or if(hasCondition) or if(somethingExists) for example. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/12807",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4526/"
]
} |
12,933 | I was wondering what is the best work rhythm for the job a programmer does? I am coding all day long and sometimes I get stuck in a problem and it keeps me occupied a few hours before I realize that maybe I need a break. Some say frequent and short breaks help you, but sometimes when I am focused on a problem I feel like a break would not help, but rather make me lose my focus. So how often should a break be taken, and how long? The more basic question regarding this issue comes from the fact that you can get tons of "good ideas" ('pomodoro' for instance) on the net that promise you will be more effective in whatever you do. Are these principles good, or is this something everybody should decide for himself?
I wonder if any of them can accomplish what they promise! I mean, what they promise is that (if the conditions are met) it works for everybody. Are there really such principles? And if there are, what are they and how can we find them? | The most important thing for good programming is good sleep. It does not matter what time you program at or for how long, or whether you drink caffeine-loaded drinks or munch on chocolate all the time. What matters is that you have a good, long, peaceful sleep every few days. As someone with a young child I can tell you for a fact that lack of deep rest saps your ability to be creative. There have been studies showing that the artists who are the most creative require the most sleep, and workers in non-demanding repetitive jobs the least. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/12933",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5705/"
]
} |
13,091 | Which of these is better for maintainability? if (byteArrayVariable != null)
if (byteArrayVariable.Length != 0)
//Do something with byteArrayVariable OR if ((byteArrayVariable != null) && (byteArrayVariable.Length != 0))
//Do something with byteArrayVariable I prefer reading and writing the second, but I recall reading in code complete that doing things like that is bad for maintainability. This is because you are relying on the language to not evaluate the second part of the if if the first part is false and not all languages do that. (The second part will throw an exception if evaluated with a null byteArrayVariable .) I don't know if that is really something to worry about or not, and I would like general feedback on the question. Thanks. | I think the second form is fine, and also more clearly represents what you're trying to do. You say that... I recall reading in code complete that doing things like that is bad for maintainability. This is because you are relying on the language to not evaluate the second part of the if if the first part is false and not all languages do that. It doesn't matter if all languages do that. You're writing in one particular language, so it only matters if that language does that. Otherwise, you're essentially saying that you shouldn't use the features of a particular language because other languages might not support those features, and that's just silly. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/13091",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/71/"
]
} |
13,470 | I'm thinking about leaving my current employer and wondering how one goes about that process in the programming world. We have a lot of projects on the go at the moment and I'm the only developer. We have 4-5 projects that are fairly big and need to be done in the next few months and even a few longer than that. I'm leaving because I'm the only employee and that's doing me no good. I'm young and want to learn, so a team would be nice. It's also too much work and the company is managed horribly. I have no contract to worry about so I could theoretically quit and just not come back without notice. Just wondering how this is normally handled. Should I write a resignation letter How much notice should I give Should I give a reason for leaving Should I go to my boss who is the main reason I'm leaving or go to his boss? Overview of replies From the feedback here, it looks as though it's best to give 2-4 weeks of notice, and present a written resignation. Don't go into detail explaining why you're leaving in most cases. Don't burn bridges. Be professional. | Assuming that you've made your decision to leave, you should put it in writing. Whether this is an actual letter, an e-mail or a form you fill out will depend on the company and culture, but it should be written down and not a phone call, text message or even just face to face. If you do one of the latter things it's only polite to follow it up in writing. The amount of notice should be in your contract - assuming you have one. Even if you didn't sign the contract you should abide by its terms. By working and getting paid you and the company are working to that contract even if it's not "official". If nothing else you'll be seen to be doing "the right thing" and it will be harder for your employer to get you to work longer. If they want you to leave straight away you still should get paid as though you were working. You don't need to give any reasons for your decision. You should leave all files etc. you've worked on so that they are accessible to your manager, co-workers and anyone who follows you. A short document explaining what's what would be polite. Don't delete anything. The files/data aren't yours they are your employers. Once you've made your leaving official you should then talk to managers, co-workers etc. about how you can handle the hand over of information. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/13470",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3382/"
]
} |
13,616 | I happened to know some system admin and according to him, testing guys are not given preferences in an organization in comparison to developers. No doubt software releases are not possible without testers but I've never laid my hands on testing so not much aware of it. No offense intended. | In my experience, unfortunately, they are often treated like second class employees and even worse a frivolous perk for programmers. It stems from a number of things: When the testers are doing their jobs correctly, it is easy for everyone but the programmers to forget they even exist. Much like a network admin, you only notice them when they are not doing their jobs, or doing them badly. So from the perspective of the rest of the organization, they are remembered only for their mistakes. It is mistakenly seen as an entry-level job for people who aspire to be programmers, but aren't qualified yet for those jobs. In fact, at one company I worked they were given Jr. Programmer job titles despite their pleas to get a Q&A job title. Even the fact that they were in a QA department wasn't enough to get HR to budge on that. Because of #2, it is assumed that testers are all entry-level folks, and should be paid accordingly. Nobody likes to be criticized, and it is all too common for defensive programmers to dislike testers because their jobs require them to point out programmer mistakes all day. As a manager, I was constantly on a PR mission to remind programmers that the QA team was there to make them look good, not narc them out. It tends to be a job people get into by accident and not choice, at least initially. I don't remember any degree plan at any of the schools I attended that prepared people for software Q&A. They do exist, but usually at the lower-end vocational schools, which only contributes to the idea that they are less skilled professionals. Testing jobs are much more likely than programming jobs to be sent offshore. At least the programmers can argue that it is more efficient to communicate design needs locally and that it is valuable to keep the knowledge of how the company's flagship app works inside the company. Testing, however, is much easier to modularize and thus easier to outsource. For all of the reasons above, testers tend to see the writing on the wall and move into other jobs (like programming), especially the really good ones. This means that most testing jobs tend to get staffed with more entry level people who haven't burned out on it yet or moved on to other things, which unfortunately reinforces several of the above ideas. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/13616",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1368/"
]
} |
13,623 | Suppose I give my developers a screaming fast machine. WPF-based VS2010 loads very quickly. The developer then creates a WPF or WPF/e application that runs fine on his box, but much slower in the real world. This question has two parts... 1) If I give a developer a slower machine, does that mean that the resulting code may be faster or more efficient? 2) What can I do to give my developers a fast IDE experience, while giving 'typical' runtime experiences? Update: For the record, I'm preparing my even-handed response to management. This isn't my idea, and you folks are helping me correct the misguided requests of my client. Thanks for giving me more ammunition, and references to where and when to approach this. I've +1'ed valid use cases such as: - specific server side programming optimizations - test labs - the possibly buying a better server instead of top of the line graphics cards | The answer is (I'll be bold and say) always NO . Develop on the best you can get with your budget, and test on the min-max spec range of equipment you'll deploy to. There's emulators, virtual machines, actual machines with testers that can all test performance to see if it's a factor. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/13623",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/175/"
]
} |
13,676 | I had many bosses, each one had a different approach about allowing or not use of Windows Live Messenger, Facebook, and many other Internet sites. Of course Internet is really needed to research about the best way to solve a specific task. Sometimes you could have a friend online, also a programmer, who knows better about something. For some managers, internet access would slow down project progress, and on the other hand, allow people to interact and find out brand new solutions. What would you do? | I wouldn't make it an issue unless it became a problem. I prefer to treat my employees as adults and assume they will act professionally unless there is evidence to the contrary. For example, if someone is continually missing deadlines without a good reason I might check in on them once in a while and if they are wasting time online, THEN I would deal with that individual as needed. Also, since none of my employees are paid hourly, I don't see the sense in policing every minute they spend at the office as long as they are getting their work done. The exception might be if they were doing something online that is otherwise problematic (porn, leaking company secrets, badmouthing the company publicly, etc.) For those things we would have specific policies against it and deal with infractions also on an individual basis. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/13676",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5934/"
]
} |
13,782 | I've worked at two companies, who each had a different methodology when it came to code reviews: In the first company, a code review was conducted by the team leaders and was required after the completion of every module. However, in the second company, team leaders weren't required to conduct any code reviews, and just checked for functionality and design issues. So I am confused. Is the code review process really needed? If it is, why? And if it isn't, why not? | I personally think that every piece of code should go through a code review, it doesn't matter if you are junior or senior developer. Why? For starters your title doesn't state anything about how you develop, and a senior developer could learn something from the junior. At our company we shift around so one of the other members of the team review your code...mostly we are teamed a "junior" and a senior together, so all the stuff that doesn't get said on a daily basis can be caught in a follow up. If the senior doesn't like the junior code he should listen to why the junior did as he did and look at it and see if that's a feasible solution that might be used in the future...it's a matter of getting wiser no matter who you are. One important thing about code review is not being too nice, if you are being a nice guy you'll just allow more and more messy code to evolve in the system. Just as yesterday I started reworking a complete application that a former employed juniordeveloper wrote, and my god that code could have needed a review before he left. I don't see why it should be the teamleader only doing reviews but it requires a person that's not affraid of picking a "fight" over a piece of poorly developed code, and it has to be a person that cares about how the code should be. Not all companies hire people that actually care about what they do, and those bad eggs should IMO not be allowed to do code reviews as they are likely just to shrug their shoulders and say "OK" to bad code. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/13782",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3993/"
]
} |
13,798 | Following reading the latest CodeProject newsletter, I came across this article on bitwise operations . It makes for interesting reading, and I can certainly see the benefit of checking if an integer is even or odd, but testing if the n-th bit is set? What can possibly be the advantages of this? | Bitwise operations are absolutely essential when programming hardware registers in embedded systems. For example every processor that I have ever used has one or more registers (usually a specific memory address) that control whether an interrupt is enabled or disabled. To allow an interrupt to fire the usual process is to set the enable bit for that one interrupt type while, most importantly, not modifying any of the other bits in the register. When an interrupt fires it typically sets a bit in a status register so that a single service routine can determine the precise reason for the interrupt. Testing the individual bits allows for a fast decode of the interrupt source. In many embedded systems the total RAM available may be 64, 128 or 256 BYTES (that is Bytes not kilobytes or megabytes) In this environment it is common to use one byte to store multiple data items, boolean flags etc. and then use bit operations to set and read these. I have, for a number of years been working with a satellite communications system where the message payload is 10.5 bytes. To make the best use of this data packet the information must be packed into the data block without leaving any unused bits between the fields. This means making extensive use of bitwise and shift operators to take the information values and pack them into the payload being transmitted. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/13798",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4630/"
]
} |
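The register manipulation described above would normally be written in C against volatile, memory-mapped hardware addresses; the Java sketch below only illustrates the mask arithmetic itself (set, clear, test a bit) on a plain int standing in for a status register. The bit positions and flag names are assumptions made up for illustration.

public class BitFlags {
    // Hypothetical bit assignments within a single status word.
    static final int UART_RX_READY  = 1 << 0;  // bit 0
    static final int UART_TX_EMPTY  = 1 << 1;  // bit 1
    static final int TIMER_OVERFLOW = 1 << 5;  // bit 5

    static int setBit(int reg, int mask)    { return reg | mask; }          // turn the bit on
    static int clearBit(int reg, int mask)  { return reg & ~mask; }         // turn the bit off
    static boolean isSet(int reg, int mask) { return (reg & mask) != 0; }   // test the bit

    public static void main(String[] args) {
        int status = 0;
        status = setBit(status, UART_RX_READY);    // only bit 0 changes
        status = setBit(status, TIMER_OVERFLOW);   // other bits are left untouched

        System.out.println("RX ready?       " + isSet(status, UART_RX_READY));
        System.out.println("TX empty?       " + isSet(status, UART_TX_EMPTY));
        System.out.println("Timer overflow? " + isSet(status, TIMER_OVERFLOW));

        status = clearBit(status, TIMER_OVERFLOW); // acknowledge that one source only
        System.out.println("After clear:    " + isSet(status, TIMER_OVERFLOW));
    }
}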
14,162 | I'm trying to decide if I need to reassess my defect-tracking process for my home-grown projects. For the last several years, I really just track defects using TODO tags in the code, and keep track of them in a specific view (I use Eclipse, which has a decent tagging system). Unfortunately, I'm starting to wonder if this system is unsustainable. The defects I find are typically associated with a snippet of code I'm working on; bugs which are not immediately understood tend to be forgotten, or ignored. I wrote an application for my wife which has had a severe defect for almost 9 months, and I keep forgetting to fix it. What mechanism do you use to track defects in your personal projects? Do you have a specific system, or a process for prioritizing and managing them? | FogBugz (free individual license) if it's a longish project, or a simple to-do list (using Google Tasks) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/14162",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2294/"
]
} |
14,297 | I attended a software craftsmanship event a couple of weeks ago and one of the comments made was "I'm sure we all recognize bad code when we see it" and everyone nodded sagely without further discussion. This sort of thing always worries me as there's that truism that everyone thinks they're an above average driver. Although I think I can recognize bad code I'd love to learn more about what other people consider to be code smells as it's rarely discussed in detail on people's blogs and only in a handful of books. In particular I think it'd be interesting to hear about anything that's a code smell in one language but not another. I'll start off with an easy one: Code in source control that has a high proportion of commented out
code - why is it there? was it meant
to be deleted? is it a half finished
piece of work? maybe it shouldn't have
been commented out and was only done
when someone was testing something
out? Personally I find this sort of
thing really annoying even if it's just the odd line here and there, but when you see large blocks interspersed with the rest of the code it's totally unacceptable. It's
also usually an indication that the rest of
the code is likely to be of dubious
quality as well. | /* Fuck this error */ Typically found inside a nonsense try..catch block, it tends to grab my attention. Just about as well as /* Not sure what this does, but removing it breaks the build */ . A couple more things: Multiple nested complex if statements Try-catch blocks that are used to determine a logic flow on a regular basis Functions with generic names process , data , change , rework , modify Six or seven different bracing styles in 100 lines One I just found: /* Stupid database */
$conn = null;
while(!$conn) {
$conn = mysql_connect("localhost", "root", "[pass removed]");
}
/* Finally! */
echo("Connected successfully."); Right, because having to brute force your MySQL connections is the right way do things. Turns out the database was having issues with the number of connections so they kept timing out. Instead of debugging this, they simply attempted again and again until it worked. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/14297",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1362/"
]
} |
14,335 | Similar to the question I read on Server Fault, what is the biggest mistake you've ever made in an IT related position. Some examples from friends: I needed to do some work on a production site so I decided to copy over the live database to the beta site. Pretty standard, but when I went to the beta site it was still pulling out-of-date info. OOPS! I had copied the beta database over to the live site! Thank god for backups. And for me, I created a form for an event that was to be held during a specific time range. Participants would fill out the form for a chance to win, and we would send the event organizers a CSV from the database. I went into the database, and found ONLY 1 ENTRY, MINE. Upon investigating, it appears as though I forgot an auto increment key, and because of the server setup there was no way to recover the lost data. I am aware this question is similar to ones on Stack Overflow but the ones I found seemed to receive generic answers instead of actual stories :) What is the biggest coding error/mistake ever… | Issuing a SQL UPDATE with a 'bad' WHERE Clause that matched everything. Lesson Learned: Always issue a SELECT first to see what will be changed. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/14335",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3382/"
]
} |
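One way to apply the "SELECT first" lesson above is to build the WHERE clause once and reuse it for both the preview and the update, so the two statements cannot drift apart. The JDBC sketch below is only an illustration of that discipline; the customers table, connection URL, and credentials are made up for the example.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class SafeUpdate {
    public static void main(String[] args) throws SQLException {
        String jdbcUrl = "jdbc:postgresql://localhost/demo";  // hypothetical database
        String where = "WHERE last_name = ?";                 // one WHERE clause, used twice

        try (Connection conn = DriverManager.getConnection(jdbcUrl, "app", "secret")) {
            // 1. Preview: how many rows would the UPDATE touch?
            try (PreparedStatement check =
                     conn.prepareStatement("SELECT COUNT(*) FROM customers " + where)) {
                check.setString(1, "Goodman");
                try (ResultSet rs = check.executeQuery()) {
                    rs.next();
                    System.out.println("Rows that would change: " + rs.getInt(1));
                }
            }

            // 2. Only after the count looks right, run the UPDATE with the same WHERE.
            try (PreparedStatement update = conn.prepareStatement(
                     "UPDATE customers SET status = 'inactive' " + where)) {
                update.setString(1, "Goodman");
                System.out.println("Rows updated: " + update.executeUpdate());
            }
        }
    }
}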
14,467 | I have been hearing about this term for close to 5 years now. I have read about services from Microsoft (Azure), but I have never saw it adopted in the mainstream usage. The thing I am trying to understand is this: What happened to cloud computing? Is the infrastructure at present insufficient to implement this? Is it still in its infancy? Is it being used in other forms, like all the services Google seems to provide (plus Plus Google OS, etc)? If it has failed, then why? | Cloud Computing, like most new technologies, was painfully over-hyped by the industry media. As it matures and is adopted -- or not -- as a working strategy, it is finding its valid place in the ecosystem. It is neither a panacea for all infrastructure problems nor a failure. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/14467",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6345/"
]
} |
14,650 | For example, being a beginner, I find a lot of inspiration and direction from reading this post by Bryan Woods . | I'm surprised no one has mentioned The Pragmatic Programmer . It's a must-read if you are at all interested in your craft. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/14650",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1345/"
]
} |
14,720 | Imagine yourself hired by a new startup backed with few millions coming from venture capitalists. Your mission: organize the development of the next killer app . 25 developers is too much to take care of each individually, so what decision(s) you would make to motivate them? I will appreciate any answers from stock options to free cookies ;) Of course the trick here (unless you are really a manager of a such startup), is put yourself in the shoes of one of those programmers. EDIT: it's an imaginary context. The purpose of this story is to stimulate your wishes. I want to capture what motivates developers. | Here's my checklist, in no particular order: Awesome computers to develop on. At least double the power of the target user, with plenty of RAM and large/multiple monitors... ~$3 to 5k budget. Nice headphones for whoever needs them, when they prefer to work to music. Excellent development tools to work with. This depends somewhat on your target environment, but Visual Studio / Eclipse / whatever is the best for the job. This includes things like continuous integration/build servers. Fast internet access - perhaps with a caching proxy server to pre-cache things like SO, TheRegister, Reddit, etc Very few meetings - only what is absolutely necessary and a hard limit on their length (we use a timer); think 'stand-up meeting' like Scrum. Healthy atmosphere in which to work. Daylight, fresh air options, stable aircon, plants, pictures, good lighting. 10 to 20% downtime to learn something new or flex your skills a little. A water cooler for each group of desks that is regularly maintained. Market- competitive salaries with performance-related bonuses, where performance and the remuneration are clearly defined. Performance bonuses would likely be company profit share. Encourage a collaborative work ethic ; have tech debriefs to share learning, rotate people around teams to build their experience. Free drinks (non-alcoholic). A fruit basket for healthy snacks that don't ruin lunch. Establish a level of professional respect from the other parts of the business for the software development department and vice versa. This is a long-term, fuzzy target, but there are ways and means of establishing it. Clear communication to and from management of expectations and delivery on those expectations. Clear priorities for work items, reviewed regularly. Use of best practices in terms of SDLC methodologies - Agile/Scrum, etc. Clear and documented procedures on what has to be done, why and how for important stuff like release management. Whatever can be automated would be, so this is just the manual bits - there's always some. Supportive environment for when things don't go so well. No kicking people when they cause bugs, but helping them learn from their mistakes. 24x7 access to the building and remote access for when team members get inspiration outside of normal hours. Whiteboards for prototyping/thinking out loud. Celebrations of success - whether a team lunch or a trip to the Grand Prix at the weekend, it's important to recognise great effort and great results. I would not have: Nerf guns/frisbees/pool table/toys. The work environment is where we work. There's lots of fun to be had while doing the job without playing soldiers around colleagues that are trying to focus. Free food - people should take a break to go out and get something to eat. Internet censorship - I'd leave it up to the individuals to exercise their judgement. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/14720",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
14,744 | My first programming language was PHP ( gasp ). After that I started working with JavaScript. I've recently done work in C#. I've never once looked at low or mid level languages like C. The general consensus in the programming-community-at-large is that "a programmer who hasn't learned something like C, frankly, just can't handle programming concepts like pointers, data types, passing values by reference, etc." I do not agree. I argue that: Because high level languages are easily accessible, more "non-programmers" dive in and make a mess In order to really get anything done in a high level language, one needs to understand the same similar concepts that most proponents of "learn-low-level-first" evangelize about. Some people need to know C; those people have jobs that require them to write low to mid-level code. I'm sure C is awesome, and I'm sure there are a few bad programmers who know C. Why the bias? As a good, honest, hungry programmer, if I had to learn C (for some unforeseen reason), I would learn C. Considering the multitude of languages out there, shouldn't good programmers focus on learning what advances us? Shouldn't we learn what interests us? Should we not utilize our finite time moving forward ? Why do some programmers disagree with this? I believe that striving for excellence in what you do is the fundamental deterministic trait between good programmers and bad ones. Does anyone have any real world examples of how something written in a high level language—say Java, Pascal, PHP, or JavaScript—truly benefited from a prior knowledge of C? Examples would be most appreciated. | The advantage to knowing C is that you have a very good idea of how a computer works. Not just how your programming model works, but how memory's laid out, and suchlike. The only level below C is the assembly spoken by a particular CPU. (I'd add that knowing C also lets you appreciate how much less work you have to do in a higher level language. And hopefully an appreciation of the cost involved in working in that higher level language.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/14744",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5497/"
]
} |
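To make the answer's point about hidden machinery a little more concrete, here is a small sketch in Python (CPython assumed; exact byte counts vary by build and version) of the kind of cost and aliasing behaviour that C forces you to see explicitly:

import sys

# A C "int" is typically 4 bytes; a Python int is a full heap object with a
# type pointer and reference count, so the same value costs far more memory.
n = 1
print(sys.getsizeof(n))        # typically 28 bytes on a 64-bit CPython build

# Assignment copies a reference, not the value -- the kind of detail that C
# makes you spell out with pointers.
a = [1, 2, 3]
b = a                          # b is another name for the same list object
b.append(4)
print(a)                       # [1, 2, 3, 4] -- 'a' changed too
print(a is b)                  # True: one object, two names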
14,789 | In a now deleted question titled "What naming guidelines do you follow?", the author says: Also I prefer to code using hungarian notation from Charles Simonyi. I've run into several programmers who still prefer to use Hungarian, mostly of the Petzold/Systems Hungarian flavor. Think dwLength = strlen(lpszName). I've read Making Wrong Code Look Wrong, and I understand the rationale for Apps Hungarian, where domain-type information is included in the variable names. But I don't understand the value in attaching the compiler type to the name. Why do programmers still persist in using this style of notation? Is it just inertia? Are there any benefits that outweigh the decreased readability? Do people just learn to ignore the decorators when reading the code, and if so, how do they continue to add value? EDIT:
A lot of answers are explaining the history, or why it is no longer relevant, both of which are covered in the article I cited. I'd really like to hear from anyone out there who still uses it. Why do you use it? Is it in your standard? Would you use it if it wasn't required? Would you use it on a new project? What do you see as the advantages? | At the moment I still use Hungarian for exactly three reasons, judiciously avoiding it for everything else: To be consistent with an existing code base when doing maintenance. For controls, e.g. "txtFirstName". We often need to distinguish between (say) "firstName" the value and "firstName" the control. Hungarian provides a convenient way to do this. Of course, I could type "firstNameTextBox", but "txtFirstName" is just as easy to understand and uses fewer characters. Moreover, using Hungarian means that controls of the same type are easy to find, and are often grouped by name in the IDE. When two variables hold the same value but differ by type. For example, "strValue" for the value actually typed by the user and "intValue" for the same value once it has been parsed as an integer. I certainly wouldn't want to set up my ideas as best practice, but I follow these rules because experience tells me that occasional use of Hungarian benefits code maintainability but costs little. That said, I constantly review my own practice, so I may well do something different as my ideas develop. Update: I've just read an insightful article (archive mirror) by Eric Lippert, explaining how Hungarian can help make wrong code look wrong. Well worth reading. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/14789",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/145/"
]
} |
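A small Python sketch of the third reason in the answer above, the raw text versus the parsed value; the variable names are illustrative only:

# The raw text as it arrived from the user, and the same value once parsed.
# The prefixes make it obvious which variable a given line is working with.
str_quantity = "12"                  # what the user typed into a form field
int_quantity = int(str_quantity)     # the parsed number you can do math on

print(int_quantity * 5)              # 60
print(str_quantity * 5)              # "1212121212" -- the bug the prefix helps you spot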
14,831 | How do you go about teaching Exception Handling to Programmers. All other things are taught easily - Data Structures, ASP.NET, WinForms, WPF, WCF - you name it, everything can be taught easily. With Exception Handling, teaching them try-catch-finally is just the syntactic nature of Exception Handling. What should be taught however is - What part of your code do you put in the try block? What do you do in the catch block? Let me illustrate it with an example. You are working on a Windows Forms Project (a small utility) and you have designed it as below with 3 different projects. UILayer BusinessLayer DataLayer If an Exception (let us say of loading an XDocument throws an exception) is raised at DataLayer (the UILayer calls BusinessLayer which in turns calls the DataLayer), do you just do the following //In DataLayer
try {
XDocument xd_XmlDocument = XDocument.Load("systems.xml");
}
catch(Exception ex)
{
throw ex;
} which gets thrown again in the BusinessLayer and which is caught in UILayer where I write it to the log file? Is this how you go about Exception Handling? | To explain exception handling, explain the concept behind it: The code where an error occurs frequently does not know how to properly handle that error. The code that knows how to handle it properly could be the function that called that one, or it could be further up the call stack. When you write a routine that calls a routine that might throw an exception, if you know how to handle that error correctly, put the call in a try block and put the error-handling code in the catch block. If not, leave it alone and let something above you in the call stack handle the error. Saying "catch ex, throw ex" is not a good way to do exception handling, since it doesn't actually handle anything. Plus, depending on how the exception model in your language works, that can actually be harmful if it clears stack trace information that you could have used to debug the issue. Just let the exception propagate up the call stack until it hits a routine that knows how to handle it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/14831",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5070/"
]
} |
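Here is a minimal sketch of the layering advice in the answer above, in Python rather than the C# of the question; the layer and function names are illustrative. The data layer does not catch what it cannot handle; the UI layer, which knows how to report the problem, is the one that catches.

import logging
import xml.etree.ElementTree as ET

log = logging.getLogger(__name__)

def load_systems(path):                 # "data layer"
    # No try/except here: this code cannot decide what a missing or broken
    # file means for the application, so it lets the error propagate.
    return ET.parse(path)

def get_systems(path):                  # "business layer"
    tree = load_systems(path)
    return [node.get("name") for node in tree.findall("system")]

def show_systems(path):                 # "UI layer" -- knows how to respond
    try:
        for name in get_systems(path):
            print(name)
    except (OSError, ET.ParseError):
        log.exception("Could not load %s", path)   # keeps the full traceback
        print("The systems file could not be read; see the log for details.")

show_systems("systems.xml")

If an intermediate layer really must intercept the error (say, to add context before passing it on), re-raising with a bare raise keeps the original traceback, unlike the catch-and-rethrow pattern shown in the question.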
14,856 | "Best practices" are everywhere in our industry. A Google search on "coding best practices" turns up nearly 1.5 million results. The idea seems to bring comfort to many; just follow the instructions, and everything will turn out fine. When I read about a best practice - for example, I just read through several in Clean Code recently - I get nervous. Does this mean that I should always use this practice? Are there conditions attached? Are there situations where it might not be a good practice? How can I know for sure until I've learned more about the problem? Several of the practices mentioned in Clean Code did not sit right with me, but I'm honestly not sure if that's because they're potentially bad, or if that's just my personal bias talking. I do know that many prominent people in the tech industry seem to think that there are no best practices , so at least my nagging doubts place me in good company. The number of best practices I've read about are simply too numerous to list here or ask individual questions about, so I would like to phrase this as a general question: Which coding practices that are popularly labeled as "best practices" can be sub-optimal or even harmful under certain circumstances? What are those circumstances and why do they make the practice a poor one? I would prefer to hear about specific examples and experiences. | Might as well throw this into the ring: Premature optimization is the root of all evil. No, it's not. The complete quote: "We should forget about small
efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that
critical 3%." That means that you take advantage of specific, strategic performance enhancements throughout your design process. It means that you use data structures and algorithms that are consistent with performance objectives. It means that you are aware of design considerations that affect performance. But it also means that you do not frivolously optimize when doing so will give you minor gains at the expense of maintainability. Applications need to be well-architected, so that they don't fall down at the end when you apply a little load on them, and then you wind up rewriting them. The danger with the abbreviated quote is that, all too often, developers use it as an excuse to not think about performance at all until the end, when it might be too late to do anything about it. It's better to build in good performance from the start, provided you're not focusing on minutiae. Let's say you're building a real-time application on an embedded system. You choose Python as the programming language, because "Premature optimization is the root of all evil." Now I have nothing against Python, but it is an interpreted language. If processing power is limited, and a certain amount of work needs to get done in real-time or near real-time, and you choose a language that requires more processing power for the work than you have, you are royally screwed, because you now have to start over with a capable language. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/14856",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1554/"
]
} |
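A small Python sketch of the kind of up-front, strategic choice the answer is talking about: picking a data structure that fits the access pattern is not "premature optimization". Membership tests against a list scan every element; against a set they are effectively constant time.

import timeit

items_list = list(range(100_000))
items_set = set(items_list)

# Time 1,000 membership tests against each structure.
print(timeit.timeit(lambda: 99_999 in items_list, number=1_000))  # linear scan per call
print(timeit.timeit(lambda: 99_999 in items_set, number=1_000))   # hash lookup per call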
14,914 | Other than title and pay, what is the difference? What different responsibilities do they have. How knowledgeable/experienced are they? What is the basic measure to determine where a developer fits into this basic structure? | This will vary but this is how I see it at a place large enough to have distinctions between types of programmers. I would say entry level and Junior are the same thing. They are just out of school and have less than two years of work experience. They are assigned the least complex tasks and should be supervised fairly closely. Generally they know about 10% of what they think they know. Usually they have not been through the whole development cycle and so often make some very naive choices if given the opportunity to choose. Sadly many of them don't actually care what the requirement is, they want to build things their way. They often have poor debugging skills. Intermediate level is where many programmers fall. They have more than two years experience and generally less than ten, although some can stay at this level their whole careers. They can produce working code with less supervision as long as they are assigned to relatively routine tasks. They are not generally tasked with high level design or highly complicated tasks that require an in-depth level of knowledge. They may be tasked with the design of a piece of the application though, especially as they are in the zone to become a senior developer. They are good at maintenance tasks or tasks where they can focus on just their piece of the puzzle, but are not usually expected to consider the application as a whole unless working with senior developers or being prepped for promotion to senior. They can usually do a decent job of troubleshooting and debugging, but they have to really slog through to get the hard ones. They do not yet have enough experience to see the patterns in the problems that point them to the probable place they are occurring. But they are gaining those skills and rarely need to ask for debugging help. They have probably been through the whole development cycle at least once and seen the results of design problems and are learning how to avoid them in the future. Usually they tend to be more likely to take a requirement at face value and not push it back when it has obvious problems or gaps. They have learned enough to know what they don't know and are starting to gain that knowledge. They are the workhorses of the programming world, they deliver probably 80-90% of the routine code and maybe 10% of the very difficult stuff. No one who is senior level even needs to ask this question. They are experts in their chosen technology stacks. They are given the hard tasks (the ones nobody knows how to solve) and often get design responsibilties. They often work independently because they have a proven track record of delivering the goods. They are expected to mentor Junior and intermediate developers. Often they are amazing troubleshooters. They have run into those same problems before and have a very good idea of where to look first. Seniors often mentor outside the workplace as well. They generally have at least ten years of experience and have almost always been on at least one death march and know exactly why some things are to be avoided. They know how to deliver a working product and meet a deadline. They know what corners can be cut and what corners should never be cut. They know at least one and often several languages at the expert level. 
They have seen a lot of "hot new technologies" hit the workplace and disappear, so they tend to be a bit more conservative about jumping on the bandwagon for the next exciting new development tool (but not completely resistant to change - those would be the older Intermediate developers who never make the leap to Senior). They understand their job is to deliver working software that does what the users want, not to play with fun tools. They are often pickier about where they will work because they can be and because they have seen first hand how bad some places can be. They seek out the places that have the most interesting tasks to do. Often they know more about their company's products than anyone else even if they have been there only a few months. They know they need more than programming knowledge and are good at getting knowledge about the business domain they support as well. They are often aware of issues that juniors never consider and intermediates often don't think about such as regulatory and legal issues in the business domain they support. They can and will push back a requirement because they know what the problems with it will be and can explain the same to the laymen. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/14914",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1785/"
]
} |
14,968 | It's a fairly common adage that adding more programmers to a late project will make matters worse. Why is this? | Introduction overhead Each new developer has to be introduced to the code base and development process which takes not only the new person's time but also requires assistance from [a] senior developer[s] (guiding them through the build process, explain the testing process, help them with pitfalls in the code base, much more detailed code reviews, etc) . Therefore, the more new developers you add to the project the more time has to be spent to bring them to a point where they can actually contribute to the project. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/14968",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5505/"
]
} |
14,975 | I've gotten a bug report from one of my users in a section of the software. The scenario is basically a databinding scenario where the user enters info, and that info is printed to pdf. The problem is that the functionality: Is used frequently (about 40 times a week) Hasn't been updated/modified in months The area of code is relatively simple to walk through The validation appears fine (i.e., if the information wasn't filled out in the app, validation fires indicating it with a msgbox before the pdf is generated) But this one user claims that in the past 2 weeks it's happened about 3 times out of 50 and I just can't reproduce it. So what do you do in the case of a heisenbug? | Add some logging to this user's code. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/14975",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1554/"
]
} |
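One way to act on "add some logging", sketched in Python: record the inputs and the outcome of the flaky step so the next failure leaves evidence behind. The function and field names here are hypothetical stand-ins for the report-generation code in the question.

import logging

logging.basicConfig(filename="report_generation.log",
                    level=logging.DEBUG,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("pdf_report")

def generate_report(form_data, render_pdf):
    log.debug("Generating report with fields: %r", form_data)
    try:
        path = render_pdf(form_data)             # the step that sometimes misbehaves
        log.debug("Report written to %s", path)
        return path
    except Exception:
        log.exception("Report generation failed for: %r", form_data)
        raise                                     # keep the original error visible to the app

# Stand-in renderer so the sketch runs end to end.
generate_report({"customer": "ACME", "total": 42},
                render_pdf=lambda data: "/tmp/report.pdf")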
15,004 | I recently had a programmer in for an interview, who listed Python, PHP, Rails and ASP as a few of their skills. In the interview, however, the interviewee didn't even know what control structures and basic logic were; they had only followed a few demo tutorials. So my question is this: At which point can you add a technology to your resume accurately? Is it when you can demonstrate all basic concepts, write a useful program in it, or are just comfortable using it without having to refer to the documentation every 30 seconds? I don't believe this is overly subjective; a baseline should easily be established based on feedback. | You should be able to defend/explain each and every word you put in your resume. Kind of like your dissertation/thesis. I have seen many candidates rejected with the reason "could not justify what he had put in his resume". One approach is to follow Google's self-evaluation questionnaire. Rate each skill on a scale of 10. That way we can project how relatively comfortable we are with various technologies. 1 means you can read others' code with plenty of googling. 5 maybe for implementing modules in the technology. Etc. 8 for plenty of experience and comfort with designing and implementing large projects in that technology. 9 for architectural knowledge with moderate understanding of what's under the hood. 10 means you have written a book on it or invented it. I have seen resumes which have bar graphs indicating relative proficiency in various technologies. Another option is to group skills as "strong understanding", "moderate proficiency" and "familiar with". Edit: I tried to put this as a comment, but it didn't look right due to lack of formatting. For reference, here is how Google defines the ratings in their Self Evaluation:
0 – You have no experience
1 to 3 – You are familiar with this area but would not be comfortable implementing anything in it.
4 to 6 – You are confident in this area and use it daily.
7 to 9 – You are extremely proficient to expert and have deep technical expertise in the subject and feel comfortable designing any project in it.
10 – Reserved for those who are recognized industry experts; either you wrote a book on it or invented it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/15004",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3382/"
]
} |
15,241 | The saying "It's easier to beg forgiveness than ask permission" seems pretty popular among programmers and IIRC was attributed to Grace Hopper. In what situations is this typically true, and why do you believe that such a counterintuitive proposition would hold? | I think that one important reason is responsibility. By asking for permission, you are transferring the responsibility to the person you are asking, so that person might be inclined to deny just to avoid being held responsible for the result, in case of failure. On the other hand, once it's done, that's no longer a problem. Even if the result was a failure, it's still your responsibility, no matter if you get forgiveness or not. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/15241",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1468/"
]
} |
15,269 | I feel that side effects are a natural phenomenon. But it is something like taboo in functional languages. What are the reasons? My question is specific to functional programming style. Not all programming languages/paradigms. | Writing your functions/methods without side effects - so they're pure functions - makes it easier to reason about the correctness of your program. It also makes it easy to compose those functions to create new behaviour. It also makes certain optimisations possible, where the compiler can for instance memoise the results of functions, or use Common Subexpression Elimination. Edit: at Benjol's request: Because a lot of your state's stored in the stack (data flow, not control flow, as Jonas has called it here ), you can parallelise or otherwise reorder the execution of those parts of your computation that are independent of each other. You can easily find those independent parts because one part doesn't provide inputs to the other. In environments with debuggers that let you roll back the stack and resume computing (like Smalltalk), having pure functions means that you can very easily see how a value changes, because the previous states are available for inspection. In a mutation-heavy calculation, unless you explicitly add do/undo actions to your structure or algorithm, you cannot see the history of the computation. (This ties back to the first paragraph: writing pure functions makes it easier to inspect the correctness of your program.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/15269",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/963/"
]
} |
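A small Python illustration of the points in the answer above: the first function depends on and mutates outside state, so its results are hard to reason about; the second is pure, so calls can safely be cached, reordered, or run in parallel. The pricing example itself is made up.

from functools import lru_cache

discount = 0.1

def price_with_discount_impure(price):
    global discount
    discount += 0.01                  # hidden side effect: every call changes the answer
    return price * (1 - discount)

@lru_cache(maxsize=None)              # memoisation is safe only because the function is pure
def price_with_discount(price, discount):
    return price * (1 - discount)

print(price_with_discount_impure(100), price_with_discount_impure(100))  # two different results
print(price_with_discount(100, 0.1), price_with_discount(100, 0.1))      # always the same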
15,286 | I come from a scientific and embedded programming background, and I have had minimal experience with web programming. What would be the best approach to take to get up to speed with web programming? Tools and framework suggestions? One approach would be to dive into learning a framework, such as Rails. I started doing this with rails tutorial, but I find that the framework abstracts so many important concepts that I should be learning. To sum up, experienced programmer wants to learn web-app programming. | Few years back I asked this question to myself! This is what I find easy and organized way to start web programming, you can skip steps which you have already know To learn web programming, first you have to know What is a website What are the main role players [Webserver, Mark-up languages, Client side scripting, Server side scripting, Protocols (http), Browsers Trace complete server round trip i.e. from typing google.com in browser and to loading the complete page. Why http is stateless? Role of session to overcome this? Start learning html & little JavaScript Basic tags Marquee :-) Alert, change color page background color via javascript etc Have some fun playing around with html, javscript and css Server side scripting Start with php Use all necessary input – type elements and create forms Validate form with plain javascript Retrieve submitted form data and display it via php I think you can complete 1 to 5 quickly. Its interesting part for all novice web programmers, because of the visual excitement they get while using html and css first time/ Then move to serious stuff!!! At this time, you know fundamental things of web programming and working of website. Now, it’s your responsibility to choose most appropriate language, platform and framework. No one here can’t help you with this; You have to consider your personal interests and future plans to decide. My recommendation is to go with php, since you learned it in initial stages. Next, is databases
a. Learn how to connect database
b. Basic sql queries. Select, insert, update and delete
c. Manipulate user inputs using database Now, start creating a personal website; or any simple website Download any open source website and learn from it. Here are few references, which may help you 1. W3 Schools – for learning basics of html, css, JavaScript, asp, database queries 2. Php.net – for everything about php 3. For exploring open source projects - http://bitbucket.org/ - http://github.com/ - http://www.codeplex.com/ - http://sourceforge.net/ Always remember that there are several peoples here for help you; if anything happen, post it in stackoverflow. Find someone with some amount of web programming experience to guide you; it’s always easy to learn from experienced programmers. Do not forget to find some books too… for a starter you can checkout dummies All the best!!! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/15286",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1388/"
]
} |
15,292 | I am a C# and Java person. I find these 2 languages quite similar, so it's easy for me to cross over between them. I'm not sure if it's a good idea to pick up other, different languages. Any thoughts? | It's not about how many languages you learn. It's about learning new concepts. Some languages might be able to teach you many concepts, while others might be able to teach only one. If you know C#, you probably wouldn't learn anything terribly new by learning Java. But you would if you learned Haskell. So when you pick a new language to learn, pick something that will teach you concepts you don't already know. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/15292",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6450/"
]
} |
15,321 | If you've just been introduced to a new project, what's the first thing you look for to get an idea of how it works? Do you look for the design first? If there is a design, what do you look for in it? Class diagrams or deployment diagrams or sequence diagrams or something else? Or do you go straight for the code? If so, how do you understand how the different layers interact? | I start with code. Separate design documents, if there are any, are as likely to be wrong or misconceived as not. So, i start by trying to trace some simple flow through the code; if it's a webapp, it could be a request or a sequence of requests, for instance. Once i've done that, i have a sort of skeleton to hang more understanding on. Then, i might go back and read designs or other documentation, but at that point, i have something concrete to relate them to, and to validate them with, so i can detect duff information. Or i might just carry on reading code, or test cases, etc. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/15321",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/55/"
]
} |
15,360 | I read this answer and found a comment insisting on not sending passwords by email: passwords should not be able to be retrieved by email, I hate that. It means my password is stored in plain text somewhere. It should be reset only. This raises the question of how to handle the Forgot Password option. At some point, the raw password would have to be displayed in some UI so that the user can read it. So what would be the way to handle "Forgot Password"? | A good application design will not be able to explicitly recover a user's password. This is because it is usually stored after it is run through some sort of hash, which is a one-way operation. The best way to handle a lost password is to perform a reset: email the user's account a link with a generated parameter tacked on that identifies it as a valid password reset for the account in question. At this point they can set a new password. This does assume you have the user's email address on file. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/15360",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/156/"
]
} |
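A sketch in Python of the reset flow the answer describes: generate an unguessable token, store only a hash of it alongside an expiry, and email the user a link carrying the token. Storage and mailing are stubbed out with prints, and all names and the example URL are illustrative.

import hashlib
import secrets
from datetime import datetime, timedelta

reset_requests = {}          # stand-in for a database table

def start_password_reset(user_email):
    token = secrets.token_urlsafe(32)                     # sent to the user, never stored
    reset_requests[user_email] = {
        "token_hash": hashlib.sha256(token.encode()).hexdigest(),
        "expires": datetime.utcnow() + timedelta(hours=1),
    }
    link = f"https://example.com/reset?email={user_email}&token={token}"
    print(f"Mail to {user_email}: {link}")                # stand-in for sending the email

def finish_password_reset(user_email, token, new_password):
    request = reset_requests.get(user_email)
    token_hash = hashlib.sha256(token.encode()).hexdigest()
    if (request is None
            or datetime.utcnow() > request["expires"]
            or not secrets.compare_digest(token_hash, request["token_hash"])):
        raise ValueError("Invalid or expired reset link")
    del reset_requests[user_email]                        # reset tokens are single-use
    print(f"Password updated for {user_email}")           # stand-in for storing the new password hash

The expiry and single-use check matter: a reset link is effectively a temporary password, so it should stop working after it has been used or after a short window.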
15,397 | Am I wrong if I think that Python is all I need to master, in order to solve most of the common programming tasks? EDIT I'm not OK with learning new programming languages if they don't teach me new concepts of programming and problem solving; hence the idea behind mastering a modern, fast evolving, with a rich set of class libraries, widely used and documented, and of course has a "friendly" learning curve programming language. I think that in the fast evolving tech industry, specialization is key to success. | You are right and wrong. Right: Knowing a single tool very well is very marketable and to be desired. And Python is good for OO, for scripts, for functional-ish programming, and it has excellent mathematical and scientific libraries. Wrong: Python doesn't teach you everything a good developer should know. Sometimes you will need JavaScript to provide some client-side functionality. Sometimes you need to understand what's happening at a more fundamental level, such as the C underneath the Python. And sometimes you need to learn to think in different ways, as you would with Haskell or Clojure. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/15397",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6651/"
]
} |
15,405 | Do other people fix bugs when they see them, or do they wait until there's crashes/data loss/people die before fixing it? Example 1 Customer customer = null;
...
customer.Save(); The code is clearly wrong, and there's no way around it - it's calling a method on a null reference. It happens to not crash because Save happens to not access any instance data; so it's just like calling a static function. But any small change anywhere can suddenly cause broken code that doesn't crash: to start crashing. But , it's also not inconceivable that correcting the code: Customer customer = null;
...
customer = new Customer();
try
...
customer.Save();
...
finally
customer.Free();
end; might introduce a crash; one not discovered through unit tests with complete coverage, and manual user testing. Example 2 float speed = 0.5 * ((G * mass1 * mass2) / R) * Pow(time, 2); People knowing physics will recognize that it's supposed to be R² in the denominator. The code is wrong, it's absolutely wrong. And overestimating the speed will cause the retro-rockets to fire too soon, killing all the occupants of the spacecraft. But it's also possible that having it over-estimate the speed is masking another issue: the air-bags can't deploy while the shuttle is moving too fast. If we suddenly fix the code: float speed = 0.5 * ((G * mass1 * mass2) / Pow(R, 2)) * Pow(time, 2); Now the speed is accurate, and suddenly airbags are deploying when they shouldn't. Example 3 Here's an example that I had recently, checking if a string contains invalid characters: if (StrPos(Address, "PO BOX") >= 0)
{
//Do something
} What if it turns out there's a bug in the Do something branch? Fixing the obviously incorrect code: if (StrPos("PO BOX", Address) >= 0)
{
//Do something
} Fixes the code, but introduces a bug. The way I see it there are two possibilities: fix the code, and get blamed for breaking it wait for the code to crash, and get blamed for having a bug What do you politically do? Example 4 - Today's real world bug I am constructing an object, but calling the wrong constructor: Customer customer = new Customer(); Turns out that the "parameterless" constructor is actually an parameterized constructor from further back in the inheritance chain: public Customer(SomeObjectThatNobodyShouldBeUsingDirectly thingy = null)
public Customer(InjectedDependancy depends) Calling it is a mistake, since it bypasses all the subsequent constructors. I could change the object's lineage to not expose such a dangerous constructor, but now I have to change the code to: Customer customer = new Customer(depends); But I can't guarantee that this change won't break anything. Like my Example 1 above, perhaps someone, somewhere, somehow, under some esoteric conditions, depends on the constructed Customer to be invalid and full of junk. Perhaps the Customer object, now that it is properly constructed will allow some code to run that previously never did, and now I can get a crash. I can't bet your wife's life on it. And I can test it from here to Tuesday, I can't swear on your daughter's life that I didn't introduce a regression. Do I: Fix the code and get blamed for breaking it? or Leave the bug, and get blamed when the customer finds it? | This depends wildly on the situation, the bug, the customer, and the company. There is always a trade-off to consider between correcting the implementation and potentially introducing new bugs. If I were to give a general guideline to determining what to do, I think it'd go something like this: Log the defect in tracking system of choice. Discuss with management/coworkers if needed. If it's a defect with potentially dire consequences (e.g. your example #2), run, scream, jump up and down till someone with authority notices and determine an appropriate course of action that will mitigate the risks associated with the bug fix. This may push your release date back, save lives, wash your windows, etc. If it's a non-breaking defect, or a workaround exists, evaluate whether the risk of fixing it outweighs the benefit of the fix. In some situations it'll be better to wait for the customer to bring it up, since then you know you aren't spending time fixing/retesting things when it's not 100% required. Mind you, this only applies when you're close to a release. If you're in full development mode, I'd just log the defect so it can be tracked, fix it, and call it done. If it's something that takes more than, say, half an hour to fix and verify, I'd go to the manager/team lead and see whether or not the defect should be fit into the current release cycle or scheduled for a later time. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/15405",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6654/"
]
} |
15,468 | Python seems all the rage these days, and not undeservingly - for it is truly a language with which one almost enjoys being given a new problem to solve. But, as a wise man once said (calling him a wise man only because I've no idea as to who actually said it; not sure whether he was that wise at all), to really know a language one does not only know its syntax, design, etc., advantages but also its drawbacks. No language is perfect, some are just better than others. So, what would be in your opinion, objective drawbacks of Python. Note: I'm not asking for a language comparison here (i.e. C# is better than Python because ... yadda yadda yadda) - more of an objective (to some level) opinion which language features are badly designed, whether, what are maybe some you're missing in it and so on. If must use another language as a comparison, but only to illustrate a point which would be hard to elaborate on otherwise (i.e. for ease of understanding) | I use Python somewhat regularly, and overall I consider it to be a very good language. Nonetheless, no language is perfect. Here are the drawbacks in order of importance to me personally: It's slow. I mean really, really slow. A lot of times this doesn't matter, but it definitely means you'll need another language for those performance-critical bits. Nested functions kind of suck in that you can't modify variables in the outer scope. Edit: I still use Python 2 due to library support, and this design flaw irritates the heck out of me, but apparently it's fixed in Python 3 due to the nonlocal statement. Can't wait for the libs I use to be ported so this flaw can be sent to the ash heap of history for good. It's missing a few features that can be useful to library/generic code and IMHO are simplicity taken to unhealthy extremes. The most important ones I can think of are user-defined value types (I'm guessing these can be created with metaclass magic, but I've never tried), and ref function parameter. It's far from the metal. Need to write threading primitives or kernel code or something? Good luck. While I don't mind the lack of ability to catch semantic errors upfront as a tradeoff for the dynamism that Python offers, I wish there were a way to catch syntactic errors and silly things like mistyping variable names without having to actually run the code. The documentation isn't as good as languages like PHP and Java that have strong corporate backings. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/15468",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2439/"
]
} |
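A short Python example of the nested-function complaint in the answer above, and the Python 3 fix it mentions: in Python 2 an inner function could read an enclosing variable, but any assignment silently made it a new local; the nonlocal statement removes that limitation.

def make_counter():
    count = 0
    def bump():
        nonlocal count        # without this line, 'count += 1' raises UnboundLocalError
        count += 1
        return count
    return bump

counter = make_counter()
print(counter(), counter(), counter())   # 1 2 3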
15,515 | Often when I write a functions I want to make sure the inputs to it are valid in order to detect such errors as early as possible (I believe these are called preconditions). When a precondition fails, I've always thrown an exception. But I'm beginning to doubt whether this is the best practice and if not assertions would be more appropriate. So when should I do which: when is it appropriate to use an assertion and when is it appropriate to throw an exception? | Assertions should only be used to verify conditions that should be logically impossible to be false (read: sanity checks). These conditions should only be based on inputs generated by your own code. Any checks based on external inputs should use exceptions. A simple rule that I tend to follow is verifying private functions' arguments with asserts, and using exceptions for public/protected functions' arguments. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/15515",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2210/"
]
} |
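A minimal Python sketch of the rule in the answer: the public function validates external input with an exception, while the private helper uses an assert for a condition that should be impossible if the calling code is correct. The functions themselves are made up for illustration.

def mean(values):
    if not values:
        raise ValueError("mean() requires at least one value")   # caller/input error
    return _weighted_sum(values, [1.0] * len(values)) / len(values)

def _weighted_sum(values, weights):
    # Internal sanity check: our own code constructed 'weights', so a length
    # mismatch here would be a bug in this module, not bad user input.
    assert len(values) == len(weights), "values and weights out of sync"
    return sum(v * w for v, w in zip(values, weights))

print(mean([2, 4, 6]))    # 4.0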
15,528 | I have a fully configured mantis bug tracker for tracking issues in apps that I create. When an user is disciplined and goes straight to mantis to write a issue report, he/she will have fastest response and everything regarding the issue will be very easy to track. However, not everyone is keen to do so. They report their problems via phone, e-mail, don't report them at all. What would be the best way to nudge them towards the using a bugtracker system? Clearly, they HAVE to see some immediate benefits so they can return and look for more benefits. EDIT: I am talking about support for the products that I sell as an ISV. | Your bug tracker is for your convenience, not your customers'. If you can't be bothered to take their phone or email issue and enter it yourself, how do you think they feel? You need to be able to enter issues and assign them manually to a client. Then when they call in with an issue you can say, "Thanks for reporting that! I'm going to enter it into our issue management system, and you'll start getting email (or whatever) as we deal with it. In the future, if it's easy for you, you can enter that sort of thing right there. Or feel free to just call me, that's fine too." One of the best such systems I've worked with as a customer is the one at the hosting provider I resell. Email to support@ gets parsed for a domain name in the subject line, assigned to a client account based on the from address, and auto-entered into their ticket system. Pretty slick. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/15528",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1058/"
]
} |
15,623 | When it comes to "interview test" questions, the subject of FizzBuzz often comes up. There is also a Coding Horror post about it. Now, if you bother reading sites such as this, you are probably less likely to be in the demographic of programmers who would find FizzBuzz anything but trivial. But is it really true that 99% of programmers will struggle with it? Really? What is the evidence to back this up? Some real-life examples would be very helpful in answering this question. | 99%? No. A significant percentage? Yes. From my own direct experience of interviewing people I can testify to this one. It might seem insignificant to you but there are a lot of people in the programming field who have more or less faked their way through for years and apply on non-entry level positions and fail this one. Even if you CAN easily solve it, but you give me huge static about being asked to do such a menial task will count against you. Being on a team means having to sometimes do things you might not enjoy but are necessary. If right off the bat, before we've even started to work together you think it would be best to try and assert your special status of being above doing something I've asked you to do then it will act as a mark against you. I don't care necessarily how elegant your solution is (although that would be nice) but seeing you take a stab at it on a whiteboard and talking your way through it shows me that you're at least willing to take a stab at it. If you get indignant and say something along the lines of "I'm a problem solver, not a code monkey!" then you will be knocked down a peg. I have had interviewees just flat out refuse to even begin to attempt it. Just simply refuse. No. Uh uh. Won't do it. I ask one or two more polite questions, thank them for their time and close the interview off. I say this as a manager and as a developer. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/15623",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4757/"
]
} |
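For reference, one straightforward Python version of the exercise under discussion; the point of the interview question is that a working developer should be able to produce something like this on a whiteboard without much trouble.

# Print 1..100, replacing multiples of 3 with "Fizz", multiples of 5 with
# "Buzz", and multiples of both with "FizzBuzz".
for n in range(1, 101):
    if n % 15 == 0:
        print("FizzBuzz")
    elif n % 3 == 0:
        print("Fizz")
    elif n % 5 == 0:
        print("Buzz")
    else:
        print(n)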
15,636 | What should be in a good (read:useful) coding standard? Things the code should have. Things the code shouldn't have. Should the coding standard include definitions of things the language, compiler, or code formatter enforces? What about metrics like cyclomatic complexity, lines per file, etc? | A reasons for every requirement. This way, following the standard doesn't become some sort of cargo cult and people know that it's ok to change the standard if the reason for it no longer applies, or to violate the standard in specific cases where the reason clearly doesn't apply. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/15636",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4422/"
]
} |
15,820 | Often when I hear about the switch statement, its put off as a way to replace long if...else chains. But it seems that when I use the switch statement I'm writing more code that I would be just writing if...else. You also have other issues like keeping all variables for all calls in the same scope . Here's some code that represents the flow I normally write ( thanks to diam ) String comment; // The generated insult.
int which = (int)(Math.random() * 3); // Result is 0, 1, or 2.
if (which == 0) {
comment = "You look so much better than usual.";
} else if (which == 1) {
comment = "Your work is up to its usual standards.";
} else if (which == 2) {
comment = "You're quite competent for so little experience.";
} else {
comment = "Oops -- something is wrong with this code.";
} Then they want me to replace that with this: String comment; // The generated insult.
int which = (int)(Math.random() * 3); // Result is 0, 1, or 2.
switch (which) {
case 0:
comment = "You look so much better than usual.";
break;
case 1:
comment = "Your work is up to its usual standards.";
break;
case 2:
comment = "You're quite competent for so little experience.";
break;
default:
comment = "Oops -- something is wrong with this code.";
} Seems like a lot more code in a much more awkward syntax. But is there really an advantage to using the switch statement? | For this particular situation, it seems to me that both if and case are poor choices. I'd use a simple array: String comments[] = {
"You look so much better than usual.",
"Your work is up to its usual standards.",
"You're quite competent for so little experience."
};
String comment = comments[(int)(Math.random() * 3)]; As a side note, you should generally compute the multiplier based on the size of the array rather than hard-coding the 3 . As to when you would use a case/switch, the difference from a cascade of if statements (or at least one major difference) is that switch can semi-automatically optimize based on the number and density of values, whereas a cascade of if statements leaves the compiler with little choice but to generate code as you've written it, testing one value after another until it finds a match. With only three real cases, that's hardly a concern, but with a sufficient number it can/could be significant. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/15820",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/66/"
]
} |
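The same table-driven idea as the answer above, sketched here in Python: the data lives in one list, and the bound comes from the list itself rather than a hard-coded 3, which is the answer's side note.

import random

comments = [
    "You look so much better than usual.",
    "Your work is up to its usual standards.",
    "You're quite competent for so little experience.",
]
comment = random.choice(comments)    # picks uniformly, no index arithmetic to get wrong
print(comment)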
15,874 | Let's say one works in a hypothetical company that has several developers that rarely worked together on projects and the Boss didn't believe that code reviews are worth the time and cost. What are various arguments that could be presented in this scenario that will portray the benefit of code review? Furthermore, what are the potential arguments against code review here and how can these be countered? | If you have to justify yourself for such basic stuff, you have a bigger problem. You are the expert, your team should decide what practices you use. Maybe you should start to convince your boss of that very important principle. Your boss is supposed to decide WHAT to do and more importantly WHY doing it. You should take care of the HOW build it (that doesn't means you can't suggest what and why do things in your company of course). A great boss should encourage his employees to participate in enterprise strategy) However here is how I view peer code reviews: Because programming is a very intensive intellectual work, one person can't ensure everything is perfect. Therefore code review ensure that: vulnerabilities or bugs are found before the app is shipped constant mutual education between developers (almost for free) is achieved code respect standard for easier app maintenance code match the requirements Everyone is taking direct benefits of it: the developer that increases his/her knowledge and can pass his own to his/her team mates the customer/user that has less bugs and spend less in maintenance the boss that has more happy customers/users and spend less in trainings | {
"source": [
"https://softwareengineering.stackexchange.com/questions/15874",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5148/"
]
} |
15,928 | Can any one explain about the agile methodology in simple sentences? | Agile is a lot of things and practices, but I think the core of it is just iterative development. Iterative: Think a bunch of very small waterfalls. That is, the waterfall method (requirements->spec->code->test), but instead of doing it over the course of a year or so, you do it over the course of a few weeks for a manageable chunk of the overall project. At the end of 'iteration/sprint/increment', you have a small but complete and tested set of additional functionality. This allows you to rapidly change the course of the project if it turns out that what you're doing isn't what the customer wants, or the business needs change, or whatever. Hence the term "agile." | {
"source": [
"https://softwareengineering.stackexchange.com/questions/15928",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1373/"
]
} |
16,010 | I recently tried to implement a ranking algorithm, AllegSkill, to Python 3. Here's what the maths looks like: No, really. This is then what I wrote: t = (µw-µl)/c # those are used in
e = ε/c # multiple places.
σw_new = (σw**2 * (1 - (σw**2)/(c**2)*Wwin(t, e)) + γ**2)**.5 I actually thought it is unfortunate of Python 3 to not accept √ or ² as variable names. >>> √ = lambda x: x**.5
File "<stdin>", line 1
√ = lambda x: x**.5
^
SyntaxError: invalid character in identifier Am I out of my mind? Should I have resorted for a ASCII only version? Why? Wouldn't an ASCII only version of the above be harder to validate for equivalence with the formulas? Mind you, I understand some Unicode glyphs look very much like each other and some like ▄ (or is that ▗▖ ) or ╦ just can't make any sense in written code. However, this is hardly the case for Maths or arrow glyphs. Per request, the ASCII only version would be something along the lines of: winner_sigma_new = ( winner_sigma ** 2 *
( 1 -
( winner_sigma ** 2 -
general_uncertainty ** 2
) * Wwin(t,e)
) + dynamics ** 2
)**.5 ...per each step of the algorithm. | I feel strongly that just replacing σ with s or sigma doesn’t make sense and is counter-productive. What’s the potential gain? Well, let’s see … Does it improve readability? Nope, not in the slightest. If that were so, the original formula would have undoubtedly used Latin letters also. Does it improve writability? On the first glance, yes. But on the second, no. Because this formula is never going to change (well, “never”). There will normally be no need to change the code, nor to extend it using these variables. So writability is – just this once – not an issue. Personally, I think programming languages have one advantage over mathematical formulae: you can use meaningful, expressive identifiers. In mathematics, this isn’t normally the case, so we resort to one-letter variables, occasionally making them Greek. But Greek isn’t the problem. The non-descriptive, one-letter identifiers are. So either keep the original notation … after all, if the programming language does support Unicode in identifiers, so there’s no technical barrier. Or use meaningful identifiers. Don’t just replace Greek glyphs with Latin glyphs. Or Arabic ones, or Hindi ones. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/16010",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/236/"
]
} |
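A sketch of the "use meaningful identifiers" advice applied to the question's expression, in Python. The names are guesses at what the symbols stand for (sigma as rating deviation, c as a normalising factor, gamma as the dynamics term), not a statement of the actual AllegSkill definitions, and the sample numbers are arbitrary.

def updated_winner_deviation(winner_deviation, c, win_weight, dynamics):
    # Mirrors sigma_w_new = (sigma_w**2 * (1 - (sigma_w**2 / c**2) * Wwin(t, e)) + gamma**2) ** 0.5,
    # with win_weight standing in for the precomputed Wwin(t, e).
    shrink = 1 - (winner_deviation ** 2 / c ** 2) * win_weight
    return (winner_deviation ** 2 * shrink + dynamics ** 2) ** 0.5

print(updated_winner_deviation(winner_deviation=8.3, c=10.0,
                               win_weight=0.4, dynamics=0.06))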
16,016 | What is the difference between update and upgrade in the context of application software? | Depends entirely on the installation technology, company developing the software and the whim of the person using the terms. Generally though, updates stay within a product version (for example, hotfixes), while if you want to move to a later version, you would upgrade . So you might install an update (hotfix) for Office 2007, or you might upgrade to Office 2010. This page gives the definition according to Windows Installer: http://msdn.microsoft.com/en-us/library/aa370579(v=VS.85).aspx | {
"source": [
"https://softwareengineering.stackexchange.com/questions/16016",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/943/"
]
} |
16,025 | When I started using an object-oriented language (Java), I pretty much just went "Cool" and started coding. I've never really thought about it until only recently after having read lots of questions about OOP. The general impression I get is that people struggle with it. Since I haven't thought of it as hard, and I wouldn't say I'm any genius, I'm thinking that I must have missed something or misunderstood it. Why is OOP difficult to understand? Is it difficult to understand? | I personally found the mechanics of OOP fairly easy to grasp. The hard part for me was the "why" of it. When I was first exposed to it, it seemed like a solution in search of a problem. Here are a few reasons why I think most people find it hard: IMHO teaching OO from the beginning is a terrible idea. Procedural coding is not a "bad habit" and is the right tool for some jobs. Individual methods in an OO program tend to be pretty procedural looking anyhow. Furthermore, before learning procedural programming well enough for its limitations to become visible, OO doesn't seem very useful to the student. Before you can really grasp OO, you need to know the basics of data structures and late binding/higher order functions. It's hard to grok polymorphism (which is basically passing around a pointer to data and a bunch of functions that operate on the data) if you don't even understand the concepts of structuring data instead of just using primitives and passing around higher order functions/pointers to functions. Design patterns should be taught as something fundamental to OO, not something more advanced. Design patterns help you to see the forest through the trees and give relatively concrete examples of where OO can simplify real problems, and you're going to want to learn them eventually anyhow. Furthermore, once you really get OO, most design patterns become obvious in hindsight. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/16025",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2210/"
]
} |
16,070 | Is there a generally agreed upon definition for what a programming abstraction is, as used by programmers? [Note, programming abstraction is not to be confused with dictionary definitions for the word "abstraction."] Is there an unambiguous, or even mathematical definition? What are some clear examples of abstractions? | The answer to "Can you define what a programming abstraction is more or less mathematically?" is "no." Abstraction is not a mathematical concept. It would be like asking someone to explain the color of a lemon mathematically. If you want a good definition though: abstraction is the process of moving from a specific idea to a more general one. For example, take a look at your mouse. Is it wireless? What kind of sensor does it have? How many buttons? Is it ergonomic? How big is it? The answers to all of these questions can precisely describe your mouse, but regardless of what the answers are, it's still a mouse, because it's a pointing device with buttons. That's all it takes to be a mouse. "Silver Logitech MX518" is a concrete, specific item, and "mouse" is an abstraction of that. An important thing to think about is that there's no such concrete object as a "mouse", it's just an idea. The mouse on your desk is always something more specific - it's an Apple Magic Mouse or a Dell Optical Mouse or a Microsoft IntelliMouse - "mouse" is just an abstract concept. Abstraction can be layered and as fine- or coarse-grained as you like (an MX518 is a mouse, which is a pointing object, which is a computer peripheral, which is an object powered by electricity), can go as far as you want, and in virtually any direction you want (my mouse has a wire, meaning I could categorize it as an objects with a wire. It's also flat on the bottom, so I could categorize it as a kind of objects that won't roll when placed upright on an inclined plane). Object oriented programming is built on the concept of abstractions and families or groups of them. Good OOP means choosing good abstractions at the appropriate level of detail that make sense in the domain of your program and don't "leak". The former means that classifying a mouse as an object that won't roll on an inclined plane doesn't make sense for an application that inventories computer equipment, but it might make sense for a physics simulator. The latter means that you should try to avoid "boxing yourself in" to a hierarchy that doesn't make sense for some kind of objects. For example, in my hierarchy above, are we sure that all computer peripherals are powered by electricity? What about a stylus? If we want to group a stylus into the "peripheral" category, we'd have a problem, because it doesn't use electricity, and we defined computer peripherals as objects that use electricity. The circle-ellipse problem is the best known example of this conundrum. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/16070",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5933/"
]
} |
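The answer's mouse example, sketched as Python code: "Mouse" is the abstraction, the idea of a pointing device with buttons, while the object you actually construct is always something concrete and specific. The button counts are simplifications for the sketch.

from abc import ABC, abstractmethod

class Mouse(ABC):
    """A pointing device with buttons; everything else is detail."""
    @abstractmethod
    def buttons(self):
        ...

class LogitechMX518(Mouse):
    def buttons(self):
        return 8

class AppleMagicMouse(Mouse):
    def buttons(self):
        return 1   # simplification: it presents itself as a single button

def describe(mouse: Mouse):
    # Works for any mouse, because it relies only on the abstraction.
    print(f"{type(mouse).__name__} has {mouse.buttons()} button(s)")

describe(LogitechMX518())
describe(AppleMagicMouse())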
16,135 | Just a random observation, it seems that on StackOverflow.com, there are questions about if "++i == i++". That question gets asked all the time though, I think I saw it asked about 6 or 7 times in the past 2 months. I just wonder why C developers are so interested in it? The same concept/question exists for C# and Java devs as well, but I think I saw only one C# related question. Is it because so many examples use ++i? Is it because there is some popular book or tutorial? Is it because C developers just love to cram as much as possible into a single line for 'efficiency'/'performance' and therefore encounter 'weird' constructs using the ++ operator more often? | I suspect that at least part of it is a bit simpler: even now, we see a lot of questions like this starting around the beginning of the school year, and they gradually taper off throughout the year. As such, I think it's fair to guess that quite a few of them are simply a result of classes in which the teacher talks at least a little about it, but doesn't explain his point(s) very well (as often as not because he doesn't really understand them himself). Especially based on the people who seem to ask these questions, few are based on actual coding. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/16135",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6342/"
]
} |
16,165 | EDIT: This question at first seems to be bashing Java, and I guess at this point it is a bit. However, the bigger point I am trying to make is why any one single language is chosen as the one end all be all solution to all problems. Java happens to be the one that's used so that's the one I had to beat on here, but I'm not intentionality ripping Java a new one :) I don't like Java in most academic settings. I'm not saying the language itself is bad -- it has several extremely desirable aspects, most importantly the ability to run without recompilation on most any platform. Nothing wrong with using the language for Your Next App ^TM. (Not something I would personally do, but that's more because I have less experience with it, rather than it's design being poor) I think it is a waste that high level CS courses are taught using Java as a language. Too many of my co-students cannot program worth a damn, because they don't know how to work in a non-garbage-collected world. They don't fundamentally understand the machines they are programming for. When someone can work outside of a garbage collected world, they can work inside of one, but not vice versa. GC is a tool, not a crutch. But the way it is used to teach computer science students is a as a crutch. Computer science should not teach an entire suite of courses tailored to a single language. Students leave with the idea that all good design is idiomatic Java design, and that Object Oriented Design is the ONE TRUE WAY THAT IS THE ONLY WAY THINGS CAN BE DONE. Other languages, at least one of them not being a garbage collected language, should be used in teaching, in order to give the graduate a better understanding of the machines. It is an embarrassment that somebody with a PHD in CS from a respected institution cannot program their way out of a paper bag. What's worse, is that when I talk to those CS professors who actually do understand how things operate, they share feelings like this, that we're doing a disservice to our students by doing everything in Java. (Note that the above would be the same if I replaced it with any other language, generally using a single language is the problem, not Java itself) In total, I feel I can no longer respect any kind of degree at all -- when I can't see those around me able to program their way out of fizzbuzz problems. Why/how did it get to be this way? | This isn't a Java problem, it's a teaching problem. Not knowing how to program is not a languages fault, it's the students fault. Same goes for all your issues, GC, basic machine knowledge, how things work "under the hood" so to speak. Your major gripe about garbage collection throws me slightly. Unless you're doing C or C++ garbage collection is typically very good and not an issue. Would you rather they all learn assembly? Higher level languages that are strict are very useful for teaching. It gives you the flexibility of libraries, packages, and other niceties when you need it, without any of the confusing language "sugar" present in most other higher level languages (PHP, Ruby, Python, Perl). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/16165",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/886/"
]
} |
16,179 | Do you think Object Oriented Programming is a solution to complexity? Why? This topic may be a bit controversial, but my intention is to get the answer to that "why" from the experts here! | There is no solution to complexity. In "The Mythical Man-Month", Fred Brooks discusses the difference between accidental and essential complexity in programming. Accidental complexity is caused by our tools and methods, such as having to write and test additional code in a language because we can't express our ideas directly, and things like that. New methods and techniques can reduce accidental complexity. I can write programs faster and better than I could twenty-five years ago, because I have better languages and tools. Essential complexity comes from the fact that what we try to do with programming is inherently complicated, and that there is an irreducible complexity. "Essential", in this context, means "relating to the essence of the thing" rather than "very necessary". Therefore, he claimed that there would be no silver bullet, that writing software would continue to be difficult. I strongly recommend that you read his book: specifically, I recommend the Silver Anniversary edition, with an additional essay "No Silver Bullet". In that, he reviews the proposed solutions to complexity and considers their impact. (What he finds the most effective is shrink-wrap software - write something complex once, and sell thousands or millions of copies.) Now, object-oriented programming helps, when done right, by creating abstractions and hiding away complexity. An object of a class has a certain defined behavior that we can reason from, without caring about the complexity of the implementation. Properly written classes have low coupling with each other, and divide-and-conquer is an excellent way to deal with complexity if you can get away with it. They also have high cohesion, in that they're a set of functions and data that relate very closely to each other. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/16179",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6869/"
]
} |
16,189 | Which feature, in your view, has made object-oriented programming so successful? Message passing, inheritance, polymorphism, encapsulation, or some other feature that you may like to introduce? Also, I would like to know what the connection is between abstract data types and object-oriented programming. | I'd suggest that the most important characteristic of object-oriented programming is that of complexity management. The human brain can only hold so many concepts at one time - the oft-quoted limit of remembering 7+/-2 independent items comes to mind. When I'm working on a 600kloc system at work, I can't hold the whole thing in my head at once. If I had to do that, I'd be limited to working on much smaller systems. Fortunately, I don't have to. The various design patterns and other structures that we've used on that project mean that I don't have to deal with the entire system at once - I can pick up individual pieces and work on them, knowing that they fit into the wider application in well-defined ways. All of the important OO concepts provide ways to manage complexity. Encapsulation - let me deal with an external API that provides me with various services, without worrying how those services are implemented. Abstraction - let me concentrate on the essential characteristics and ignore what's not relevant. Composition - let me reuse components that have already been built in new combinations. Polymorphism - let me ask for a service without worrying about how different objects might provide it in different ways. Inheritance - let me reuse an interface or an implementation, providing only the pieces that are different from what has gone before. Single Responsibility Principle - let's keep the purpose of each object clear and concise, so it's easy to reason about. Liskov Substitution Principle - let's not lay traps for each other by introducing odd dependencies. Open/Closed Principle - let's allow extension and modification in ways that don't require us to risk breaking existing code. Dependency Injection - let's take composition to the next level and assemble the components together much later. Interface-oriented development - let's take abstraction to the next level and only depend on the abstraction, never on a concrete implementation. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/16189",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6869/"
]
} |
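The concepts listed above are language-agnostic; as a rough illustration only, here is a minimal C++ sketch of encapsulation, inheritance, and polymorphism (the Shape/Circle names are invented for the example and come from neither the question nor the answer):

#include <iostream>
#include <memory>
#include <vector>

// Encapsulation/abstraction: callers see only this interface, never the data layout.
class Shape {
public:
    virtual ~Shape() = default;
    virtual double area() const = 0;   // polymorphism: one request, many possible implementations
};

class Circle : public Shape {          // inheritance: reuse the Shape interface
public:
    explicit Circle(double radius) : radius_(radius) {}
    double area() const override { return 3.14159265 * radius_ * radius_; }
private:
    double radius_;                    // hidden implementation detail
};

int main() {
    std::vector<std::unique_ptr<Shape>> shapes;
    shapes.push_back(std::make_unique<Circle>(2.0));
    for (const auto& s : shapes)       // ask for a service without caring which class provides it
        std::cout << s->area() << '\n';
}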
16,243 | Anyone who has used R# or CodeRush knows how fast you can put together simple constructs (and refactor complex ones) with a simple keyboard shortcut. However, do these productivity plugins cause a false evaluation of ability during interviews? Part of being a productive code writer (and making a good first impression in an interview) is writing good code - fast. If I had two candidates: Doesn't use plugins. She thinks about the problem, sits down at a stock IDE at the interview PC that looks exactly like hers and types out the code in a minute or two, as usual. Done. Pass. Uses plugins. He thinks about the problem, sits down at a stock IDE at the interview PC and realizes "fe + tab" no longer writes a foreach loop automatically, and all the shortcuts are gone. He then bumbles around the keyboard hitting his normal hotkeys, popping up strange windows, and getting flustered. It takes him 3 minutes to write what normally would take 30 seconds. Done. Looked like he didn't know his way around the IDE at times. Must be new to this IDE and thus not have had much experience with it, or maybe with the language. Pass, but a 'meh' mark beside his name. In your experience, how do you handle plugins during interviews as the interviewer or interviewee? What are the best practices for finding out what the candidate really knows? There can be candidates who don't understand code and use R# as a crutch. There can also be candidates who know the code inside and out and use R# because it's just plain faster than the built-in VS or Eclipse templates. Is it best to just not use an IDE at all? Let them bring their own PC? Others? | I was candidate 2 in an interview very recently. I was given a vanilla install of the IDE on a PC with a non-standard keyboard and an unfamiliar testing framework, and I was asked to write a simple Fizz-Buzz app with unit tests. I fluffed it. I must have looked like a complete noob, stumbling around in the dark trying to hack out code. Needless to say, I wasn't offered the position. What I learned is that I rely very heavily on my plugins. They don't just get code typed faster - they actually shape the way I think about code and the way I go about coding. For example, I used to think very carefully about variable names because they could be a pain to change after the fact. Now, in contrast, I just make a half-baked guess about how I'll use the variable, hack out some code, let the variable tell me what it is for, and then hit Refactor->Rename to call it something more appropriate. Does this make me the less capable candidate? In some ways, I think it does. Someone who can write code in Notepad and have it compile and run correctly has certain advantages over someone like me who needs all the IDE goodness he can get. From that point of view, I perfectly understand why any company would choose not to hire a toolhead like me. On the other hand, I'm still a talented and capable Senior Developer. I've learned what works for me, and I practice the sort of laziness that makes me productive, given my own weaknesses and limitations. In short, I'm the kind of programmer that could really benefit a company like the one who turned me away. Interestingly, I had another interview a couple of weeks ago. Following my previous experience, I made a point of asking about additional tools or budget for buying them. Discovering that there were neither gave me one more reason for turning down the (rather generous) offer that they made to me.
So, to paraphrase Groucho, "I would not join any company that would have someone like me for an employee." Not unless they let me use ReSharper, anyway. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/16243",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1521/"
]
} |
16,326 | I suck at estimates. When someone asks me how long something will take, I don't even dare to make a guess since I will be completely off the mark. Usually I'm way too optimistic, and should probably multiply my guess by some large factor X... How can I learn to make better estimates? It's not taught at my uni, and even though we have deadlines for all lab assignments, I never think about how long something will actually take. Which I should. For everyone's sake (especially mine). | I'm still not great at it, but I have found that tracking how long you estimate tasks will take and how long they actually take can be a big help. That way you can get a solid idea of how far off you usually are. Issue management software with time tracking (Jira in my case) or a spreadsheet can be a big help with this. I think more than anything it's an experience thing. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/16326",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2210/"
]
} |
16,365 | Apart from the obvious questions relating to the specific project work someone is working on, are there any questions I should be asking a fellow dev who is leaving the company? So far I am thinking about: locations of things on the server that he uses but that maybe not everyone else does; credentials he has set up that we wouldn't have needed; client details he has not yet saved into our CRM system. | A phone number and/or e-mail address. No matter what you ask him before he leaves, you will remember 10 more things to ask him as you see his car pulling out of the parking lot. Note: you are much more likely to get good information if he is leaving on good terms - try to make the transition as pleasant as possible (no matter why he is leaving). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/16365",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5010/"
]
} |
16,390 | Many tend to write "C/C++", as if they were the same thing. Although they share many similarities, they are clearly not the same. But what really are the fundamental differences between C and C++? Is C++ an enhanced version of C, or are there features in C which do not exist in C++? | The following points relate to C++: (User-defined) static type system: allows static checks on your data and their usage - it points out a lot of errors that are easy to make in C. Multi-paradigm: allows working as in C, with object-oriented paradigms, with generic paradigms, etc. Constructors/destructors: the only way to say once what to do when creating or destroying something and be sure the user will not have to find the right function and call it, as in C. RAII (badly named): you don't always have to manage memory. Just keep things in scope and use smart pointers describing your objects' lifetimes. You can still use raw pointers. Templates: better than macros - a real language for manipulating and generating types before the final compilation. It only lacks a type system (see Concepts in future C++ standards). Operator overloading: lets you describe operations in a simple syntactic manner and even define embedded domain-specific languages inside your C++ code. Scoped names: namespaces, classes/structs, functions, etc. have simple rules to make sure names don't clash. Exception system: a way to propagate errors that is often better than return codes. In fact, return codes are good for domain-specific logical errors, because the application has to manage them. Exceptions are used for "hard" errors, things that make the following code just incorrect. They allow catching errors higher in the call stack if possible, reacting to such an exception (by logging or by fixing the state), and, with RAII, if well used, they don't make the whole program wrong - if done well, again. The standard library: C has its own, but it's all "dynamic". The C++ standard library is almost entirely (IO streams aside) made of templates (containers and algorithms), which allows generating code only for what you use. Better: as the compiler has to generate the code, it knows a lot about the context and will happily apply a lot of optimizations without requiring the coder to obfuscate their code - thanks to templates and other things. Const-correctness: the best way to make sure you don't change variables you shouldn't. It lets you specify read-only access to variables, and it is only checked at compile time, so there is no runtime cost. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/16390",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2210/"
]
} |
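As a rough illustration only, here is a minimal C++ sketch of the RAII and const-correctness points from the answer above (the FileHandle class and the file name are invented for the example, not taken from the answer):

#include <cstdio>
#include <string>

// RAII: the constructor acquires the resource and the destructor releases it,
// so no caller has to remember to call fclose(), even when control leaves the scope early.
class FileHandle {
public:
    explicit FileHandle(const std::string& path)
        : file_(std::fopen(path.c_str(), "r")) {}
    ~FileHandle() { if (file_) std::fclose(file_); }

    FileHandle(const FileHandle&) = delete;            // prevent accidental double-close
    FileHandle& operator=(const FileHandle&) = delete;

    // const-correctness: this member promises not to modify the object,
    // and the compiler enforces that promise at no runtime cost.
    bool isOpen() const { return file_ != nullptr; }

private:
    std::FILE* file_;
};

int main() {
    FileHandle config("example.txt");                  // hypothetical file name
    std::puts(config.isOpen() ? "opened" : "could not open");
}                                                      // config goes out of scope here; the destructor closes the file automatically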
16,436 | You know the prospective company fairly well but are asked "Do you have any questions for us?". To show interest, what are some of your favorite questions to ask in reply? | I like to ask one or two fairly casual questions about current "stuff" in the company. This might be a question about their newest product. Or it might be a question about an emerging market I know they're exploring. Or I might ask if they have a major release of product X coming up. (I mostly learned this from interviewees asking me these types of questions.) I think one reason this works well is because people love to talk about what they're currently working on, or what's new and interesting in their company. It gives them a chance to feel knowledgeable and end the interview on a high note. Another reason I think it works well is that it shows interest on your part in what the company's up to. And, depending on what all went on in the interview, it may give a sense that you did some research into the company beforehand (you did, didn't you?), and have a sort of unique interest in what the company does. Perhaps more than the above, I think it's good to not drop any "heavy" questions at this stage. My best interviews (giving and taking) have always wrapped up on a relatively light, conversational, friendly note. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/16436",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6527/"
]
} |
16,528 | Which is better/more generally accepted? This:

if(condition)
{
    statement;
}

Or:

if(condition)
    statement;

I tend to prefer the first one, because I think it makes it easier to tell what actually belongs in the if block, it saves others from adding the braces later (or creating a bug by forgetting to), and it makes all your if statements uniform instead of some with braces and some without. The second one, however, is still syntactically correct and definitely more compact. I'm curious to see which is more generally preferred by others though. | The first is better because the second is error-prone. For example, let's say you are temporarily commenting out code to debug something:

if(condition)
    // statement;
otherStatement;

Or adding code in a hurry:

if(condition)
    statement;
    otherStatement;

This is obviously bad. On the other hand, the first one does feel too verbose at times. Therefore, I prefer to just put everything on one line if it's sufficiently short and simple:

if(condition) statement;

This cuts down on syntactic noise while making the construct look like it does what it actually does, making it less error-prone. Provided that this syntax is only used for very simple, short conditions and statements, I find it perfectly readable. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/16528",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5279/"
]
} |
16,571 | It seems like every .NET book talks about value types vs reference types and makes it a point to (often incorrectly) state where each type is stored - the heap or the stack. Usually it's in the first few chapters and presented as some all-important fact. I think it's even covered on certification exams. Why does stack vs heap even matter to (beginner) .NET developers? You allocate stuff and it just works, right? | I'm becoming convinced that the primary reason this bit of information is considered important is tradition. In unmanaged environments, the distinction between stack and heap is important, and we have to manually allocate and delete the memory we use. Now, garbage collection takes care of the management, so they ignore that bit. I don't think the message has really gotten through that we don't have to care which type of memory is used either. As Fede pointed out, Eric Lippert has some very interesting things to say about this: https://docs.microsoft.com/en-us/archive/blogs/ericlippert/the-truth-about-value-types . In light of that information, you could adjust my first paragraph to basically read: "The reason people include this information and assume it is important is because of incorrect or incomplete information combined with needing this knowledge in the past." For those who think it is still important for performance reasons: what actions would you take to move something from the heap to the stack if you did measure things and find out that it mattered? More likely, you'd find a completely different way of improving performance for the problem area. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/16571",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6980/"
]
} |
16,595 | I keep reading that it is recommended for a programmer to take frequent breaks while programming, and the usual recommendation I see is 5 minutes every half hour or 10 minutes every hour. I gave it a try, but quite often I find something interesting during those 5 minutes, and it takes me away from what I was working on for longer than I planned. Either that, or my mind gets focused on something else and I find it hard to get back into my work and don't focus very well. Is it really that beneficial to take frequent breaks while programming? Am I doing something wrong for it to be decreasing my productivity instead of increasing it? | I do take frequent breaks, but they normally have a purpose (bathroom, food/coffee, etc.). I tend to find that while I am away from my desk, I am still thinking about the problem at hand. However, this thinking is not distracted by the code in front of me, and it allows me to think more about the problem as a whole rather than nitpicking through the details in front of me. Frequently, when I return to my desk, I have a new idea about how to approach the issue I am working on. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/16595",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1130/"
]
} |