source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---|
116,177 | Consider conflict markers. i.e.: <<<<<<< branch
blah blah this
=======
blah blah that
>>>>>>> HEAD In the particular case which has motivated me to post this question, the team member responsible had just completed a merge from upstream to our branch, and had in some cases left these in, as comments, as a sort of documentation over what had just been resolved. He left it in a compiled state, tests passing, so it's not as bad as you would think. Instinctively though, I really took objection to this, however being devils advocate to myself I can see why he might have done it: because it highlights to other team developers what has changed as a result of the merge. because those who are more expert with the particular pieces of code can then address the concerns illustrated by the comments so that he doesn't have to guess. because the upstream merge is a right pain and it can be difficult to justify the time to resolve everything well and completely, so some semi-complete FIXME notice is necessary, so why not use the original conflict as a comment to document this. My objection was instinctive, but I'd like to be able to justify it rationally, or see my position more wisely. Can anyone give me some examples or even experiences where people have had a bad time with someone else doing this and/or reasons why it's bad practice (or you can play devil's advocate and support it). My own immediate concern was that it would clearly have been annoying if I had been editing one of the files concerned, pulled the changes, got real conflicts, but also pulled in the commented ones. Then I would have had a very messy file indeed. Fortunately that didn't happen. | This is clearly wrong. It is the job of the version control system to keep track of changes, and it is the job of diff tools to show what has changed as a result of the merge. There should be a comment in the commit log, and maybe in the code, explaining what was changed and why. However, IMHO, leaving the conflict markers in as comments is the same as leaving dead code around. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/116177",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/39362/"
]
} |
116,308 | Is there any view on whether using the #define to define full lines of code for simplifying coding is good or bad programming practice? For example, if I needed to print a bunch of words together, I'd get annoyed typing << " " << To insert a space between words in a cout statement. I could just do #define pSpace << " " << and type cout << word1 pSpace word2 << endl; To me this neither adds or subtracts from the clarity of the code and makes typing slightly easier. There are other cases that I can think of where typing will be much easier, usually for debugging. Any thoughts on this? EDIT: Thanks for all the great answers! This question just came to me after doing a lot of repetitive typing, but I never thought there would be other, less confusing macros to use. For those who don't want to read all the answers, the best alternative is to use the macros of your IDE to reduce repetitive typing. | Writing code is easy. Reading code is hard. You write code once. It lives for years, people read it a hundred times. Optimize code for reading, not for writing. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/116308",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/38478/"
]
} |
116,330 | Possible Duplicate: Why do most programming languages not nest block comments? Most languages I've worked with don't have support for recursive/nested comments. Is there any reason why language designers would choose not to implement this? Is it deceptively complex? Would it have undesired results? I know there could be issues such as: /*
string Str = "aslfkalfksnflkn/*aslkfasnflkn";
*/
Console.WriteLine("asdoadokn"); However, we could still get the same issue with non-recursive ones (consider */ in a string in a comment) so i don't feel that's a good argument against. | The problem is that recursive comments force you to actually parse the comments section, pushing it outside the scope of a normal lexer and possibly introducing more problems. As a refresher: A compiler usually has a number of distinct stages with different jobs, and the first stages are the lexer, which gets the input program and separates it into a sequence of tokens (each of which contains a keyword, and identifier or an operator), and the parser, which structures this sequence of tokens into an abstract syntax tree (AST). For the scope of a lexer, remember that lexing can normally be done by regular expressions. Bracket-like structures like recursive comments cannot be parsed by regular expressions (see context-free grammars), so the lexer would have to have much additional complexity, e.g. would need to be implemented via a recursive-descent parser. Additionally, for C and similar languages (who most famously used the /**/ comment syntax) the need never arose to comment out large chunks of code, since they had the pre-processor and unused chunks of code were just wrapped by #if 0
....
#endif which circumvented the parsing problem by delegating the problem to a second, much simpler compiler (the pre-processor). Summarizing: Because recursive comments would make compiling more complicated, it's usually disallowed, and only languages with C-style comments, but without a preprocessor, really need it. That Java is amongst them is unfortunate, of course. Edit: This doesn't mean recursive comments would be impossible or even very hard to do. You could use a recursive-descent lexer or could have a preprocessor before the lexer for filtering out comments. However, both approaches come with considerable cost compared to the standard model (use RE to auto-build the lexer and an EBNF to auto-build the parser), and the gain is pretty small. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/116330",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/24589/"
]
} |
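To make the lexing argument above concrete, the following is a minimal C++ sketch of what handling nested /* ... */ comments requires: a counter that tracks nesting depth, which is exactly what a single regular expression cannot express. The function name is made up for illustration, and string literals (the complication the question itself raises) are deliberately ignored to keep the sketch short.

```cpp
#include <cstddef>
#include <string>

// Skip a possibly nested /* ... */ comment starting at `pos` (assumed to
// point at the opening "/*"). Returns the index just past the matching
// "*/", or std::string::npos if the comment is unterminated.
// The depth counter is what pushes nested comments beyond what a plain
// regular-expression lexer rule can match.
std::size_t skipNestedComment(const std::string& src, std::size_t pos) {
    int depth = 0;
    while (pos + 1 < src.size()) {
        if (src[pos] == '/' && src[pos + 1] == '*') {
            ++depth;
            pos += 2;
        } else if (src[pos] == '*' && src[pos + 1] == '/') {
            --depth;
            pos += 2;
            if (depth == 0) {
                return pos;     // matched the outermost "*/"
            }
        } else {
            ++pos;
        }
    }
    return std::string::npos;   // ran off the end of the input
}
```

In practice a generated lexer would need either hand-written code like this or an extra grammar rule, which is the additional complexity the answer refers to.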
116,346 | One of my friends was asked this interview question - "There is a constant flow of numbers coming in from some infinite list
of numbers out of which you need to maintain a datastructure as to
return the top 100 highest numbers at any given point of time. Assume all the numbers are whole numbers only." This is simple: you keep a list sorted in descending order and keep track of the lowest number in that list. If a new number is greater than that lowest number, you remove the lowest number and insert the new number into the sorted list at the right position. Then the question was extended - "Can you make sure that the order of insertion is O(1)? Is it possible?" As far as I knew, even if you add a new number to the list and sort it again using any sort algorithm, it would at best be O(log n) for quicksort (I think). So my friend said it was not possible. But the interviewer was not convinced, and asked him to consider some data structure other than a list. I thought of a balanced binary tree, but even there you will not get insertion in order 1. So now I have the same question: is there any data structure that can do insertion in O(1) for the above problem, or is it not possible at all? | Let's say k is the number of highest numbers you want to know (100 in your example).
Then you can add a new number in O(k), which is also O(1), because O(k*g) = O(g) when k is a nonzero constant. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/116346",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/39456/"
]
} |
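As a concrete illustration of the answer above, here is a minimal C++ sketch of such a bounded structure: it keeps only the k largest values seen so far in a small sorted buffer, so an insert costs at most O(k) work, which is O(1) once k is fixed at 100. The class name and the use of std::vector are illustrative choices, not taken from the original answer.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Keeps the k largest values seen so far (assumes k > 0). With k fixed
// (e.g. 100), each insert does at most O(k) shifting, i.e. O(1) for
// constant k.
class TopK {
public:
    explicit TopK(std::size_t k) : k_(k) {}

    void add(long long x) {
        if (values_.size() < k_) {
            values_.insert(std::upper_bound(values_.begin(), values_.end(), x), x);
        } else if (x > values_.front()) {      // front() is the current minimum
            values_.erase(values_.begin());    // drop the smallest of the k
            values_.insert(std::upper_bound(values_.begin(), values_.end(), x), x);
        }
    }

    // The k largest values seen so far, in ascending order.
    const std::vector<long long>& topAscending() const { return values_; }

private:
    std::size_t k_;
    std::vector<long long> values_;            // kept sorted ascending
};
```

A binary min-heap would lower the per-insert cost to O(log k), but as the answer notes, for a constant k either bound collapses to O(1).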
116,481 | Our code is bad. It might not have always been considered bad, but it is bad and is only going downhill. I started fresh out of college less than a year ago, and many of the things in our code puzzle me beyond belief. At first I figured that as the new guy I should keep my mouth shut until I learned a little more about our code base, but I've seen plenty to know that it's bad. Some of the highlights: We still use frames (try getting something out of a querystring, almost impossible) VBScript Source Safe We 'use' .NET - by that I mean we have .net wrappers that call COM DLLs making it almost impossible to debug easily Everything is basically one giant function Code is not maintainable. Each page has multiple files that are created every time a new page is made. The main page basically does Response.Write() a bunch of times to render the HTML (runat="server"? no way). After that there can be a lot of logic on the client side (VBScript), and finally the page submits to itself (often time storing many things in hidden fields) where it then posts to a processing page which can do things such as save the data to the database. The specifications we get are laughable. Often times they call for things like "auto-populate field X with either field Y or field Z" with no indication of when to choose field Y or field Z. I'm sure some of this is a result of not being employed at a software company, but I feel as if people writing software should at least care about the quality of their code. I can't even imagine that if I were to bring something up that anything would be done soon, as there is a large deadline looming, but we are continuing to write bad code and use bad practices. What can I do? How do I even bring these issues up? 75% of my team agree with me and have brought up these issues in the past, yet nothing gets changed. | Make sure that you're not overreacting. You are fresh, probably haven't worked a lot of (any?) other places, and so aren't prepared for the world of "real life code." Real life code is a horrible thing. It's like your nice school code and your obsessively tweaked personal project code had sex in the basement of a nuclear reactor and the baby grew up in a sewer of toxic waste; it's a hideous mutant. But assuming you're right, and the code is as bad as you say (i.e. worse than just the typically bad code), then you're right to be concerned. Talk to your team, and determine whether everyone else is on side. It will take work to improve the situation - if the rest of the team recognise the problem but don't care then you're wasting your time. Being a junior, you probably aren't in a position to lead. If you go to management yourself, as a new hire who is also a junior, your opinion will probably be disregarded. Get your lead developer or one of the most senior guys involved. Again, if none of the senior people are interested then you're wasting your time. Assuming you can get some senior technical people interested, I'd work towards identifying problem areas and possible solutions. For example, if "everything is basically one giant function" then next time you're working in that 'giant function' maybe you should refactor a little. Again, you need to get everyone onside. If you chip away at little pieces of your problem & improve piece by piece eventually they'll become much less of a problem. Every time you touch a piece of code consider whether it can be improved. You aren't going to sit down with management and say 'everything is bad and needs to be rewritten'. 
That makes no sense for them - costs a lot and is potentially very risky. Instead they should be made aware that there are problems, and that there's a plan to slowly improve as changes are made. They should be educated on the benefits of maintainable code. This should come from a senior person that they trust technically and professionally - not from you. Complete rewrite? Almost always a bad idea. Ultimately there's not much you can do because you're new. If no one wants to improve things, then you gather your experience and move to the next place. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/116481",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/19667/"
]
} |
116,531 | There are a number of questions on this site that give plenty of information about the benefits that can be gained from automated testing. But I didn't see anything that represented the other side of the coin: what are the disadvantages? Everything in life is a tradeoff and there are no silver bullets, so surely there must be some valid reasons not to do automated testing. What are they? Here's a few that I've come up with: Requires more initial developer time for a given feature Requires a higher skill level of team members Increase tooling needs (test runners, frameworks, etc.) Complex analysis required when a failed test in encountered - is this test obsolete due to my change or is it telling me I made a mistake? Edit I should say that I am a huge proponent of automated testing, and I'm not looking to be convinced to do it. I'm looking to understand what the disadvantages are so when I go to my company to make a case for it I don't look like I'm throwing around the next imaginary silver bullet. Also, I'm explicity not looking for someone to dispute my examples above. I am taking as true that there must be some disadvantages (everything has trade-offs) and I want to understand what those are. | You pretty much nailed the most important ones. I have a few minor additions, plus the disadvantage of tests actually succeeding - when you don't really want them to (see below). Development time: With test-driven development this is already calculated in for unit tests, but you still need integration and system tests, which may need automation code as well. Code written once is usually tested on several later stages. Skill level: of course, the tools have to be supported. But it's not only your own team. In larger project you may have a separate testing team that writes tests for checking the interfaces between your team's product and other's. So many more people have to have more knowledge. Tooling needs: you're spot on there. Not much to add to this. Failed tests: This is the real bugger (for me anyways). There's a bunch of different reasons, each of which can be seen as a disadvantage. And the biggest disadvantage is the time required to decide which of these reasons actually applies to your failed test. failed, because of an actual bug. (just for completeness, as this is of course advantageous) failed, because your test code has been written with a traditional bug. failed, because your test code has been written for an older version of your product and is no longer compatible failed, because the requirements have changed and the tested behavior is now deemed 'correct' Non-failed tests: These are a disadvantage too and can be quite bad. It happens mostly, when you change things and comes close to what Adam answered. If you change something in your product's code, but the test doesn't account for it at all, then it gives you this "false sense of security". An important aspect of non-failed tests is that a change of requirements can lead earlier behavior to become invalid. If you have decent traceability, the requirement change should be able to be matched to your testcode and you know you can no longer trust that test. Of course, maintaining this traceability is yet another disadvantages. And if you don't, you end up with a test that does not fail, but actually verifies that your product works wrongly . Somewhere down the road this will hit you.. usually when/where you least expect it. Additional deployment costs: You do not just run unit-tests as a developer on your own machine. 
With automated tests, you want to execute them on commits from others at some central place to find out when someone broke your work. This is nice, but also needs to be set up and maintained. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/116531",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3124/"
]
} |
116,542 | Is there a duration limit for FDD Project when those projects are likely to fail due external and internal factors? For example XP projects should kept short due the openess of the concept and dependents on the individuals in the team and lack of documentation which is vulnerable to disturbances like losing team members and so forth. | You pretty much nailed the most important ones. I have a few minor additions, plus the disadvantage of tests actually succeeding - when you don't really want them to (see below). Development time: With test-driven development this is already calculated in for unit tests, but you still need integration and system tests, which may need automation code as well. Code written once is usually tested on several later stages. Skill level: of course, the tools have to be supported. But it's not only your own team. In larger project you may have a separate testing team that writes tests for checking the interfaces between your team's product and other's. So many more people have to have more knowledge. Tooling needs: you're spot on there. Not much to add to this. Failed tests: This is the real bugger (for me anyways). There's a bunch of different reasons, each of which can be seen as a disadvantage. And the biggest disadvantage is the time required to decide which of these reasons actually applies to your failed test. failed, because of an actual bug. (just for completeness, as this is of course advantageous) failed, because your test code has been written with a traditional bug. failed, because your test code has been written for an older version of your product and is no longer compatible failed, because the requirements have changed and the tested behavior is now deemed 'correct' Non-failed tests: These are a disadvantage too and can be quite bad. It happens mostly, when you change things and comes close to what Adam answered. If you change something in your product's code, but the test doesn't account for it at all, then it gives you this "false sense of security". An important aspect of non-failed tests is that a change of requirements can lead earlier behavior to become invalid. If you have decent traceability, the requirement change should be able to be matched to your testcode and you know you can no longer trust that test. Of course, maintaining this traceability is yet another disadvantages. And if you don't, you end up with a test that does not fail, but actually verifies that your product works wrongly . Somewhere down the road this will hit you.. usually when/where you least expect it. Additional deployment costs: You do not just run unit-tests as a developer on your own machine. With automated tests, you want to execute them on commits from others at some central place to find out when someone broke your work. This is nice, but also needs to be set up and maintained. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/116542",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/38994/"
]
} |
116,614 | A Red/Black Tree is one way to implement a balanced binary search tree. The principles behind how it works make sense to me, but the chosen colors don't. Why red and black, as opposed to any other pair of colors or of attributes in general? When I hear "red and black," the first things that come to mind are checkerboards and Les Misérables, neither of which seems particularly applicable in this context. | EDIT : Answer from Professor Guibas: from Leonidas Guibas [email protected] to
of the "Red-Black" term mailed-by cs.stanford.edu hide details 16:16
(0 minutes ago) we had red and black pens for drawing the trees. I believe the term first appeared in "A dichromatic framework for balanced trees" from Leonidas J. Guibas and Robert Sedgewick in 1978. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/116614",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/935/"
]
} |
116,678 | I have been browsing some OpenJDK code recently and have found some intriguing pieces of code there that have to do with bit-wise operations. I even asked a question about it on StackOverflow. Another example that illustrates the point: public static int bitCount(int i) {
    // HD, Figure 5-2
    i = i - ((i >>> 1) & 0x55555555);
    i = (i & 0x33333333) + ((i >>> 2) & 0x33333333);
    i = (i + (i >>> 4)) & 0x0f0f0f0f;
    i = i + (i >>> 8);
    i = i + (i >>> 16);
    return i & 0x3f;
} This code can be found in the Integer class. I cannot help but feel stupid when I look at this. Did I miss a class or two in college, or is this not something I am supposed to just get? I can do simple bit-wise operations (like ANDing, ORing, XORing, shifting), but come on, how does someone come up with code like that above? How good does a well-rounded programmer need to be with bit-wise operations? On a side note... What worries me is that the person who answered my question on StackOverflow answered it in a matter of minutes. If he could do that, why did I just stare like a deer in the headlights? | I would say that as a well-rounded developer, you need to understand the operators and bitwise operations. So, at a minimum, you should be able to figure out the code above after a bit of thinking. Bitwise operations tend to be rather low level, so if you work on websites and LOB software, you are unlikely to use them much. Like other things, if you don't use them much, you won't be conversant in them. So, you shouldn't worry about someone being able to figure it out very quickly, as they (probably) work with this kind of code a lot. Possibly they are writing OS code, driver code or other tricky bit manipulation. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/116678",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/16992/"
]
} |
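For readers who want to see why the snippet above works, not just whether they ought to be able to read it, here is the same algorithm transcribed to C++ with each step commented. The comments follow the Hacker's Delight explanation already cited in the code ("HD, Figure 5-2"); unsigned arithmetic is used so that >> behaves like Java's >>>.

```cpp
#include <cstdint>

// Population count by summing bit counts in progressively wider fields
// (the technique from Hacker's Delight, Figure 5-2).
unsigned bitCount(std::uint32_t i) {
    // Each 2-bit field now holds how many of its two bits were set (0..2).
    i = i - ((i >> 1) & 0x55555555u);
    // Add adjacent 2-bit counts into 4-bit fields (each now 0..4).
    i = (i & 0x33333333u) + ((i >> 2) & 0x33333333u);
    // Add adjacent 4-bit counts into 8-bit fields (each now 0..8).
    i = (i + (i >> 4)) & 0x0f0f0f0fu;
    // Fold the four byte counts together; the total accumulates in the low byte.
    i = i + (i >> 8);
    i = i + (i >> 16);
    // The result is at most 32, so it fits in the low 6 bits.
    return i & 0x3f;
}
```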
116,715 | I've been a software developer (whether part time or full time) for almost 3 years. I've always been the type of person to have that knack for taking a lead on things, and providing the organization around getting things done. Ever since I was the lead on my senior design project back in college, I felt that this was my true calling, not sitting behind a desk coding. Now, I know I need to understand how to code for other developers to truly respect me. Also, I really do love coding. I work on plenty of side projects at home outside of work, keep up with best coding practices, and try and continuously further my knowledge of the domain. My main question is, what type of things or opportunities should I be looking out for that will help me progress my career to a more managerial role, rather than a coding role. Like I said, I love to code, however I'd love even more to be able to design things at a high level, and organize the team in such a way to get things done, and monitor their progress, while helping out with technical decisions here and there. These types of things truly make me happy, as opposed to just sitting behind a desk coding all day long. Obviously one of my main dreams is producing some sort of software on my own that would eventually blow up and make it big, and then beginning to hire a team and do it all myself, but I feel like the chances of that happening are far worse than just altering my career path a little to get where I want to go. I feel like I can garner that same satisfaction doing it for an employer rather than myself. Even though I haven't felt that way before, I feel like it has been mainly because I'm not doing what I TRULY want to do. Any tips, pointers, or things to keep in mind? Anyone that has done just this, and if so, how did you do it? | Moving from a software development role into a managerial or leadership role is something that takes time. I majored in software engineering, emphasizing software engineering process, and minored in business management and communication. Even with that academic experience on how to manage software projects, how to recruit and hire, how to lead teams, and how to communicate with groups verbally and in writing, I found that most managerial and leadership roles, especially in the industry that I wanted to work in, require 5+ years of experience in software engineering (I had 2, including co-ops and internships). In the mean time, I simply continued my studies on project management topics. The first thing I would recommend is becoming a good communicator and negotiator. Learn how to have the conversations that matter. Even as a developer, there are decisions that have to be made, with coworkers, clients, and users. Sometimes you have to have the difficult conversations and reach an agreement that benefits everyone. It's not an easy goal, but the book Difficult Conversations: How to Discuss What Matters Most is one that I would recommend that covers this. There are others, like Getting Past No and Getting to Yes: Negotiating Agreement Without Giving In that would also be helpful. These are relevant regardless of what position you're in. On the more technical side, an understanding of the software development lifecycle is important for leading and managing software teams. Leadership positions probably mean that you are involved with requirements engineering, software system architecture, design, implementation, testing and quality assurance, and maintenance tasks. 
Although you can't be an expert at all of these, a manager or leader has to at least understand all of them. As a developer, you probably do most of your work in design, implementation, and maintenance, with some testing as well. I would very much recommend books such as Software Requirements (and it's companion, More About Software Requirements ), Software Architecture in Practice (although my university switched to Software Systems Architecture: Working With Stakeholders Using Viewpoints and Perspectives after I took the architectures course, and it has been recommended to me), and Metrics and Models in Software Quality Engineering . From a project management perspective, you can learn about process models and methodologies. There are agile methods, such as Scrum and Extreme Programming and plan-driven methods such as Waterfall and Spiral. There are also methodology frameworks, such as CMMI and the Personal Software Process/Team Software Process. The ones that are relevant to you depend on where you work, in terms of the industry and company. There are a number of books on various methodologies and frameworks, but I would highly recommend Rapid Development: Taming Wild Software Schedules for general software engineering management and software engineering process. If you wanted to continue your education, you can look at more of a technical management track versus more of a business management track. If you wanted a technical management position, look at software engineering, software engineering management, and engineering management programs. For more of a business management track, you can consider MBA programs, business management, or some engineering management programs that have a strong economics or financial component. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/116715",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
116,730 | I have diagnosed ADD. Mild but enough to affect my work: Easily distracted Can't concentrate on one project at a time Addicted to the web Procrastination etc. What strategies do you use to compensate? One clarification I have real ADD. I was diagnosed with it when I was a child and have wrestled with it all my life. I am not talking about artificial ADD, which is induced by media overload. Update I just read this description ADD/ADHD. It's a great description, especially for us programming ADDers: I am like a toolbox, with all the tools I will ever need lain gently and neatly in the box, ready for me to use them. The toolbox is translucent so I can see them there. The key to the toolbox is locked inside of it. | I've had similar problems as you do. The two main strategies that have helped me are Only one project at any time: I've suffered from following more projects than I can count on my fingers, each "clamouring" for attention. Now I've radically cut down on projects either by finishing them "once and for all" or by simply dropping them altogether. Earlier this year I've founded a company and now I'm down to three projects: Health, Family and Company. Separation of concerns: When doing everything on one desk, the risk is high to "drift" from one thing to another. I've removed all procrastination stuff from my work PC and use my Laptop only for "play" and other private internet usage (mails, userfriendly , slashdot ). The PC is on my desk, the Laptop in the Living Room. This keeps a healthy distance between Company and private stuff. Of course these two things are quite general stuff. Some of the smaller, but also helpful things: No Lurking on IRC/other chat channels. Either I need or give support/community in the project I'm working on or I'm not in that channel. Close The Mailer. Checking mails because the project just compiles is just stupid, since waiting for a compile is just enough time to see whether or not there is mail. If there wasn't any mail, I've interrupted my flow for nothing and if there was mail, I'd either have to interrupt my flow even more to handle it or punt it anyways. So now, I'm checking my mails three times a day and have reduced my interruption count significantly. Exercise. Often while programming I feel the urge to jump up and run around in my room. Especially when sitting before the tougher design decisions. Going biking every other day has significantly improved my ability to concentrate on stuff as well as the added benefit of improving overall stamina and well being. Spent Time Bookkeeping. I've got a simple spreadsheet where I enter my Company time and some private stuff. I keep it to 15 minute chunks, which makes data entry much easier and any smaller units just cause more overhead. If I'm not doing something I can "bill" on the Company and it's between 8:00 and 18:00 I know I'm doing something wrong. Also, at the end of the week I get a nice report how I spent my time. One big caveat here though. When I started this after finishing university it was a hard blow for me how little time I was spending "productively." It took me quite a while to recognize, that I need to record everything I don't do for Family. Specifically: I need to record times spent exercising as productive. See above. I need to record times lost due to external factors: I'm travelling a lot lately and when I've only recorded 25 hours of work in a week, I suck. But if I add the two days I spent on the road that week, I see that I did more than 40 hours. 
Suddenly "I suck" changes into "the external-factors-that-cause-my-travels suck," which is a much healthier thing to say. Eat and Sleep Regularly. Stand up at 07:00, Breakfast, Lunch at 12:00, Dinner at 18:00, Sleep from between 22:00 and 23:00. Appreciate the Small Successes. Even if I'm not yet there, today is better than yesterday and tomorrow will be better than today. Adjust you Environment. That's quite a broad topic. As a home worker, I got myself a nice new desk and chair which I now use exclusively for work. Also I really like listening to music, but vocals -- especially in my mother tongue -- distract me incredibly. I've tried instrumental music, which worked for a while until the trance beats got to my nerves. Now I'm going for the complete silence. It might be different for you, but there's only one way to find out for real: experiment and watch yourself while working. Become Accountable. Get a Conscience. I founded my Company together with an old friend, whom I deeply respect. By his presence and by knowing that our success is now is interlocked, I feel compelled to give my best. And finally Constant Vigilance! Distractions tend to creep up from every nook and cranny of your life (stackoverflow anybody? ;). Keeping them at bay and managing them will stay a constant struggle. Having said this, I have to close my stackoverflow tabs and get back to programming! PS: I've talked with someone from my family who is working with ADHD kids. She told me that ADHD is a kind of catch-all/fallback diagnostic (see the ADHD Wikipedia entry for corrobation : DSM-IV V.) and is hard to diagnose "scientifically" since the patient has to be monitored in different settings over a longer period of time AND other causes for the symptoms have to be excluded. Practically ADHD is handled as "the condition helped by the prescribed medicines", since there currently are no globally accepted non- psychiatric assesment test procedures and not enough knowledge about the underlying biochemical functions. Again, quoting Wikipedia : "There are several effective and clinically proven options to treat people with ADHD. Combined medical management and behavioral treatment is the most effective ADHD management strategy, followed by medication alone, and then behavioral treatment." From what I gathered from the discussion with her, the problem is that doctors often choose (cheap, symptom-oriented) medication over (expensive, cause-oriented) therapy with little regard to the long-term effects on the patient. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/116730",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3228/"
]
} |
116,806 | A bit of context: I'm in 3rd year of college. students are divided into teams of 4. Almost everyone will be working under windows (except a few like me who are on linux). As part of the school curriculum, we will soon start working on a project for a real client, but me and another team are wondering which way would be best for sharing our code with each other. I've been working part-time for 3 years and have had plenty of experience using both git and mercurial on different projects, so I don't have any problems using one system or the other. However, none of my teammates have ever used a version control system before. There's also another team who've tried using SVN but have had major problems and would prefer trying out something else, so they've asked for my opinion. My thoughts : I've heard that mercurial + TortoiseHg has a better integration under windows, but I'm wondering if the concept of anonymous heads might confuse them even If I explain them. On the other hand, I find that git branches are easier for a beginner to understand (clear seperation of work for each programmer) but doesn't work as well under windows. | Mercurial, without a doubt. This is of course, subjective, and vaires from one person to another, but the general opinion is that Mercurial is easier to grok, expecially to someone new to VCS or someone coming from one of the old generation VCS's. Points in hand: Mercurial was developed with Windows in mind ; Git was ported. No matter what anyone tried to tell me, Git is still a second rate citizen on Windows. Things have certanly improved over the past few years, but you still see a lot of bickering of why something works/or doesn't work on Windows as it did on *nix box. TortoiseHg works very nicely, and Visual Studio implementations are even easier to use without breaking your workflow. If someone starts the discussion "Git is more powerful after you ...", then he's pretty much saying it all. For 95% of users Mercurial seems more intuitive. Starting from a lack of index, to its more intuitive options (option switches are coherent), to its local numbers for commits instead of SHA1 hashes (who ever thought SHA1 hashes are user friendly??!) Mercurial's branches are no less powerful than Git's. Post describing some of the differences. Going back to some previous revision is as simple as "update old-revision". Starting from there; just do a new commit. Named branches are also available if one wishes to use them. Now, I know that all these are details, and they can be learned and everything, but the thing is ... these things add up. And in the end, one seems simpler and more intuitive than another. And version control system should not be something one spends time learning - you're there to learn programming and then to programm; VCS is a tool. Just to note; I use Git and Mercurial daily. Don't have trouble using either one of them. But when someone asks me for a recommendation, I always recommend Mercurial. Still remember, when it first came into my hands, it felt very intuitive. In my experience, Mercurial just produces less WTF/minute than Git. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/116806",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/39597/"
]
} |
116,863 | As a result of the comment discussion here , I wonder whether you can learn Functional Programming in C? | Obviously you can do functional programming in C. In theory, you can also learn functional programming principles in C, but the language doesn't make it easy. I assume you have at least a bit of a background in OOP; if you do, you should be aware that OOP can be done in C, including polymorphism, getters/setters, visibility rules, etc. etc., but it's fairly painful to do so, and you need to know both OOP and C inside-out to pull it off. It's much the same with FP. What you should be doing is first learn a functional programming language (most of them have surprisingly simple syntax rules; it's not the syntax that makes them hard to learn), and then let your newly-acquired wisdom influence the way you write C. As per request, a few things you can learn from FP and then apply in, say, C, C++ or Java: Avoid state, especially shared mutable state Appreciate pure functions Appreciate lazy evaluation Being able to model in terms of verbs rather than nouns Being able to approach problems recursively as well as iteratively Using higher-order functions Thinking of functions as just another kind of value, that you can pass around and combine with other values Using first-class functions as an alternative for object-based polymorphism | {
"source": [
"https://softwareengineering.stackexchange.com/questions/116863",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1512/"
]
} |
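To make a couple of the listed habits concrete - treating functions as values and using higher-order functions - here is a small sketch in C++, one of the languages the answer suggests carrying the ideas back to. The function names are invented for illustration.

```cpp
#include <functional>
#include <iostream>

// A higher-order function: it takes a function and returns a new function.
std::function<int(int)> apply_twice(const std::function<int(int)>& f) {
    return [f](int x) { return f(f(x)); };
}

int main() {
    // Functions are just values here: stored, passed around, and returned.
    auto add3 = [](int x) { return x + 3; };
    auto add6 = apply_twice(add3);

    std::cout << add6(10) << '\n';   // prints 16
}
```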
117,092 | By researching around (books, Wikipedia, similar questions on SE, etc) I came to understand that Imperative programming is one of the major programming paradigms, where you describe a series of commands (or statements) for the computer to execute (so you pretty much order it to take specific actions, hence the name "imperative"). So far so good. Procedural programming, on the other hand, is a specific type (or subset) of Imperative programming, where you use procedures (i.e., functions) to describe the commands the computer should perform. First question : Is there an Imperative programming language which is not procedural? In other words, can you have Imperative programming without procedures? Update : This first question seems to be answered. A language CAN be imperative without being procedural or structured. An example is pure Assembly language. Then you also have Structured programming, which seems to be another type (or subset) of Imperative programming, which emerged to remove the reliance on the GOTO statement. Second question : What is the difference between procedural and structured programming? Can you have one without the other, and vice-versa? Can we say procedural programming is a subset of structured programming, as in the image? | Many of the terms can be reused (often misused) about programming languages, especially those other than object oriented ones. Here are some small descriptions of the terms. Imperative programming -
In the good old days, when programming was largely done in assembly, code would have tons of GOTOs. Even higher level languages like FORTRAN and BASIC began with the same primitives. In this paradigm, the entire program is a single algorithm or complete piece of functionality written linearly, step by step. This is the imperative style. Understand that one can still write totally bad imperative code even in modern C, but it is much easier to organize code in higher level languages. Structured and Modular programming -
Most often we can use the two terms interchangeably, but there are subtle differences. As higher level languages grew richer, people realized that units of work should be broken into smaller, tractable parts - that is when functions came into existence and programming became a hierarchy of functions, with many lower-level ones reusable. Structured programming is any programming in which functionality is divided into units such as for loops, while loops, if...then blocks and so on, and in which a piece of code (a function) can be reused. In modular programming, one creates a physical package - a chunk of code that can be shipped, fairly general purpose and reusable - built from modules compiled together. So one can hardly find modular programs that are not structured, and vice versa; the technical definitions differ subtly, but mostly structured code can be made modular and the other way around. Then came "object oriented programming", which is well defined in the literature. Understand that object oriented programming is, by definition, a form of structured programming. Code that is structured and function-based but NOT object oriented is what is usually called Procedural programming. So, basically, structured code where functions (or procedures) dominate over data is called procedural, whereas a class- and object-based representation is called object oriented. Both, by definition, are also modular. Many people treat all of structured programming (perhaps excluding object-based) as imperative programming; I guess this is only due to the lack of a clear definition of imperative programming - but it is wrong. You are doing structured programming precisely when you are not writing in a purely imperative, step-by-step way - although I can still write a lot of functions as well as a lot of goto statements inside a C or FORTRAN program and mix the two up. To be specific to your questions: First Question :
Pure assembly language is an imperative language which is NOT structured or procedural. (Having step-by-step control flow doesn't make a language procedural - dividing functionality into functions is what makes it procedural.) Correction: most modern forms of assembly DO support the use of functions. In fact, everything that's possible in high level code HAS to exist at the low level for it to work. Although it's far better practice to write procedural code, it's possible to write both procedural and purely imperative code; compared with the latter, procedural code is more maintainable and easier to understand (avoiding horrible spaghetti code). I think shell/bash scripts better fit the accolade of being purely imperative, but even then, most have functions - developers definitely understand how much value they have. Second Question :
Procedural programming is a FORM of structured programming. BONUS According to some taxonomy the primary classification is Declarative (or functional language) vs. Imperative. Declarative languages allow computation without describing its control flow whereas imperative is where explicit control flow (step-by-step) is defined. Based on this classification, Imperative programing, for some can be a super-set of structured, modular and OO programming. See this: Functional Programming vs. OOP After Object oriented, there have been other programming paradigms invented:
See here for more details: What are the differences between aspect-oriented, subject-oriented, and role-oriented programming? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/117092",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/34340/"
]
} |
117,119 | I have heard several times that every programmer should learn one of each type of language. Now, this is not necessarily true, but I believe it is a good idea. I've learned a Procedural Language (Perl), but what are the other types? What are the differences between them and what are some examples of each? | Even though terminology is far from standardized, a common way to is categorize major programming paradigms into Procedural Functional Logical Object-Oriented Generic You seem to already know what procedural programming is like. In functional languages functions are treated as first-class objects. In other words, you can pass a function as an argument to another function, or a function may return another function. Functional paradigm is based on lambda calculus, and examples of functional languages are LISP, Scheme, and Haskel. Interestingly, JavaScript also supports functional programming. In logical programming you define predicates which describe relationships between entities, such as president(Obama, USA) or president(Medvedev, Russia) . These predicates can get very complicated and involve variables, not just literal values. Once you have specified all your predicates, you can ask questions of your system, and get logically consistent answers. The big idea in logical programming is that instead of telling the computer how to calculate things, you tell it what things are. Example: PROLOG. Object-oriented paradigm is in some ways an extension of procedural programming. In procedural programming you have your data, which can be primitive types, like integers and floats, compound types, like arrays or lists, and user-defined types, like structures. You also have your procedures, that operate on the data. In contrast, in OO you have objects, which include both data and procedures. This lets you have nice things like encapsulation, inheritance, and polymorphism. Examples: Smalltalk, C++, Java, C#. Generic programming was first introduced in Ada in 1983, and became widespread after the introduction of templates in C++. This is the idea that you can write code without specifying actual data types that it operates on, and have the compiler figure it out. For example instead of writing void swap(int, int);
void swap(float, float);
.... you would write void swap(T, T); once, and have the compiler generate specific code for whatever T might be, when swap() is actually used in the code. Generic programming is supported to varying degrees by C++, Java, and C#. It is important to note that many languages, such as C++, support multiple paradigms. It is also true that even when a language is said to support a particular paradigm, it may not support all the paradigm's features. Not to mention that there is a lot of disagreement as to which features are required for a particular paradigm. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/117119",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/34364/"
]
} |
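Expanding the "void swap(T, T);" shorthand from the answer above into real, compilable C++ (the reference parameters and the name swap_values are additions needed for a working example; the original line is only meant as pseudo-code):

```cpp
#include <iostream>
#include <string>

// One generic definition; the compiler generates a concrete version
// for every T the program actually uses.
template <typename T>
void swap_values(T& a, T& b) {   // by reference so the caller sees the swap
    T tmp = a;
    a = b;
    b = tmp;
}

int main() {
    int x = 1, y = 2;
    swap_values(x, y);                 // instantiates swap_values<int>

    std::string s = "hello", t = "world";
    swap_values(s, t);                 // instantiates swap_values<std::string>

    std::cout << x << " " << y << " " << s << " " << t << "\n";   // 2 1 world hello
}
```

The point stands as in the answer: swap_values is written once, and the int and std::string versions are generated only when they are actually used.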
117,179 | Possible Duplicate: What does the suffix after software engineer/developer job titles mean? (i.e. Software Developer III) work advancement titles I've been struggling to understand job hierarchy in software engineering. The system is further complicated because of the lack of consistent naming conventions when assigning roles: for example, some companies just have a "senior software developer" position while others have Software Engineer I, Software Engineer II, Software Engineer III, and so on. Even in the top level positions, we have things like "Principal Software Engineer" vs. "Staff Software Engineer". What is the standard hierarchy for software engineers? Is there a generally accepted pecking order? | Wikipedia gives a good overview of corporate titles and under the hierarchy for Information Technology companies you have the following: Chief Executive Officer Vice President Senior Project Manager / Senior Product Manager / Senior Software Architect Project Manager / Product Manager / Software Architect Project Lead / Senior Team Lead / Senior Technical Lead Module Lead / Team Lead / Technical Lead Senior Software Engineer / Senior QA Engineer Software Engineer / QA Engineer While each company will have it's own naming convention and resposibilities for a role, they do seem to fall within this basic hierarchy. Hope this helps you out some. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/117179",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/39717/"
]
} |
117,243 | I seem to be repeatedly stuck in a situation where release dates are set not based on anything technical, but because someone in Sales has committed to a customer by then. Based on discussions with friends in development at other companies, the same thing seems to happen. "Here is the committed feature set and here is the committed release date", and it's difficult to argue because at this point there is money riding on it, and that trumps everything. In general, is there a way to push back on this? If not for this release, what about in future? The problem I have is that the only way I see one way of doing so, and that's by doing the best I can, but release the software 'as is', so to speak. I don't want to release bug-ridden software since it's my name attached, but doing 80 hour weeks for months at a time just vindicates the arbitrarily set release date. edit: for the record, I'm not doing 80 hour weeks now, just that comes to mind as what would be required to cover the expected feature set by the release date. | Stop doing the 80 hour weeks. This is positive reinforcement . Because they are getting the product on time with expected costs, they are going to continue doing it, regardless of what it does to you. If they cannot budget time properly, then that's management's fault. Not yours. Let them miss a few deadlines. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/117243",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/32502/"
]
} |
117,253 | I feel that the often seen C/C++ doesn't really describe my skills in my CV. So I'm planning to separate it into advanced C++ knowledge and mediocre C skills. Do you think this is confusing for the reader? She could think: "C is a subset of C++, so what is this guy trying to tel me?" Well, what I'm trying to tell is: I have done several real world C++ projects while pure C projects where just a hobby thing. Do you agree that a skilled C++ programmer not necessarily is a qualified C guy or do you think that this switch is done easily? | You have asked so many questions in one; let me try to answer while segregating them. I hire many people which falls in this profile type and quite often i have to take so many interviews and reject people because often they don't quite have clear answer to the questions you asked. Is having good mastery in C or C++ is good enough to qualify you for the other program? If you are at a senior level actually NO . i.e. if you have been something like 6 years doing C++ programming on some core enterprise applications, and now i am trying to put you in embedded systems which is all in C - likely that your programming syntax and how to debug stuff is not an issue. But If you need think through of a problem - you are certainly very messy. This is true from either side to the other language. The more years you have been spending only one type of language - less easy to transform in other form! It is not about whether you can learn the syntax of language but you actually think differently when you are in C, C++, Java, perl and Python. To stretch the question - most often - C++ and Java guys can be used interchangeably and so is Perl,PHP, Phython. C is quite a different breed! If you are a relatively young guy - chances are that you can catch up fast. Does it mean that i have higher skill when i know C++ rather than C Actually No. No because, as a general rule, if you are capable of creating a full product out of C is much more difficult task compared to doing it in C++. Number of people who can master troubleshooting shared memory systems are much less than number of people who can write a decent GUI program using VC++ or similar framework. Does this mean knowing C (or the lowest level programming) is the highest level of achievement? No again! This is not contradiction. This time it depends on the domain you compare. If you are looking at systems programming inside the Linux kernel, or something very close to hardware, programming ability in C is more relevant, However, if you are writing banking software or some business rule engines - C++ is a usually natural choice. The point is your true strength is not much about the syntax of the language but the way you solve a class of problems and you can only hope to master a few catagories/domain in your life. If you are putting something in your resume - that is what really counts. Does it mean that if have only been in C - i don't know Object oriented programming? Not at all. In fact, my litmus test in the interview to know whether guy is from C background or C++ is to ask a very simple question - "So can you do Object Oriented Programing in C?" - the guy jumps and says - "Definitely NO!" he/she is C++ fellow. 
The point is, when you write really complex code - multimedia code, a multi-threaded system, a protocol layer stack - you still 'think in objects'. The C compiler doesn't punish you if you are bad at encapsulating two routines or objects, but as the system scales it keeps biting you. And I believe, at times, that many who were born into the era of object orientation are fairly weak on exactly how encapsulation really gets violated in code that is packed with classes and objects. Sorry, I digressed. But the point is: it is your skills in problem analysis and design that matter more than your programming skills alone. Does it mean I should list domain-specific exposure and design skills separately? Definitely yes! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/117253",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/27844/"
]
} |
117,348 | I have been reading " Clean Code " by Robert Martin to hopefully, become a better programmer. While none of it so far has been really ground breaking it has made me think differently about the way I design applications and write code. There is one part of the book that I not only don't agree with, but doesn't make sense to me, specifically in regards to interface naming conventions. Here's the text, taken directly from the book. I have bolded the aspect of this I find confusing and would like clarification on. I prefer to leave interfaces unadorned. The preceding I, so common in today’s legacy wads, is a distraction at best and too much information at worst. I don’t want my users knowing that I’m handing them an interface . Perhaps it is because I'm only a student, or maybe because I have never done any professional or team based programming but I would want the user to know it is an interface. There's a big difference between implementing an interface and extending a class. So, my question boils down to, "Why should we hide the fact that some part of the code is expecting an interface?" Edit In response to an answer: If your type is an interface or a class is your business, not the business of someone using your code. So you shouldn't leak details of your code in this thrid party code. Why should I not "leak" the details of whether a given type is an interface or a class to third-party code? Isn't it important to the third-party developer using my code to know whether they will be implementing an interface or extending a class? Are the differences simply not as important as I'm making them out to be in my mind? | If you stop to think about it, you'll see that an interface really isn't semantically much different from an abstract class: Both have methods and/or properties (behaviour); Neither should have non-private fields (data); Neither can be instantiated directly; Deriving from one means implementing any abstract methods it has, unless the derived type is also abstract. In fact, the most important distinctions between classes and interfaces are: Interfaces cannot have private data; Interface members cannot have access modifiers (all members are "public"); A class can implement multiple interfaces (as opposed to generally being able to inherit from only one base class). Since the only particularly meaningful distinctions between classes and interfaces revolve around (a) private data and (b) type hierarchy - neither of which make the slightest bit of difference to a caller - it's generally not necessary to know if a type is an interface or a class. You certainly don't need the visual indication. However, there are certain corner cases to be aware of. In particular, if you're using reflection, interception, dynamic proxies/mixins, bytecode weaving, code generation, or anything that involves messing directly with the environment's typing system or code itself - then it's very helpful and sometimes necessary to know right off the bat whether you're dealing with an interface or a class. You clearly don't want your code to mysteriously fail because you tried to add a class, rather than an interface, as a mixin. For typical, vanilla, run-of-the-mill business logic code, though, the distinctions between abstract classes and interfaces do not need to be advertised because they'll never come into play. All of this being said, I tend to prefix my C# interfaces with I anyway because that is the .NET convention used and advocated by Microsoft. 
And when I'm explaining coding conventions to a new developer, it's far less hassle to just use Microsoft's rules than to explain why we have our own "special" rules. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/117348",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/28038/"
]
} |
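To ground the interface-versus-class point from the answer above in code, here is a small hypothetical C# sketch (the Notifier types are invented for illustration): from the call site, consuming an interface and consuming an abstract class look the same, which is the argument that the I prefix carries little extra information.
using System;
// An interface and an abstract class exposing the same contract.
public interface INotifier
{
    void Send(string message);
}
public abstract class NotifierBase
{
    public abstract void Send(string message);
}
public class EmailNotifier : INotifier
{
    public void Send(string message) => Console.WriteLine("Email: " + message);
}
public class SmsNotifier : NotifierBase
{
    public override void Send(string message) => Console.WriteLine("SMS: " + message);
}
public static class NotifierDemo
{
    // Neither call site cares whether the abstraction is an interface or a class;
    // both just see "something with a Send method".
    public static void Alert(INotifier notifier) => notifier.Send("interface-based");
    public static void Alert(NotifierBase notifier) => notifier.Send("class-based");
    public static void Main()
    {
        Alert(new EmailNotifier());
        Alert(new SmsNotifier());
    }
}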
117,357 | Is Entity Framework 4 a good solution for a public website with potentially 1000 hits/second? In my understanding EF is a viable solution for mostly smaller or intranet websites, but wouldn't scale easily for something like a popular community website (I know SO is using LINQ to SQL, but.. I'd like more examples/proof...) Now I am standing at the crossroads of either choosing a pure ADO.NET approach or EF4. Do you think the improved developer productivity with EF is worth the lost performance and granular access of ADO.NET (with stored procedures)? Any serious issues that a high traffic website might face, was it using EF? Thank you in advance. | The question "which ORM should I use" is really targeting the tip of a huge iceberg when it comes to the overall data access strategy and performance optimization in a large scale application. All of the following things ( roughly in order of importance) are going to affect throughput, and all of them are handled (sometimes in different ways) by most of the major ORM frameworks out there: Database Design and Maintenance This is, by a wide margin, the single most important determinant of the throughput of a data-driven application or web site, and often totally ignored by programmers. If you don't use proper normalization techniques, your site is doomed. If you don't have primary keys, almost every query will be dog-slow. If you use well-known anti-patterns such as using tables for Key-Value Pairs (AKA Entity-Attribute-Value) for no good reason, you'll explode the number of physical reads and writes. If you don't take advantage of the features the database gives you, such as page compression, FILESTREAM storage (for binary data), SPARSE columns, hierarchyid for hierarchies, and so on (all SQL Server examples), then you will not see anywhere near the performance that you could be seeing. You should start worrying about your data access strategy after you've designed your database and convinced yourself that it's as good as it possibly can be, at least for the time being. Eager vs. Lazy Loading Most ORMs used a technique called lazy loading for relationships, which means that by default it will load one entity (table row) at a time, and make a round-trip to the database every time it needs to load one or many related (foreign key) rows. This isn't a good or bad thing, it rather depends on what's actually going to be done with the data, and how much you know up-front. Sometimes lazy-loading is absolutely the right thing to do. NHibernate, for example, may decide not to query for anything at all and simply generate a proxy for a particular ID. If all you ever need is the ID itself, why should it ask for more? On the other hand, if you are trying to print a tree of every single element in a 3-level hierarchy, lazy-loading becomes an O(N²) operation, which is extremely bad for performance. One interesting benefit to using "pure SQL" (i.e. raw ADO.NET queries/stored procedures) is that it basically forces you to think about exactly what data is necessary to display any given screen or page. ORMs and lazy-loading features don't prevent you from doing this, but they do give you the opportunity to be... well, lazy , and accidentally explode the number of queries you execute. So you need to understand your ORMs eager-loading features and be ever vigilant about the number of queries you're sending to the server for any given page request. 
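To make the eager-loading point concrete before moving on, here is a rough sketch of per-query eager loading with Include; the Customer/Order model and ShopContext below are made up for illustration, and it assumes the DbContext API that shipped with EF 4.1:
using System.Data.Entity;   // EF 4.1+ DbContext API (assumption)
using System.Linq;
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public virtual System.Collections.Generic.ICollection<Order> Orders { get; set; }
}
public class Order
{
    public int Id { get; set; }
    public int CustomerId { get; set; }
    public decimal Total { get; set; }
}
public class ShopContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }
    public DbSet<Order> Orders { get; set; }
}
public static class EagerLoadingDemo
{
    public static void PrintCustomerWithOrders(int customerId)
    {
        using (var db = new ShopContext())
        {
            // Lazy (the default with virtual navigation properties) would issue one
            // query for the customer and another when Orders is first touched.
            // Eager: Include pulls the customer and its orders in a single round-trip.
            var customer = db.Customers
                             .Include("Orders")
                             .Single(c => c.Id == customerId);
            System.Console.WriteLine("{0} has {1} orders",
                                     customer.Name, customer.Orders.Count);
        }
    }
}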
Caching All major ORMs maintain a first-level cache, AKA "identity cache", which means that if you request the same entity twice by its ID, it doesn't require a second round-trip, and also (if you designed your database correctly) gives you the ability to use optimistic concurrency. The L1 cache is pretty opaque in L2S and EF, you kind of have to trust that it's working. NHibernate is more explicit about it ( Get / Load vs. Query / QueryOver ). Still, as long as you try to query by ID as much as possible, you should be fine here. A lot of people forget about the L1 cache and repeatedly look up the same entity over and over again by something other than its ID (i.e. a lookup field). If you need to do this then you should save the ID or even the entire entity for future lookups. There's also a level 2 cache ("query cache"). NHibernate has this built-in. Linq to SQL and Entity Framework have compiled queries , which can help reduce app server loads quite a bit by compiling the query expression itself, but it doesn't cache the data. Microsoft seems to consider this an application concern rather than a data-access concern, and this is a major weak point of both L2S and EF. Needless to say it's also a weak point of "raw" SQL. In order to get really good performance with basically any ORM other than NHibernate, you need to implement your own caching façade. There's also an L2 cache "extension" for EF4 which is okay , but not really a wholesale replacement for an application-level cache. Number of Queries Relational databases are based on sets of data. They're really good at producing large amounts of data in a short amount of time, but they're nowhere near as good in terms of query latency because there's a certain amount of overhead involved in every command. A well-designed app should play to the strengths of this DBMS and try to minimize the number of queries and maximize the amount of data in each. Now I'm not saying to query the entire database when you only need one row. What I'm saying is, if you need the Customer , Address , Phone , CreditCard , and Order rows all at the same time in order to serve a single page, then you should ask for them all at the same time, don't execute each query separately. Sometimes it's worse than that, you'll see code that queries the same Customer record 5 times in a row, first to get the Id , then the Name , then the EmailAddress , then... it's ridiculously inefficient. Even if you need to execute several queries that all operate on completely different sets of data, it's usually still more efficient to send it all to the database as a single "script" and have it return multiple result sets. It's the overhead you're concerned with, not the total amount of data. This might sound like common sense but it's often really easy to lose track of all the queries that are being executed in various parts of the application; your Membership Provider queries the user/role tables, your Header action queries the shopping cart, your Menu action queries the site map table, your Sidebar action queries the featured product list, and then maybe your page is divided into a few separate autonomous areas which query the Order History, Recently Viewed, Category, and Inventory tables separately, and before you know it, you're executing 20 queries before you can even start to serve the page. It just utterly destroys performance. 
Some frameworks - and I'm thinking mainly of NHibernate here - are incredibly clever about this and allow you to use something called futures which batch up entire queries and try to execute them all at once, at the last possible minute. AFAIK, you're on your own if you want to do this with any of the Microsoft technologies; you have to build it into your application logic. Indexing, Predicates, and Projections At least 50% of devs I speak to and even some DBAs seem to have trouble with the concept of covering indexes. They think, "well, the Customer.Name column is indexed, so every lookup I do on the name should be fast." Except it doesn't work that way unless the Name index covers the specific column you're looking up. In SQL Server, that's done with INCLUDE in the CREATE INDEX statement. If you naïvely use SELECT * everywhere - and that is more or less what every ORM will do unless you explicitly specify otherwise using a projection - then the DBMS may very well choose to completely ignore your indexes because they contain non-covered columns. A projection means that, for example, instead of doing this: from c in db.Customers where c.Name == "John Doe" select c You do this instead: from c in db.Customers where c.Name == "John Doe"
select new { c.Id, c.Name } And this will, for most modern ORMs, instruct it to only go and query the Id and Name columns which are presumably covered by the index (but not the Email , LastActivityDate , or whatever other columns you happened to stick in there). It's also very easy to completely blow away any indexing benefits by using inappropriate predicates. For example: from c in db.Customers where c.Name.Contains("Doe") ...looks almost identical to our previous query but in fact will result in a full table or index scan because it translates to LIKE '%Doe%' . Similarly, another query which looks suspiciously simple is: from c in db.Customers where (maxDate == null) || (c.BirthDate >= maxDate) Assuming you have an index on BirthDate , this predicate has a good chance to render it completely useless. Our hypothetical programmer here has obviously attempted to create a kind of dynamic query ("only filter the birth date if that parameter was specified"), but this isn't the right way to do it. Written like this instead: from c in db.Customers where c.BirthDate >= (maxDate ?? DateTime.MinValue) ...now the DB engine knows how to parameterize this and do an index seek. One minor, seemingly insignificant change to the query expression can drastically affect performance. Unfortunately LINQ in general makes it all too easy to write bad queries like this because sometimes the providers are able to guess what you were trying to do and optimize the query, and sometimes they aren't. So you end up with frustratingly inconsistent results which would have been blindingly obvious (to an experienced DBA, anyway) had you just written plain old SQL. Basically it all comes down to the fact that you really have to keep a close eye on both the generated SQL and the execution plans they lead to, and if you're not getting the results you expect, don't be afraid to bypass the ORM layer once in a while and hand-code the SQL. This goes for any ORM, not just EF. Transactions and Locking Do you need to display data that's current up to the millisecond? Maybe - it depends - but probably not. Sadly, Entity Framework doesn't give you nolock , you can only use READ UNCOMMITTED at the transaction level (not table level). In fact none of the ORMs are particularly reliable about this; if you want to do dirty reads, you have to drop down to the SQL level and write ad-hoc queries or stored procedures. So what it boils down to, again, is how easy it is for you to do that within the framework. Entity Framework has come a long way in this regard - version 1 of EF (in .NET 3.5) was god-awful, made it incredibly difficult to break through the "entities" abstraction, but now you have ExecuteStoreQuery and Translate , so it's really not too bad. Make friends with these guys because you'll be using them a lot. There's also the issue of write locking and deadlocks and the general practice of holding locks in the database for as little time as possible. In this regard, most ORMs (including Entity Framework) actually tend to be better than raw SQL because they encapsulate the unit of Work pattern, which in EF is SaveChanges . In other words, you can "insert" or "update" or "delete" entities to your heart's content, whenever you want, secure in the knowledge that no changes will actually get pushed to the database until you commit the unit of work. Note that a UOW is not analogous to a long-running transaction. The UOW still uses the optimistic concurrency features of the ORM and tracks all changes in memory . 
Not a single DML statement is emitted until the final commit. This keeps transaction times as low as possible. If you build your application using raw SQL, it's quite difficult to achieve this deferred behaviour. What this means for EF specifically: Make your units of work as coarse as possible and don't commit them until you absolutely need to. Do this and you'll end up with much lower lock contention than you would using individual ADO.NET commands at random times. In Conclusion: EF is completely fine for high-traffic/high-performance applications, just like every other framework is fine for high-traffic/high-performance applications. What matters is how you use it. Here's a quick comparison of the most popular frameworks and what features they offer in terms of performance (legend: N = Not supported, P = Partial, Y = yes/supported):
                             | L2S | EF1 | EF4 | NH3 | ADO
-----------------------------+-----+-----+-----+-----+-----
Lazy Loading (entities)      |  N  |  N  |  N  |  Y  |  N
Lazy Loading (relationships) |  Y  |  Y  |  Y  |  Y  |  N
Eager Loading (global)       |  N  |  N  |  N  |  Y  |  N
Eager Loading (per-session)  |  Y  |  N  |  N  |  Y  |  N
Eager Loading (per-query)    |  N  |  Y  |  Y  |  Y  |  Y
Level 1 (Identity) Cache     |  Y  |  Y  |  Y  |  Y  |  N
Level 2 (Query) Cache        |  N  |  N  |  P  |  Y  |  N
Compiled Queries             |  Y  |  P  |  Y  |  N  | N/A
Multi-Queries                |  N  |  N  |  N  |  Y  |  Y
Multiple Result Sets         |  Y  |  N  |  P  |  Y  |  Y
Futures                      |  N  |  N  |  N  |  Y  |  N
Explicit Locking (per-table) |  N  |  N  |  N  |  P  |  Y
Transaction Isolation Level  |  Y  |  Y  |  Y  |  Y  |  Y
Ad-Hoc Queries               |  Y  |  P  |  Y  |  Y  |  Y
Stored Procedures            |  Y  |  P  |  Y  |  Y  |  Y
Unit of Work                 |  Y  |  Y  |  Y  |  Y  |  N
As you can see, EF4 (the current version) doesn't fare too badly, but it's probably not the best if performance is your primary concern. NHibernate is much more mature in this area and even Linq to SQL provides some performance-enhancing features that EF still doesn't. Raw ADO.NET is often going to be faster for very specific data-access scenarios, but, when you put all the pieces together, it really doesn't offer a lot of important benefits that you get from the various frameworks. And, just to make completely sure that I sound like a broken record, none of this matters in the slightest if you don't design your database, application, and data access strategies properly. All of the items in the chart above are for improving performance beyond the baseline; most of the time, the baseline itself is what needs the most improvement. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/117357",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/39793/"
]
} |
117,460 | I am currently on a paid internship, and have been tasked with maintaining an obsolete system that has been developed by multiple developers (at different times) over the course of the past 5 years. Management agrees the "system is on life support", and I receive a fairly regular supply of bug reports from end users currently using the system. Management now wants to extend the project for another year, and in the process nearly triple the user base. As an intern (or any entry level position) how do I "push back"? I've already written a report stating my concerns, albeit in an open-ended document. Is there protocol or document type for suggesting changes? Am I in a position to make suggestions, or should I simply continue to support the old system? To clarify, software development is not my company's primary business. As such no internal protocols exist. Additionally, the project has no formal documentation at all, and no requirements documents either. The development is very ad hoc. | I am currently on a paid internship, and have been tasked with maintaining an obsolete system that has been developed by multiple developers (at different times) over the course of the past 5 years. Management agrees the "system is on life support", and I receive a fairly regular supply of bug reports from end users currently using the system. The system isn't obsolete if people are still using it and it's supporting the business activities. Since it's still being used, the business can't just throw it away - it needs to be supported until the need for the system no longer exists. That could be a change in business objectives or a new system has been developed, tested, and deployed successfully to the end users. Really, 5 years isn't that long. I've worked with code that was 10 years old before. If it's still serving the needs of the user, why throw it away? That's throwing away a lot of money spent to develop it. Until it becomes unfeasible to maintain due to increasing costs or the requirements change drastically, there's no business reason to throw it away. Management now wants to extend the project for another year, and in the process nearly triple the user base. If management says that this system is "on life support", why are they trying to deploy it further? It's common that maintenance activities continue on a legacy system until it's replaced, but if a system is in end-of-life, it's not typically deployed to more people. Extending maintenance is one thing, but adding users who rely on the system is a different situation all together. To me, it sounds like it's not actually end of life, but rather in a maintenance phase and will continue to be there until the system no longer serves the needs of the users. As an intern (or any entry level position) how do I "push back"? I've already written a report stating my concerns, albeit in an open-ended document. Is there protocol or document type for suggesting changes? Am I in a position to make suggestions, or should I simply continue to support the old system? You need to continue to support the old system. Later, you mention that software is not your company's primary business. In such an environment, the job of software teams is to support the company's primary business. However, the software teams also need to keep the business objectives in mind. In the mean time, capture your suggestions in a way that isn't overbearing. 
Point out other technologies or techniques that could be integrated with the system or used if/when a new system is created and their pros/cons. How you do this depends on the company, but considering some later points, perhaps establishing a wiki or other collaborative site would be useful. In a non-software business, software is a cost and the software teams (especially the software project/program managers) should be working to minimize the cost of building and maintaining software systems as much as possible, while supporting the needs of the end users. Throwing away software that (as far as I can tell, from your post, anyway) meets the needs of the users goes against what's in the best interests of the software team. *To clarify, software development is not my company's primary business. As such no internal protocols exist. Additionally, the project has no formal documentation at all, no requirements documents. The development is very ad hoc. To me, this is the problem. Not producing documentation, not developing to a specification, and a lack of consistency tends to increase the cost of developing software. Working toward fixing this would be my highest priority, and I would do that by working on things like a coding standard, version control, producing self-documenting code and design documents, defect tracking, and requirements specifications. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/117460",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/39814/"
]
} |
117,481 | I think most people would agree that pointers are a major source of bugs in C programs (if not the greatest source of bugs). Other languages drop pointers entirely for this reason. When working in C, then, would it be best to avoid using pointers whenever practicable? For example, I recently wrote a function like this: void split (char *record, char *delim,
int numfields, int fieldmax,
char result[numfields][fieldmax]); While not as versatile as a dynamically allocated variable, in that it requires the programmer to know the number of fields in advance and anything over fieldmax is truncated (which is sometimes acceptable), it eliminates the need for memory management, and the potential for memory corruption. I think this is a pretty good trade, but I was wondering what the opinions of other programmers on this were. | Other languages don't "drop pointers entirely," they just restrict what you can do with them, give them a syntax that looks like non-pointer variables, handle some operations on them behind the scenes, and call them something else, like a reference or object. If you ever have to make a distinction between assigning a copy of something or not, you are dealing with pointers. With the exception of programming paradigms that pass everything by copy, with obvious performance implications, the use of pointers is unavoidable in all but the simplest of programs. By all means, use stack variables where it makes sense, but if you try to avoid pointers too much, you will introduce a whole different breed of potential bugs. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/117481",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/39821/"
]
} |
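As a small illustration of the point above that references are essentially restricted pointers, here is a hypothetical C# snippet showing the copy-versus-alias distinction the answer describes; the Point types are invented for the example.
using System;
public struct PointValue   // value type: assignment copies the data
{
    public int X;
}
public class PointRef      // reference type: assignment copies the "pointer"
{
    public int X;
}
public static class CopyVsAlias
{
    public static void Main()
    {
        var v1 = new PointValue { X = 1 };
        var v2 = v1;               // independent copy
        v2.X = 99;
        Console.WriteLine(v1.X);   // 1: v1 is unaffected
        var r1 = new PointRef { X = 1 };
        var r2 = r1;               // both names refer to the same object
        r2.X = 99;
        Console.WriteLine(r1.X);   // 99: same underlying data, just like a pointer alias
    }
}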
117,496 | A common pattern I see is what's known as the Mapper pattern (not to be confused with DataMapper which is something else entirely), which takes as an argument some kind of "raw" data source (e.g. an ADO.NET DataReader or DataSet ) and maps the fields to properties on a business/domain object. Example: class PersonMapper
{
public Person Map(DataSet ds)
{
Person p = new Person();
p.FirstName = ds.Tables[0].Rows[0]["FirstName"].ToString();
// other properties...
return p;
}
} The idea being your Gateway/DAO/Repository/etc. will call into the Mapper before it returns, so you get a rich business object versus the underlying data container. However, this seems to be related, if not identical, to the Factory pattern (in the DDD parlance, anyways), which constructs and returns a domain object. Wikipedia says this re: the DDD Factory: Factory: methods for creating domain objects should delegate to a specialized Factory object such that alternative implementations may be easily interchanged. From that quote the only difference I could think of is that the DDD-style Factory could be parameterized so it could return a specialized type of object if the need arose (e.g. BusinessCustomer versus ResidentialCustomer) while the "Mapper" is keyed to a specific class and only does translation. So is there a difference between these two patterns or are they essentially the same thing with different names? | Although this is the first time I hear of the Mapper pattern, to me it sounds more like the Builder pattern rather than the Factory. In the Factory pattern you encapsulate the logic for creating objects of a number of related classes. The prime example would be a situation where you need to create an object of a particular subclass of some abstract base class depending on some parameters. So a Factory always returns a pointer or a reference to the base class, but it actually creates an object of the appropriate derived class based on the parameters you give it. In contrast, a Builder class always creates objects of the same class. You would use it if the creation of an object is complicated, e. g. its constructor takes lots of arguments, not all of which may be available instantly. So a builder object might be a place which stores the values for the constructor arguments until you have them all and are ready to create the "product", or it may provide reasonable default values and let you only specify the arguments whose values you need to change. A typical use case for the Builder pattern is for creating objects you might need in a unit test, to avoid cluttering the test code with all the creation logic. To me a Mapper sounds like a variant of a Builder, where the constructor parameters come in the form of a database record or some other "raw" data structure. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/117496",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/22390/"
]
} |
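Since the answer above describes the Builder pattern only in prose, here is a hedged C# sketch of what a Builder for the question's Person type might look like; the property names and default values are assumptions.
public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Country { get; set; }
}
// A Builder always produces the same class (Person), but lets you accumulate
// construction data piecemeal and supplies sensible defaults, which is handy in tests.
public class PersonBuilder
{
    private string _firstName = "John";
    private string _lastName = "Doe";
    private string _country = "US";
    public PersonBuilder WithFirstName(string firstName) { _firstName = firstName; return this; }
    public PersonBuilder WithLastName(string lastName) { _lastName = lastName; return this; }
    public PersonBuilder WithCountry(string country) { _country = country; return this; }
    public Person Build()
    {
        return new Person { FirstName = _firstName, LastName = _lastName, Country = _country };
    }
}
// Usage, e.g. in a unit test:
//   var person = new PersonBuilder().WithFirstName("Jane").Build();
Seen this way, the Mapper from the question is essentially a Builder whose input arrives all at once as a raw data record.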
117,522 | What are combinators? I'm looking for: a practical explanation examples of how they are used examples of how combinators improve the quality/generality of code I'm not looking for: explanations of combinators that don't help me get work done (such as the Y-combinator) | From a practical viewpoint combinators are kind of programming constructs that allow you to put together pieces of logic in interesting and often advanced manners. Typically using them depends on the possibility of being able to pack executable code into objects, often called (for historical reasons) lambda functions or lambda expressions, but your mileage can vary. A simple example of a (useful) combinator is one that takes two lambda functions without parameters, and creates a new one that runs them in sequence. The actual combinator looks in generic pseudocode like this: func in_sequence(first, second):
lambda ():
first()
second() The crucial thing that makes this a combinator is the anonymous function (lambda function) on the second line; when you call a = in_sequence(f, g) the resulting object a is not the result of running first f() and then g(), but it is an object that you can call later to execute f() and g() in sequence: a() // a is a callable object, i.e. a function without parameters You can similarly then have a combinator that runs two code blocks in parallel: func in_parallel(first, second):
lambda ():
t1 = start_thread(first)
t2 = start_thread(second)
wait(t1)
wait(t2) And then again, a = in_parallel(f, g)
a() The cool thing is that 'in_parallel' and 'in_sequence' are both combinators with the same type / signature, i.e. they both take two parameterless function objects and return a new one. You can actually then write things like a = in_sequence(in_parallel(f, g), in_parallel(h, i)) and it works as expected. Basically, combinators allow you to construct your program's control flow (among other things) in a procedural and flexible fashion. For example, if you use the in_parallel(..) combinator to run parallelism in your program, you can add debugging related to that to the implementation of the in_parallel combinator itself. Later, if you suspect that your program has a parallelism-related bug, you can actually just reimplement in_parallel: in_parallel(first, second):
in_sequence(first, second) and with one stroke, all the parallel sections have been converted into sequential ones! Combinators are very useful when used right. The Y combinator, however, is not needed in real life. It is a combinator that allows you to create self-recursive functions, and you can create them easily in any modern language without the Y combinator. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/117522",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
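The pseudocode in the answer above translates almost directly into C# with Action delegates; this is a minimal hedged rendering of the in_sequence / in_parallel combinators (it assumes .NET 4.5 for Task.Run), not a production library.
using System;
using System.Threading.Tasks;
public static class Combinators
{
    // Takes two parameterless blocks and returns a new block that runs them in order.
    public static Action InSequence(Action first, Action second) =>
        () => { first(); second(); };
    // Same signature, but the returned block runs them concurrently and waits for both.
    public static Action InParallel(Action first, Action second) =>
        () =>
        {
            var t1 = Task.Run(first);
            var t2 = Task.Run(second);
            Task.WaitAll(t1, t2);
        };
    public static void Main()
    {
        Action f = () => Console.WriteLine("f");
        Action g = () => Console.WriteLine("g");
        Action h = () => Console.WriteLine("h");
        Action i = () => Console.WriteLine("i");
        // Because both combinators share a signature, they compose freely.
        var program = InSequence(InParallel(f, g), InParallel(h, i));
        program();
    }
}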
117,552 | I don't know if this is too broad or not, but I am a youngish programmer still in college, its my Junior year. I feel like I have a pretty good grasps for different languages and have a pretty good base. But I am stumbling to think how if for example, I am trying to create a program and say I wrote one part in python just because its easy, and does the job, but this program would need to get output from another program that I wrote in C and I am using C because of its speed. I am not sure how to have the two different programs and languages interact with each other to create an overall one total program. I am thinking of like sure you can write to a file, but then what if python and C programs both accessing a file I would need to think of locks. Most times I have done this was with importing files into a program, but in that case they are the same language so that is easy I just use the import function, but with two languages/programs interacting to create one cohesive output I am having trouble. I was thinking about this question because I was thinking of diving into creating some basic web applications just to learn but I have no idea how to have say javascript file interactive with something that I wrote in python or vice-versa. I feel like I am missing something really easy here and just not understanding. Sorry if this question is too broad but I couldn't really find a clear answer online, I was trying to look through an opensource webapp, but couldn't really grasp an answer from it, again pardon me if the question seems dumb I thought this be a good place to ask I love reading on stackexchange. Thank you for any reply. | Code written in different languages can interact in a number of ways. At the source level, cross-compilation from one language into the other can be done for some combinations of languages (for example, Google's GWT includes a java-to-javascript compiler; the Glasgow Haskell compiler can compile to C; early versions of C++ compiled to C). Most of the time, however this is not really feasible. Languages that share a virtual platform, such as the JVM or the .NET runtime, can usually interact through mechanisms exposed by the platform - for example all JVM languages can access Java libraries and use them to communicate among each other, and they can call methods and use classes created in any other JVM language. Many programming languages, including Python, offer a mechanism to interface with native libraries, typically written in C. Using such a mechanism, it is possible to call native functions from another, more high-level, language. Popular libraries often have bindings readily available. This technique is usually referred to as a "Foreign Function Interface" . The Python-into-C interface is the CFFI . Another option is to build two completely separate programs and have them interact at runtime. There are various mechanisms to achieve this; the easiest is through a pipe (look into the subprocess module for python): basically, one program calls the other, sending input to its stdin and reading the result back from its stdout. This makes one program a subprocess of the other; if you need both to be long-lived and started independently, data can be passed back and forth through named pipes, (local) network sockets, shared files, and (depending on the platform) other means. Which one is best depends. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/117552",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/39846/"
]
} |
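As a concrete illustration of the "two separate programs talking over a pipe" option from the answer above, here is a hedged C# sketch that launches a hypothetical Python script as a subprocess, writes to its stdin and reads its stdout; the script name and its behaviour are assumptions.
using System;
using System.Diagnostics;
public static class PipeInterop
{
    public static void Main()
    {
        var psi = new ProcessStartInfo
        {
            FileName = "python",
            Arguments = "worker.py",          // hypothetical script that replies on stdout
            RedirectStandardInput = true,
            RedirectStandardOutput = true,
            UseShellExecute = false           // required for stream redirection
        };
        using (var proc = Process.Start(psi))
        {
            // Send a request to the Python side over its stdin...
            proc.StandardInput.WriteLine("hello from C#");
            proc.StandardInput.Close();
            // ...and read whatever it prints to stdout as the reply.
            string reply = proc.StandardOutput.ReadToEnd();
            proc.WaitForExit();
            Console.WriteLine("Python said: " + reply);
        }
    }
}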
117,635 | For example, if the code generates a random int from 0-10, and takes a different branch on each outcome, how can one design a test suite to guarantee 100% statement coverage in such code? In Java, the code might be something like: int i = new Random().nextInt(10);
switch(i)
{
//11 case statements
} | Expanding David's answer whom I totally agree with that you should create a wrapper for Random. I wrote pretty much the same answer about it earlier in a similar question so here is a "Cliff's notes version" of it. What you should do is to first create the wrapper as an interface (or abstract class): public interface IRandomWrapper {
int getInt();
} And the concrete class for this would look like this: public class RandomWrapper implements IRandomWrapper {
private Random random;
public RandomWrapper() {
random = new Random();
}
public int getInt() {
return random.nextInt(10);
}
} Say your class is the following: class MyClass {
public void doSomething() {
int i = new Random().nextInt(10);
switch(i)
{
//11 case statements
}
}
} In order to use the IRandomWrapper correctly you need to modify your class to take it as a member (through constructor or a setter): public class MyClass {
private IRandomWrapper random = new RandomWrapper(); // default implementation
public void setRandomWrapper(IRandomWrapper random) {
this.random = random;
}
public void doSomething() {
int i = random.getInt();
switch(i)
{
//11 case statements
}
}
} You can now test your class's behaviour with the wrapper, by mocking the wrapper. You can do this with a mocking framework, but this is easy to do by yourself as well: public class MockedRandomWrapper implements IRandomWrapper {
private int theInt;
public MockedRandomWrapper(int theInt) {
this.theInt = theInt;
}
public int getInt() {
return theInt;
}
} Since your class expects something that looks like an IRandomWrapper you can now use the mocked one to force the behaviour in your test. Here are some examples of JUnit tests: @Test
public void testFirstSwitchStatement() {
MyClass mc = new MyClass();
IRandomWrapper random = new MockedRandomWrapper(0);
mc.setRandomWrapper(random);
mc.doSomething();
// verify the behaviour for when random spits out zero
}
@Test
public void testSecondSwitchStatement() {
MyClass mc = new MyClass();
IRandomWrapper random = new MockedRandomWrapper(1);
mc.setRandomWrapper(random);
mc.doSomething();
// verify the behaviour for when random spits out one
} Hope this helps. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/117635",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5539/"
]
} |
117,638 | I had been programming for many years but wanted a diploma to make myself more employable. Having already been through university once, I didn't choose a full 5 year computer science major but a shorter, more practically-oriented software engineering program. I expected that it might focus more on concrete skills than on theory, but still had this idea that universities and professors like things to be correct, formal, academic. It's still science, right? Wrong - I was surprised by how sloppy many IT courses were. In an introductory course on C++ we were tested on clichés like "why are globals bad" and "why are constants good", after just writing 1-2 programs. Random anecdotes without proper context. Handouts contained system(pause/cls) , getch and headers like conio.h and iodos.h . One of the tasks was to print "ASCII characters" from 32 to 255 , with a screenshot showing such a table printed using the Windows-1252 code page, but without mentioning encoding at all. Question : when a university/professor seems to be using inferior and/or outdated tools and methods, and the content being taught is borderline incorrect, how do you deal with it constructively and respectfully, if at all? Some answers point out that you should look beyond the programming since it is just a tool for learning about topics such as data structures and algorithms. I agree with this idea, but in this case there wasn't really any such plan behind the poor style. Most courses would simply teach another "tool" without much background theory or any "big picture". It often felt like they were quickly put together just for the sake of offering such a course. I stuck with it and finally graduated. Quality remained pretty low throughout (with a few great exceptions), and several other students have been complaining about it. As expected I have learned much more from personal projects and part-time jobs than from school, however the process of finishing school and the label "software student" seem mysteriously useful in themselves! | Math and programming are totally different things. Math is science, programming is technique. In academic world, programming is something you have to know so that you could deal with the real stuff, the one that's actually interesting and matters - algorithms. Who cares if the compiler is ancient and you use system calls? Who cares if you have Linux and not Windows? Well, the professors don't, that's for sure. And they shouldn't, really. Don't expect to become a professional programmer from courses in the academic institution. That's not what you go there for. That's true that the compiler he's expecting you to use is ancient, and the assumption about the OS is anachronistic, and its problematic. You can raise it, and maybe it will be dealt with. But not because the course is incompatible with the industry, but rather because it causes an immediate difficulty to the students. Go find that old compiler now and have it running on your MacBook... In general, academic studies shouldn't be wasted on learning C++ and Android, you should be learning the actual Computer Science stuff there. You won't get another chance for that. Android? Download Eclipse with the ADT and start working on it at home, like I do. Don't need school for that. I think that it is not OK to expect technique development from the universities. Especially not the research universities. You can say that a place that only offers bachelors should be more industry-targeted, but research universities - want researchers. 
The professors are looking for prospective graduate students and PhD candidates, not excellent programmers. So I think you should set your level of expectations accordingly. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/117638",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/39880/"
]
} |
117,643 | In the movie theatre I go to they have ticket kiosks that allow you to select the seats you want; they also have a website that does the same (the website also has a countdown timer of like 30 secs in which you must choose a seat). While I understand things such as database transactions and other techniques for handling multiple simultaneous users, I just can't get my head around how multiple people can be allowed to select a seat at the same time; is it as simple as the first one to press BUY gets the seats and the other person will get an error message, or am I missing something? | The classic method to do this is to use a transactional database (so there's no clashes) and to do a tentative allocation of the seat to you that expires after some length of time (e.g., 10 minutes for kiosks) that gives you enough time to pay. If the (customer-visible) transaction falls through or times out, the seat allocation can be released back into the pool. (All state changes are processed via the transactional database, and one customer-visible transaction might require many database-level transactions.) Airlines will use a similar system (though much more complex due to the need to handle multiple flight legs!) for booking seats online. I would imagine that the timeout would be considerably longer; airline tickets are usually booked further ahead than movie tickets, and are more expensive as well. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/117643",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/39878/"
]
} |
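To illustrate the "tentative allocation that expires" protocol described in the answer above, here is a hedged in-memory C# sketch; a real system would keep the holds in the transactional database rather than a dictionary, and the 10-minute timeout is an assumption.
using System;
using System.Collections.Generic;
public class SeatHoldService
{
    private readonly TimeSpan _holdTime = TimeSpan.FromMinutes(10); // assumed kiosk timeout
    private readonly Dictionary<string, DateTime> _holds = new Dictionary<string, DateTime>();
    private readonly HashSet<string> _sold = new HashSet<string>();
    private readonly object _gate = new object(); // stands in for the database transaction
    // Step 1: tentatively allocate the seat; fails if it is sold or held by someone else.
    public bool TryHold(string seatId)
    {
        lock (_gate)
        {
            ReleaseExpiredHolds();
            if (_sold.Contains(seatId) || _holds.ContainsKey(seatId)) return false;
            _holds[seatId] = DateTime.UtcNow + _holdTime;
            return true;
        }
    }
    // Step 2: payment succeeded, so turn the hold into a sale.
    public bool ConfirmPurchase(string seatId)
    {
        lock (_gate)
        {
            ReleaseExpiredHolds();
            if (!_holds.Remove(seatId)) return false; // hold expired or never existed
            _sold.Add(seatId);
            return true;
        }
    }
    // Expired holds quietly go back into the pool.
    private void ReleaseExpiredHolds()
    {
        var now = DateTime.UtcNow;
        var expired = new List<string>();
        foreach (var kv in _holds)
            if (kv.Value < now) expired.Add(kv.Key);
        foreach (var seat in expired) _holds.Remove(seat);
    }
}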
117,651 | We are in the middle of a strange situation. We and a partner firm have to put up a web portal. They have a lot of functionality already done on their other portal and we are the managing firm on the project. But there are also some new development needed on their side which is designed by us. Strange thing is, we are going to do the GUI on the project and they will provide us some web services to gather data. They are using .NET and we will be using PHP for the frontend project. So what do you think that the best approach is to avoid communication and integration problems? For example We provide them with just the view files and they do the rest all by themselves? We develop a PHP project where we gather the required data using SOAP/REST web services? Thanks in advance guys. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/117651",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/738/"
]
} |
117,671 | I do freelance web projects for a client. The client has been asking me to buy an iPad for testing purposes. Should I ask him to get me an iPad ? I otherwise don't have any need for the iPad. Is it ethical to ask for sponsorship when you are getting paid for the projects ? Should I try it out ? | I'd say that it is 100% ethical, and yes, I would ask my client to supply me with any non-standard tools that are required for a project. I would also say that the client has every right to ask for the tools to be returned to them at the conclusion of the project. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/117671",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/37325/"
]
} |
117,716 | I have a possible C# dev job lined up but before I can interview, I need to pass a test with the employment agency. Now even though I am a senior developer with over 10 years of programming experience and more than 3 of these years with c#, I am really not looking forward to this test and I think there is a good chance I will fail it. In my experience these tests are unpredictable asking you stuff you seldom use in everyday work situations. I need your best advice on what I should be revising on - a good book or website for example. I have less than 1 week to prepare and it is essential I pass this. Thanks in advance. EDIT: To further justify my concerns of writing such tests - Consider the following question: 12) An Event is : a) The result of a users action - correct answer
b) The result of a party
c) code to force users action The correct answer according to the test is A, but since I've been doing SOA development (where events can often be based on system events, not user actions) I'm 100% sure an event doesn't have to be driven by a user action. This was also a C# question (not an ASP.NET question). Based on my understanding, I couldn't really spot a correct answer, but B seems the best of the lot (if the definition of party is: any entity that can trigger an event).
These kind of questions scare me. | Altough somewhat old, maybe this blog post is useful for you: What Great .NET Developers Ought To Know (More .NET Interview Questions) Everyone who writes code Describe the difference between a Thread and a Process? What is a Windows Service and how does its lifecycle differ from a "standard" EXE? What is the maximum amount of memory any single process on Windows can address? Is this different than the maximum virtual memory for the system? How would this affect a system design? What is the difference between an EXE and a DLL? What is strong-typing versus weak-typing? Which is preferred? Why? Corillian's product is a "Component Container." Name at least 3 component containers that ship now with the Windows Server Family. What is a PID? How is it useful when troubleshooting a system? How many processes can listen on a single TCP/IP port? What is the GAC? What problem does it solve? Mid-Level .NET Developer Describe the difference between Interface-oriented, Object-oriented and Aspect-oriented programming. Describe what an Interface is and how it’s different from a Class. What is Reflection? What is the difference between XML Web Services using ASMX and .NET Remoting using SOAP? Are the type system represented by XmlSchema and the CLS isomorphic? Conceptually, what is the difference between early-binding and late-binding? Is using Assembly.Load a static reference or dynamic reference? When would using Assembly.LoadFrom or Assembly.LoadFile be appropriate? What is an Asssembly Qualified Name? Is it a filename? How is it different? Is this valid? Assembly.Load("foo.dll"); How is a strongly-named assembly different from one that isn’t strongly-named? Can DateTimes be null? What is the JIT? What is NGEN? What are limitations and benefits of each? How does the generational garbage collector in the .NET CLR manage object lifetime? What is non-deterministic finalization? What is the difference between Finalize() and Dispose()? How is the using() pattern useful? What is IDisposable? How does it support deterministic finalization? What does this useful command line do? tasklist /m "mscor*" What is the difference between in-proc and out-of-proc? What technology enables out-of-proc communication in .NET? When you’re running a component within ASP.NET, what process is it running within on Windows XP? Windows 2000? Windows 2003? Senior Developers/Architects What’s wrong with a line like this? DateTime.Parse(myString); What are PDBs? Where must they be located for debugging to work? What is cyclomatic complexity and why is it important? Write a standard lock() plus “double check” to create a critical section around a variable access. What is FullTrust? Do GAC’ed assemblies have FullTrust? What benefit does your code receive if you decorate it with attributes demanding specific Security permissions? What does this do? gacutil /l | find /i "Corillian" What does this do? sn -t foo.dll What ports must be open for DCOM over a firewall? What is the purpose of Port 135? Contrast OOP and SOA. What are tenets of each? How does the XmlSerializer work? What ACL permissions does a process using it require? Why is catch(Exception) almost always a bad idea? What is the difference between Debug.Write and Trace.Write? When should each be used? What is the difference between a Debug and Release build? Is there a significant speed difference? Why or why not? Does JITting occur per-assembly or per-method? How does this affect the working set? 
Contrast the use of an abstract base class against an interface? What is the difference between a.Equals(b) and a == b? In the context of a comparison, what is object identity versus object equivalence? How would one do a deep copy in .NET? Explain current thinking around IClonable. What is boxing? Is string a value type or a reference type? What is the significance of the "PropertySpecified" pattern used by the XmlSerializer? What problem does it attempt to solve? Why are out parameters a bad idea in .NET? Are they? Can attributes be placed on specific parameters to a method? Why is this useful? C# Component Developers Juxtapose the use of override with new. What is shadowing? Explain the use of virtual, sealed, override, and abstract. Explain the importance and use of each component of this string: Foo.Bar, Version=2.0.205.0, Culture=neutral, PublicKeyToken=593777ae2d274679d Explain the differences between public, protected, private and internal. What benefit do you get from using a Primary Interop Assembly (PIA)? By what mechanism does NUnit know what methods to test? What is the difference between: catch(Exception e){throw e;} and catch(Exception e){throw;} What is the difference between typeof(foo) and myFoo.GetType()? Explain what’s happening in the first constructor: public class c{ public c(string a) : this() {;}; public c() {;} } How is this construct useful? What is this ? Can this be used within a static method? ASP.NET (UI) Developers Describe how a browser-based Form POST becomes a Server-Side event like Button1_OnClick... | {
"source": [
"https://softwareengineering.stackexchange.com/questions/117716",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/20831/"
]
} |
117,751 | First, the history: I found out that DateTime data types cannot be null (yes, they can be made nullable using the ? declaration). I was told this is because they are value types and placed on the stack (not on the heap). I can understand why the stack is quicker and why value types need to (mostly) be placed on the stack; what I do not know is this: is it true that the real reason a value type cannot be null is that the stack cannot handle null values? If yes, why? If no, what is the real limitation? I understand that in .NET we often don't care about exact memory allocation, yet it is important in performance applications to take the stack and heap into consideration, so I'm trying to wrap my head around this. Thanks in advance for your expertise on a difficult question. EDIT: Another thing to add: are nullable types then placed on the heap or the stack? | The heap and stack are not relevant. It is about the binary representation of values. In a reference type, a zero binary representation means null, and other binary representations point to memory that contains the binary representation of the value. In a value type, all binary representations, including zero, directly represent a particular value of the type. Value types cannot be null because there is no reserved binary representation that means null. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/117751",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/20831/"
]
} |
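Touching on the EDIT in the question above: int? is shorthand for the Nullable<int> struct, which is itself a value type, so it lives wherever the variable lives rather than being boxed onto the heap by default. The sketch below is a simplified, hypothetical rendering of what such a struct looks like (not the actual BCL source); it shows the extra HasValue flag that gives the type a way to say "no value", which plain value types lack.
using System;
// Simplified illustration of the idea behind System.Nullable<T>; the real type has more members.
public struct MyNullable<T> where T : struct
{
    private readonly bool _hasValue;   // the extra "is this null?" flag a plain T does not have
    private readonly T _value;
    public MyNullable(T value)
    {
        _hasValue = true;
        _value = value;
    }
    public bool HasValue
    {
        get { return _hasValue; }
    }
    public T Value
    {
        get
        {
            if (!_hasValue) throw new InvalidOperationException("No value");
            return _value;
        }
    }
}
public static class NullableDemo
{
    public static void Main()
    {
        MyNullable<int> none = default(MyNullable<int>);   // all-zero bits means "no value"
        MyNullable<int> some = new MyNullable<int>(42);
        Console.WriteLine(none.HasValue);                   // False
        Console.WriteLine(some.Value);                      // 42
    }
}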
117,868 | I want to write an application that can be used to burn CDs (music). I know I should test it with real CDs anyway, but I don't want to do this every time I make a small change. But I do want to know and test if the right data will be burnt, so you don't get corrupted CDs. I am using a library which only exposes APIs to directly burn to a physical drive; it doesn't allow you to write to a disk image (which would be much easier to test). How could I test this without wasting CDs and time for every change/few changes I make? | The simple answer is often the best: re-writable CD's? Also for saving time, write relatively small files. Only do larger files every so often. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/117868",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
117,945 | I work for a company where we build enterprise applications, and we maintain three environments: development (or dev ), staging (or stage ) and production (or prod ). The meaning of dev is intuitive: it's the environment used during development of the application. What's the difference between staging and production environments? | For smaller companies (it's not clear how big yours is), three environments (dev, stage, production) are common. Larger companies will often have a QA environment between dev and stage. These normally break down as follows: dev : Working code copy. Changes made by developers are deployed here so integration and features can be tested. This environment is rapidly updated and contains the most recent version of the application. qa : (Not all companies will have this). Environment for quality assurance; this provides a less frequently changed version of the application which testers can perform checks against. This allows reporting on a common revision so developers know whether particular issues found by testers has already been corrected in the development code. staging : This is the release candidate, and this environment is normally a mirror of the production environment. The staging area contains the "next" version of the application and is used for final stress testing and client/manager approvals before going live. production : This is the currently released version of the application, accessible to the client/end users. This version preferably does not change except for during scheduled releases. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/117945",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/38762/"
]
} |
117,973 | I would really like to understand how an operating system works behind the scenes. However, I think that the kernel of most open-source OSs out there are too complex for learning purposes, even for a full-time programmer to learn. Are there any "simple" OSs made for learning purposes only out there? | Andrew Tanenbaum's Minix (see Operating Systems Design and Implementation ) is intended for exactly this sort of purpose. Another (albeit quite dated) possibility is to read through Lion's Book , which covers Unix V6 (full Unix, but an old enough version that it's still simple enough for fairly easy study). The obvious disadvantages of the latter are that the C it uses is quite obsolete, so even fairly experienced C programmers may find parts somewhat difficult to read, and you can't plan on a modern compiler digesting the code. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/117973",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/38684/"
]
} |
117,990 | If we refer to exceptions as bugs, why not just call it a bug in the first place instead of an exception? If in the code it's called exception and as soon as it occurs it's called a bug. Then why not call it a bug in the first place? Thank you for any answer or comment. | Well, it's pretty simple: not all exceptions are bugs (and similarly, not all bugs manifest themselves as exceptions). As example of an exception that's not a bug, if you're reading a file from a USB drive and someone yanks the drive out of the socket. That's going to raise an exception (in most languages that support exceptions, that is). But it's not a bug in the code. Conversely, a bug might manifest itself as a calculation error or something. You still get an answer, it's just not the right one. Having said that, an exception that makes its way all the way to the top of the stack likely is a bug. In my USB example above, you should be able to catch that exception and present a nice error to the user saying "We couldn't read from the file because it's no longer connected." or something. If you just present them with an IOException and some funky error code, then that's a bug. But the exception itself is not. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/117990",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12893/"
]
} |
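A small C# sketch of the distinction drawn in the answer above: the IOException itself is not a bug, but letting it reach the user raw would be. The file path and wording are assumptions.
using System;
using System.IO;
public static class UsbReader
{
    public static void PrintFile(string path)
    {
        try
        {
            // This can throw IOException if the drive is yanked mid-read:
            // an expected failure of the outside world, not a bug in the code.
            string contents = File.ReadAllText(path);
            Console.WriteLine(contents);
        }
        catch (IOException ex)
        {
            // Handling it and telling the user what happened is correct behaviour.
            // Letting the raw exception (or a cryptic error code) surface would be the bug.
            Console.WriteLine(
                "We couldn't read '{0}' because the drive is no longer connected ({1}).",
                path, ex.Message);
        }
    }
    public static void Main()
    {
        PrintFile(@"E:\report.txt");   // hypothetical USB drive path
    }
}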
118,064 | Possible Duplicate: Are design patterns really essential nowadays? Is it necessary have a knowledge and understanding of design patterns for someone to be a professional programmer? Why? | No. I only say that to be pedantic though. The term "professional programmer" really just means you get paid to program. I assure you that you can be paid to program without knowing any design patterns (I certainly did, for a few years). However, knowing design patterns has some benefits: It's a way that other programmers tell us about general ways that they've solved general classes of problems. It's a name that you can apply to a specific pattern, and other programmers who know that pattern will understand what you mean. You can even put it into the name of the methods and classes ( XyzSingleton , for instance). In many ways it's a good way to learn the shortcomings of programming languages and paradigms. For instance the Visitor pattern exists because in object-oriented programming you typically have to make a choice between an architecture where it's easier to add new types that exhibit the same interface, or easy to add new behaviors to existing types, but doing both gets complicated. Thinking about these problems informs your design decisions. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/118064",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/40004/"
]
} |
118,295 | As a long-time C# programmer, I have recently come to learn more about the advantages of Resource Acquisition Is Initialization (RAII). In particular, I have discovered that the C# idiom: using (var dbConn = new DbConnection(connStr)) {
// do stuff with dbConn
} has the C++ equivalent: {
DbConnection dbConn(connStr);
// do stuff with dbConn
} meaning that remembering to enclose the use of resources like DbConnection in a using block is unnecessary in C++ ! This seems to a major advantage of C++. This is even more convincing when you consider a class that has an instance member of type DbConnection , for example class Foo {
DbConnection dbConn;
// ...
} In C# I would need to have Foo implement IDisposable as such: class Foo : IDisposable {
DbConnection dbConn;
public void Dispose()
{
dbConn.Dispose();
}
} and what's worse, every user of Foo would need to remember to enclose Foo in a using block, like: using (var foo = new Foo()) {
// do stuff with "foo"
} Now looking at C# and its Java roots I am wondering... did the developers of Java fully appreciate what they were giving up when they abandoned the stack in favor of the heap, thus abandoning RAII? (Similarly, did Stroustrup fully appreciate the significance of RAII?) | Now looking at C# and its Java roots I am wondering... did the developers of Java fully appreciate what they were giving up when they abandoned the stack in favor of the heap, thus abandoning RAII? (Similarly, did Stroustrup fully appreciate the significance of RAII?) I am pretty sure Gosling did not get the significance of RAII at the time he designed Java. In his interviews he often talked about reasons for leaving out generics and operator overloading, but never mentioned deterministic destructors and RAII. Funny enough, even Stroustrup wasn't aware of the importance of deterministic destructors at the time he designed them. I can't find the quote, but if you are really into it, you can find it among his interviews here: http://www.stroustrup.com/interviews.html | {
"source": [
"https://softwareengineering.stackexchange.com/questions/118295",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4526/"
]
} |
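A follow-up worth noting: Java 7 later added try-with-resources, which gives block-scoped, deterministic cleanup for anything implementing AutoCloseable. It is not full RAII (like C#'s using, it is still opt-in at every call site), but it narrows the gap discussed above. A minimal sketch with an invented connection class:

```java
class DbConnection implements AutoCloseable {
    DbConnection(String connStr) { /* acquire the resource */ }

    @Override
    public void close() { /* release the resource deterministically */ }
}

class Example {
    void run() {
        // close() is guaranteed to run when the block exits, even if an exception is thrown.
        try (DbConnection dbConn = new DbConnection("connStr")) {
            // do stuff with dbConn
        }
    }
}
```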
118,376 | I'm wondering about functional or non-functional requirements. I have found lot of different definitions for those terms and I can't assign some of my requirement to proper category. I'm wondering about requirements that aren't connected with some action or have some additional conditions, for example: On the list of selected devices, device can be repeated. Database must contain at least 100 items Currency of some value must be in USD dollar. Device must have a name and power consumption value in Watts. are those requirements functional or non-functional ? | Functional requirements define what the system or application will do - specifically in the context of an external interaction (with a user, or with another system). When placing a new order, the system shall display the total cost and require confirmation from the user. That is a functional requirement; it describes a function of the system. Refer to Wikipedia: Functional Requirement for more details. Non-functional requirements are any requirements that don't describe the system's input/output behaviour. Note that we are still talking about requirements , not implementation details , so just because we're using the phrase "non-functional" doesn't mean that anything is fair game to put in that section. The most common types of non-functional requirements you'll see relate to system operation (availability, continuity, DR), performance (throughput, latency, storage capacity), and security (authentication, authorization, auditing, privacy). These are all cross-cutting concerns that impact every "feature" yet aren't really features themselves; they're more like feature metadata, helping describe not just whether the system does what it's supposed to but also how well it does it. Don't take that analogy too far, though - it's just an analogy. Non-functional requirements are not subjective or hand-wavey, contrary to what some people here seem to be suggesting. In fact, they should actually have a hard metric attached to them (i.e. response time of no more than 100 ms). NF requirements are also not implementation details or tasks like "upgrade the ORM framework" - no clue where anyone would get that idea. More details at Wikipedia: Non-Functional Requirement . To specifically address the examples in the question: On the list of selected devices, device can be repeated. Clearly a functional requirement. Describes what the system's output looks like. Database must contain at least 100 items Sounds like a business rule, so also a functional requirement. However, it seems incomplete. What is the reason for this rule? What will happen/should happen if the database contains fewer than 100 items? Currency of some value must be in USD dollar. Functional requirement, but not really a properly-stated one. A more useful wording would be: The system shall support one currency (USD). Obviously this would be amended if more than one currency needed to be supported, and then the requirement would have to include information about currency conversions and so on. Device must have a name and power consumption value in Watts. Not really any kind of requirement, this is more like a technical specification. A functional requirement would be stated as the power rating is assumed to be in Watts. If there's more than one UOM, then as with the currency, the functional requirements should have sections about unit conversions, where/how they are configured, etc. (if applicable). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/118376",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/40206/"
]
} |
118,419 | In a couple of months a colleague will be moving on to a new project and I will be inheriting one of his projects. To prepare, I have already ordered Michael Feathers' Working Effectively with Legacy Code . But this books as well as most questions on legacy code I found so far are concerned with the case of inheriting code as-is. But in this case I actually have access to the original developer and we do have some time for an orderly hand-over. Some background on the piece of code I will be inheriting: It's functioning: There are no known bugs, but as performance requirements keep going up, some optimizations will become necessary in the not too distant future. Undocumented: There is pretty much zero documentation at the method and class level. What the code is supposed to do at a higher level, though, is well-understood, because I have been writing against its API (as a black-box) for years. Only higher-level integration tests: There are only integration tests testing proper interaction with other components via the API (again, black-box). Very low-level, optimized for speed: Because this code is central to an entire system of applications, a lot of it has been optimized several times over the years and is extremely low-level (one part has its own memory manager for certain structs/records). Concurrent and lock-free: While I am very familiar with concurrent and lock-free programming and have actually contributed a few pieces to this code, this adds another layer of complexity. Large codebase: This particular project is more than ten thousand lines of code, so there is no way I will be able to have everything explained to me. Written in Delphi: I'm just going to put this out there, although I don't believe the language to be germane to the question, as I believe this type of problem to be language-agnostic. I was wondering how the time until his departure would best be spent. Here are a couple of ideas: Get everything to build on my machine: Even though everything should be checked into source code control, who hasn't forgotten to check in a file once in a while, so this should probably be the first order of business. More tests: While I would like more class-level unit tests so that when I will be making changes, any bugs I introduce can be caught early on, the code as it is now is not testable (huge classes, long methods, too many mutual dependencies). What to document: I think for starters it would be best to focus documentation on those areas in the code that would otherwise be difficult to understand e.g. because of their low-level/highly optimized nature. I am afraid there are a couple of things in there that might look ugly and in need of refactoring/rewriting, but are actually optimizations that have been out in there for a good reason that I might miss (cf. Joel Spolsky, Things You Should Never Do, Part I ) How to document: I think some class diagrams of the architecture and sequence diagrams of critical functions accompanied by some prose would be best. Who to document: I was wondering what would be better, to have him write the documentation or have him explain it to me, so I can write the documentation. I am afraid, that things that are obvious to him but not me would otherwise not be covered properly. Refactoring using pair-programming: This might not be possible to do due to time constraints, but maybe I could refactor some of his code to make it more maintainable while he was still around to provide input on why things are the way they are. 
Please comment on and add to this. Since there isn't enough time to do all of this, I am particularly interested in how you would prioritize. Update: As the hand-over project is now over, I have expanded this list with my own experiences in this answer below. | As you have access to the developer, you could ask: Which modules were the most difficult to code/implement. What were the problems and how were they overcome. Which modules have generated the most bugs. Which modules have resulted in the most difficult-to-solve bugs. Which bits of code he is most proud of. Which bits of code he would really like to refactor, but has not had the time. These questions will give you an insight into what's going to cause you the most problems, and, perhaps more importantly, a handle on the thought processes and perspectives of the original developer. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/118419",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/40190/"
]
} |
118,479 | One programmer committed some work to the SVN repository, then went home. After he left, the Hudson automatic build failed. Another programmer saw this, and after looking through the code changes, detected that the problem was the absence of one library. He added this library to SVN and the next build completed successfully. Did the second programmer do the right thing or should he have just waited until the first programmer fixed the issue? | It depends to some extent on how your team usually works, but I would say that was fine. Keeping the build working saves everyone else time. It's polite for the second programmer to drop the first an email to explain what he has done, just in case a specific version of the library is needed or there is some other complication. It's also a slightly more subtle way to point out that they had broken the build. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/118479",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/40240/"
]
} |
118,489 | My boss is currently attempting to apply some development standards to our team, so we had a meeting yesterday to discuss the standards which was mostly going well until she brought up: All DB tables will have a CreatedDate and LastUpdatedDate column, updated by triggers. At this point our team suffered an opinion schism; one half of us think that doing this on all tables is a large amount of work with little benefit (we work on fixed-budget projects so any cost comes from our company's profits); the second half believe it will help with support of the projects. I am firmly in the former camp. While I appreciate that some outside cases would cause the extra columns to improve supportability, in my opinion the amount of work that would be required to add the columns in the first place, as well as maintenance, would cause us to spend less time on more important things like Unit- or Load-Testing. Also, I'm fairly sure that these extra columns would make it more awkward to use an ORM - bearing in mind that we mainly use C# and Oracle, which isn't very ORM-happy to start with. So, my question is twofold: Am I in the right camp? I don't claim to have world-renowned database skills, so this might be a trivially easy addition with no adverse side-effects. How would you deal with a situation where a meeting about standards devolves into a slagging match? How can I really sell that this standard is not going to help us in the long term? | This is a fairly common practice, although I wouldn't say supportability is the main benefit. The real benefit for this approach is keeping an audit trail. It's also commonplace to have an extra column containing the username of the user who made the last update. If you're dealing with any kind of financial or sensitive data, I'm sure you've heard of things like PCI & SOX compliance. Having a comprehensive audit trail is essential in meeting those specifications. Disclaimer: There are, however, much better ways of achieving a database audit trail > https://stackoverflow.com/questions/1051449/ideas-on-database-design-for-capturing-audit-trails | {
"source": [
"https://softwareengineering.stackexchange.com/questions/118489",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5810/"
]
} |
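For completeness, the CreatedDate/LastUpdatedDate columns discussed above can also be maintained in application code rather than triggers. A hedged sketch using JPA lifecycle callbacks (the entity and column names are invented, and this is only one of several approaches, not a recommendation over triggers):

```java
import java.util.Date;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.PrePersist;
import javax.persistence.PreUpdate;
import javax.persistence.Temporal;
import javax.persistence.TemporalType;

@Entity
public class Customer {
    @Id
    private Long id;

    @Column(name = "CREATED_DATE")
    @Temporal(TemporalType.TIMESTAMP)
    private Date createdDate;

    @Column(name = "LAST_UPDATED_DATE")
    @Temporal(TemporalType.TIMESTAMP)
    private Date lastUpdatedDate;

    @PrePersist
    void onCreate() {
        createdDate = new Date();     // set once, when the row is first inserted
        lastUpdatedDate = createdDate;
    }

    @PreUpdate
    void onUpdate() {
        lastUpdatedDate = new Date(); // refreshed on every update
    }
}
```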
118,507 | So now Eclipse has offered Xtend and JetBrains is offering Kotlin - both of which seem to be watered down versions of Scala. My question is why? I've played with Scala a bit and it's not that hard. Is this just a reaction to the inherent difficulty of the leap from imperative to functional or is there something else at work here? EDIT: Apologies. Rereading the question as I originally posted it I can see where it sounds a bit like trolling. The way I phrased the question just seemed to be the best way to ask the question. I have seen blog postings to the effect "Scala is too hard/Scala is too complex" and also "Kotlin is an attempt to do Scala but simpler". I'll leave the phrasing as it was originally but I honestly wasn't trying to troll. | IMHO, as somebody who has been programming in Java for the last 7 years (it being my strongest language), I find Scala quite alien and am having a hard time getting used to it. Xtend feels more like Java, and I was able to write a simple application with it much more quickly. Granted, I didn't give myself enough time with Scala, but I certainly understand why some may be turned off by it. With that being said, people will choose a familiar hell over an unfamiliar heaven. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/118507",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7912/"
]
} |
118,586 | Note: I am surprised that this hasn't been asked before, and if it has I could not find it in a search. I've been on tons of websites, I've read tons of articles, and I have heard tons of explanations. Most of them were good, but they were all either too broad or too complicated or just plain bad. So my question is, how does a compiler work? If this is a difficult, broad question, please tell me. But if not, please answer the question. | A compiler is a program that translates the source code for another program from a programming language into executable code. The source code is typically in a high-level programming language (e.g. Pascal, C, C++, Java, Perl, C#, etc.). The executable code may be a sequence of machine instructions that can be executed by the CPU directly, or it may be an intermediate representation that is interpreted by a virtual machine (e.g. Java byte code). In short, a compiler converts a program from a human-readable format into a machine-readable format. As to how a compiler works, that is indeed complicated. There are books and university courses on the subject. I will attempt to briefly outline the main stages of the process, but this will be a very cursory overview. Lexing - break up the text of the program into "tokens". The tokens are the "words" of the programming language, such as identifiers (keywords, variable names, function names, etc.) or operators (=, *, &, etc.). Parsing - convert the sequence of tokens into a parse tree, which is a data structure representing various language constructs: type declarations, variable declarations, function definitions, loops, conditionals, expressions, etc. Optimization - evaluate constant expressions, optimize away unused variables or unreachable code, unroll loops if possible, etc. Translate the parse tree into machine instructions (or JVM byte code). Again, I stress that this is a very brief description. Modern compilers are very smart, and, consequently, very complicated. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/118586",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/34364/"
]
} |
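To make the "Lexing" stage above a little more tangible, here is a deliberately tiny Java tokenizer for arithmetic expressions. It is a toy, not how production compilers are built, but it shows what "breaking the text into tokens" means.

```java
import java.util.ArrayList;
import java.util.List;

public class TinyLexer {
    public static List<String> tokenize(String source) {
        List<String> tokens = new ArrayList<>();
        int i = 0;
        while (i < source.length()) {
            char c = source.charAt(i);
            if (Character.isWhitespace(c)) {
                i++;                                  // skip whitespace between tokens
            } else if (Character.isDigit(c)) {
                int start = i;                        // a number token: one or more digits
                while (i < source.length() && Character.isDigit(source.charAt(i))) i++;
                tokens.add(source.substring(start, i));
            } else if ("+-*/()".indexOf(c) >= 0) {
                tokens.add(String.valueOf(c));        // a single-character operator token
                i++;
            } else {
                throw new IllegalArgumentException("Unexpected character: " + c);
            }
        }
        return tokens;
    }

    public static void main(String[] args) {
        System.out.println(tokenize("12 + 3 * (40 - 5)"));
        // prints [12, +, 3, *, (, 40, -, 5, )]
    }
}
```

A parser would then take this flat token list and build the tree structure the answer describes.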
118,661 | I am a programmer in C and C++, although I don't stick to either language and write a mixture of the two. Sometimes having code in classes, possibly with operator overloading, or templates and the oh so great STL is obviously a better way. Sometimes use of a simple C function pointer is much much more readable and clear. So I find beauty and practicality in both languages. I don't want to get into the discussion of "If you mix them and compile with a C++ compiler, it's not a mix anymore, it's all C++" I think we all understand what I mean by mixing them. Also, I don't want to talk about C vs C++, this question is all about C++11. C++11 introduces what I think are significant changes to how C++ works, but it has introduced many special cases, exceptions and irregularities that change how different features behave in different circumstances, placing restrictions on multiple inheritance, identifiers that act as keywords, extensions of string literals, lambda function variable capturing, etc. I know that at some point in the future, when you say C++ everyone would assume C++11. Much like when you say C nowadays, you most probably mean C99. That makes me consider learning C++11. After all, if I want to continue writing code in C++, I may at some point need to start using those features simply because my colleagues have. Take C for example. After so many years, there are still many people learning and writing code in C. Why? Because the language is good. What good means is that, it follows many of the rules to create a good programming language. So besides being powerful (which easy or hard, almost all programming languages are), C is regular and has few exceptions, if any. C++11 however, I don't think so. I'm not sure that the changes introduced in C++11 are making the language better. So the question is: Why would I learn C++11? | It's simple. C++11 makes code dramatically easier, cleaner to write, and faster. nullptr is a VAST improvement over the old 0 . It's type-safe and doesn't convert when it shouldn't- unlike 0 . It's a good thing that nullptr won't convert to an int . It doesn't make sense for that to happen at all. Do you know what the C++ Committee found when they tried to consider #define NULL nullptr ? Stuff like char c = NULL; . How terrible is that? The only reason there's an exception here is because bool is considered an integral type, which is quite wrong- but that was there in C++ before, and in C. The fact that nullptr doesn't convert is good , it's great and you should love it. Or how about rvalue references and variadic templates? Faster, more generic code. That's a total win right there. How about the library improvements? Stuff like function , unique_ptr and shared_ptr are so much better than what was there before, it's impossible to argue that the C++03 way was better. #define adding_func(x, y) ((x)+(y)) Not even remotely equivalent. Macros are bad for six billion reasons. I'm not going to quote all of them here, but it's well known that macros should be avoided for pretty much all purposes that they can possibly be avoided for. What are you going to do when it's #define add_twice(x) (x + x) Oh wait, I hope you didn't increment or something on x . Which the template function version is totally immune to. I also hope that you don't appreciate namespaces , for example. Then you open yourself to a world of undefined behavior for using
external variables whose scopes are already finished. In a functional API, e.g. STL algorithms, then reference is fine. If it's a stored callback, then you need to capture by value. Whatever documentation you have on the function should clearly indicate which is necessary. The fact that the code is written in a lambda is irrelevant to the problem of referring to local variables- if you pass a regular function object, then you're going to have the exact same trouble. And it's not a trouble. At all. Because it's inherently obvious when you can and can't refer to local variables. Take C for example. After so many years, there are still many people
learning and writing code in C. Why? There are many people who don't brush their teeth in the morning. There are many murderers, and rapists, and prostitutes. And politicians. People who commit suicide. Would you argue that that therefore makes these activities good or useful? Of course not. It's a logical fallacy that just because someone did it, therefore it must be good or useful. C is still being written for three reasons: because C++ is a bitch to implement, for example on embedded or kernel mode; because legacy codebases are written in C and would cost too much to upgrade, although even that's questionable given C++'s excellent C interop; and because the people writing it don't know how to program. That's it. There's no other reason to write C. If you take C or the old style C++, you wouldn't find many exceptions. How about the pathetic C-style arrays, for a simple example? The number of people who can't get arrays and pointers straight in their head is obscene. Not to mention the fact that the C Standard library is incredibly unsafe. Your core arguments are full of logical fallacies and misunderstandings. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/118661",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
118,703 | I often talk to programmers who say " Don't put multiple return statements in the same method. " When I ask them to tell me the reasons why, all I get is " The coding standard says so. " or " It's confusing. " When they show me solutions with a single return statement, the code looks uglier to me. For example: if (condition)
return 42;
else
return 97; " This is ugly, you have to use a local variable! " int result;
if (condition)
result = 42;
else
result = 97;
return result; How does this 50% code bloat make the program any easier to understand? Personally, I find it harder, because the state space has just increased by another variable that could easily have been prevented. Of course, normally I would just write: return (condition) ? 42 : 97; But many programmers eschew the conditional operator and prefer the long form. Where did this notion of "one return only" come from? Is there a historical reason why this convention came about? | "Single Entry, Single Exit" was written when most programming was done in assembly language, FORTRAN, or COBOL. It has been widely misinterpreted, because modern languages do not support the practices Dijkstra was warning against. "Single Entry" meant "do not create alternate entry points for functions". In assembly language, of course, it is possible to enter a function at any instruction. FORTRAN supported multiple entries to functions with the ENTRY statement: SUBROUTINE S(X, Y)
R = SQRT(X*X + Y*Y)
C ALTERNATE ENTRY USED WHEN R IS ALREADY KNOWN
ENTRY S2(R)
...
RETURN
END
C USAGE
CALL S(3,4)
C ALTERNATE USAGE
CALL S2(5) "Single Exit" meant that a function should only return to one place: the statement immediately following the call. It did not mean that a function should only return from one place. When Structured Programming was written, it was common practice for a function to indicate an error by returning to an alternate location. FORTRAN supported this via "alternate return": C SUBROUTINE WITH ALTERNATE RETURN. THE '*' IS A PLACE HOLDER FOR THE ERROR RETURN
SUBROUTINE QSOLVE(A, B, C, X1, X2, *)
DISCR = B*B - 4*A*C
C NO SOLUTIONS, RETURN TO ERROR HANDLING LOCATION
IF DISCR .LT. 0 RETURN 1
SD = SQRT(DISCR)
DENOM = 2*A
X1 = (-B + SD) / DENOM
X2 = (-B - SD) / DENOM
RETURN
END
C USE OF ALTERNATE RETURN
CALL QSOLVE(1, 0, 1, X1, X2, *99)
C SOLUTION FOUND
...
C QSOLVE RETURNS HERE IF NO SOLUTIONS
99 PRINT 'NO SOLUTIONS' Both these techniques were highly error prone. Use of alternate entries often left some variable uninitialized. Use of alternate returns had all the problems of a GOTO statement, with the additional complication that the branch condition was not adjacent to the branch, but somewhere in the subroutine. Thanks to Alexey Romanov for finding the original paper. See http://www.cs.utexas.edu/users/EWD/ewd02xx/EWD249.PDF , page 28 (printed page number is 24). Not limited to functions. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/118703",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3684/"
]
} |
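A small Java illustration of why the modern reading is relaxed about multiple returns: guard clauses exit early and avoid both the extra mutable result variable and the nesting. The discount rules here are invented for the example.

```java
class Discounts {
    // Multiple returns: each guard handles one case and gets out of the way.
    static int discountFor(Integer age) {
        if (age == null) return 0;
        if (age < 18) return 10;
        if (age >= 65) return 15;
        return 5;
    }

    // Single-exit version: same logic, but with a mutable variable and more nesting.
    static int discountForSingleExit(Integer age) {
        int result;
        if (age == null) {
            result = 0;
        } else if (age < 18) {
            result = 10;
        } else if (age >= 65) {
            result = 15;
        } else {
            result = 5;
        }
        return result;
    }
}
```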
118,740 | My company is going to hire an external developer to create some new modules and fix some bugs in our PHP software. We have never hired an external developer by the hour before. How can we protect the source code? We are not comfortable giving out source code and were thinking that everything remained under a surveillance enabled VPN which external developer would log in to. Has anyone solved this problem before? If so, how? Edit: We want the developer to see/modify the code but under surveillance and on our machine remotely. Does anybody have a similar setup? Edit 2: NDA is just a formality. IMO, even people who are in favor of NDAs know that it'll do nothing to protect their property. Edit 3: Let me clarify that we aren't worried about the developer copying an algorithm or a solution from the code. Code is coming out of his brain, so naturally he is the creator and he can create that again. But our code is built over several years with tens of developers working on it. Let's say I hire an incompetent programmer by mistake, who steals our years of work and then sells it to the competitor. That can make us lose our cutting edge. I know this is rare, but such a threat has to be taken under consideration if you're in business. I'll make points of my comments so its easy for everyone to communicate: Why NDA sucks? Take this scenario, if anyone is capable of suggesting a solution to this scenario I will consider the NDA effective. Ok, here goes:
We hire 2 external developers, one of them sells our code as it is to someone else after a year. You are no longer in touch with any of the developers, how are you supposed to find out who ripped you off? NDA does provide a purpose, but you can't rely completely on that. At least we cannot. I did not meant to offend anyone while I was posting this question, even though unintentionally I did. But again to people answering/commenting like 'I will never ever work with you' or that Men-in-black-gadget thingy: It's not about you, it's a thread about how feasible a given technical solution would be. And if anyone in this community has worked under such an environment. About 'Trust', of course we won't hire anyone we do not trust. But is that it? Can't someone be deceitful at first? We all trusted a lot of politicians to run our country, did they not fail us ever? So, I'm saying 'trust' is a complete other layer of protection like NDA, and my question was not directed to it. My question is rather directed towards technical measures we can take to avoid such a thing from happening. | Use source control. There is nothing a remote developer can do that will not be reversible. Apart from that, depending on what you mean by "protect", you should have the right contract with him, including NDA. On another note - why hire an external developer in the first place, if you are not going to trust him? Update: Now that you have clarified that by "protect" you mean "not allow to get the sensitive code", my points above about NDAs and trust remain unchanged. When it comes to source control, if you have several repositories where you have different levels of code (boilerplate - not sensitive, infrastructure - not sensitive, business logic - very sensitive etc...), you can select which repository to give access to this developer. Of course, this depends on whether you can segregate like this and still have a working application (for this to work, some repositories may require having binary dependencies checked-in - these would be compile artefacts from the sensitive repositories). The feasibility of this depends on what you want the developer to work on. Even with the scheme described above, you need to consider decompilation and reverse engineering of code (this is always possible with a determined enough attacker) so obfuscation of code/binaries may be another thing you need to consider (again, this is not perfect - with enough know how and determination, the best obfuscators can be defeated). In essence, my point is that if you want to protect a sensitive code base, you should only give access to the sensitive portions to people you trust. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/118740",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/40334/"
]
} |
118,788 | Is this an antipattern? Is it an acceptable practice? try {
//do something
} catch (Exception e) {
try {
//do something in the same line, but being less ambitious
} catch (Exception ex) {
try {
//Do the minimum acceptable
} catch (Exception e1) {
//More try catches?
}
}
} | This is sometimes unavoidable, especially if your recovery code might throw an exception. Not pretty, but sometimes there are no alternatives. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/118788",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/29765/"
]
} |
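When every fallback can itself fail, one way to keep the nesting from growing indefinitely is to walk an ordered list of attempts. This is a sketch in Java rather than a drop-in replacement, and the loader names in the usage comment are invented.

```java
import java.util.List;
import java.util.concurrent.Callable;

class FallbackChain {
    static <T> T firstSuccessful(List<Callable<T>> attempts) throws Exception {
        Exception last = null;
        for (Callable<T> attempt : attempts) {
            try {
                return attempt.call();   // stop at the first attempt that works
            } catch (Exception e) {
                last = e;                // remember the failure, move on to the next attempt
            }
        }
        if (last != null) throw last;    // every attempt failed: surface the last error
        throw new IllegalStateException("no attempts supplied");
    }
}

// Usage (assuming these loaders exist somewhere):
// Data d = FallbackChain.firstSuccessful(List.of(
//         () -> loadFromNetwork(), () -> loadFromCache(), () -> loadDefaults()));
```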
118,818 | I hear a lot about keeping methods short and I've heard a lot of programmers say that using #region tags within a method is a sure sign that it is too long and should be refactored into multiple methods. However, it seems to me that there are many cases where separating code with #region tags within a method is the superior solution to refactoring into multiple methods. Suppose we have a method whose computation can be separated into three rather distinct phases. Furthermore, each of these stages is only relevant to the computation for this method, and so extracting them into new methods gains us no code reuse. What, then, are the benefits of extracting each phase into it's own method? As far as I can tell, all we gain is some readability and a separate variable scope for each phase (which will help prevent modifications of a particular phase from accidentally breaking another phase). However, both of these can be achieved without extracting each phase into its own method. Region tags allow us to collapse the code into a form which is just as readable (with the added benefit that we no longer have to leave our place in this file should we decide to expand and examine the code), and simply wrapping each phase in {} creates its own scope to work with. The benefit to doing it this way is that we don't pollute the class level scope with three methods which are actually only relevant to the inner workings of a fourth method. Immediately refactoring a long method into a series of short methods seems to me to be the code-reuse equivalent to premature optimization; you are introducing extra complexity in order to address a problem which in many cases never arises. You can always extract one of the phases into its own method later should the opportunity for code reuse arise. Thoughts? | All you should ever care about is for your code to be usable, not reusable. A monkey can transform usable code to reusable code, if there are any transformations to be done at all. The argument "I need this only here" is poor, to put it politely. The technique you're describing is often referred to as the headlines technique and is generally frowned upon. You can't test regions, but you can test true methods in isolation. Regions are comments, not syntactic elements. In the worst case the nesting of your regions and your blocks contradict each other. You should always strive to represent the semantics of your structure with the syntactic elements of the language you are using. After refactoring to methods, one no longer needs folding to read the text. One might be looking at source code in any tool that doesn't have folding, like a terminal, a mail client, a web-view for your VCS or a diff-viewer. After refactoring to methods, the resulting method is at least as good as with regions, plus it is simpler to move the individual parts around. The Single Responsibility Principle suggests that any unit should have one task and one task only. The task of the "main" method is to compose the "helper" methods to get the desired result. The "helper" methods solve a discrete, simple problem in isolation. The class containing all these methods should also only fulfill one task and only contain the methods related to that task. So the helper methods either belong into the class scope, or the code shouldn't be in the class in the first place, but at least in some global method or, better yet, should be injected . Also relevant: Jeff Atwoods's thoughts on code folding | {
"source": [
"https://softwareengineering.stackexchange.com/questions/118818",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/40370/"
]
} |
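In Java terms, the refactoring that answer argues for looks roughly like this (the phases are invented): the public method reads like the headlines, and each phase becomes a small private method that can be read, tested, and moved without any editor folding.

```java
class ReportGenerator {
    public String generate(String rawData) {
        String cleaned = normalizeInput(rawData);   // phase 1
        int total = computeTotals(cleaned);         // phase 2
        return formatOutput(cleaned, total);        // phase 3
    }

    private String normalizeInput(String rawData) {
        return rawData == null ? "" : rawData.trim();
    }

    private int computeTotals(String cleaned) {
        return cleaned.length();                    // stand-in for the real calculation
    }

    private String formatOutput(String cleaned, int total) {
        return cleaned + " (" + total + " characters)";
    }
}
```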
118,886 | My company needs to hire a PHP developer, but nobody has PHP knowledge in my company and we find difficult to test for PHP skills. If it were a C/Java developer I would ask him to write a quick implementation of the Game of Life, but PHP is a completely different language. I saw this test with interest: http://vladalexa.com/scripts/php/test/test_php_skill.html Anyone else has more suggestions? | Code Ask the candidate to write code Ask the candidate to read code If you do ask the candidate to write code make sure that: The code is non trivial but small You allow access to the manual and the internet If you do ask the candidate to read code make sure that: The code has some trivial errors The code has some non trivial errors The code works fine, but it can be easily optimized You can use three or more different pieces of code, start from the simpler one and only advance to the next if you see that the candidate copes with ease. Throw in some recursion, to spice things up. Resources Ask for a detailed list of PHP resources the candidate uses. Books, blogs, forums, magazines, etc. That's how my current employers found out about StackOverflow . If the candidate mentions StackOverflow or Programmers, you should NOT ask or try to find out their username. If they wanted to advertise their reputation they would have included a Careers 2.0 link on their resume. Frameworks Every PHP developer should know of the most popular PHP frameworks: Laravel Zend Framework CodeIgniter Symfony CakePHP Yii and be fluent in at least one of them. You can have a few code samples ready for each one and ask the candidate to read and explain them, after they tell you which one they are more familiar with. Debugging & Profiling I've always felt that PHP developers are lacking debugging and profiling skills (perhaps only the PHP developers I've worked with). If during the discussion you find out that the candidate actively uses xdebug , don't bother with the rest of the interview and just hire them. ;) Input sanitization This is important. You can start with a discussion on why it's important and then ask for the most common methods to achieve it. This discussion will help you on what to ask. Some hints: mysqli_real_escape_string is good magic quotes are bad PHP snafus You can find a lot of PHP snafus in this excellent discussion . If you are interviewing for a senior position you should definetaly ask on some of those. Some examples: PHP's handling of numeric values in strings: "01a4" != "001a4" // true
"01e4" == "001e4" // also true Valid PHP code : System.out.print("hello"); In PHP, a string is as good as a function pointer: $x = "foo";
function foo(){ echo "wtf"; }
$x(); # "wtf" Unit testing Need I say more? Conclusion A good PHP developer should combine a variety of skills & talents: A good understanding of HTTP A good understanding of Apache configuration (Even if you use a different web server at your company) At least a basic understanding of JavaScript A great understanding of HTML / CSS The list goes on and on. Make sure you tailor the interview to the specific needs of the job opening, you don't want to hire just a good developer but a good developer that's great at what you immediately need him / her to do. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/118886",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/40029/"
]
} |
118,962 | Sometimes I stare blankly into space or sketch ideas and write some pseudocode on paper. Then I scratch it out and start again, and when I think I have the correct solution for the problem I begin writing the code. Is it normal to think for days without writing any code? Is this a sign that I am approaching the problem entirely wrong? It makes me nervous not to be getting any tangible code written in my IDE. | Depending on the problem you are trying to solve, the design phase can take weeks and months (if not years), not just days. It takes experience to not start bashing out code immediately. Thinking about the architecture and high level design should take days if not longer - definitely something that should happen before you start writing your code. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/118962",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3604/"
]
} |
118,971 | The company I work at recently hosted a web service in Windows Azure and announced that. Now trade online magazines say lots of meaningless stuff like "company X moves to the cloud", "company X drops desktops for the cloud", etc. Looks like there're lots of materials out there (starting with Wikipedia ) that are very lengthy and talk a lot about "services" and "low entry price" and other stuff but I've read all that and don't see how they could be helpful for a layman in drawing a line between a service in a cloud and Stack Exchange that is also a service but is run on brick-and-mortar servers in a colocation. Now from my experience with Windows Azure the real difference is the following. With a cloud the service owner rents hardware, network bandwidth and right to use the middleware (Windows 2008 that is used in Azure roles for example) on demand and also there's some maintenance assistance (like if the computer where a role is running crashes another computer is automatically found and the role is redeployed). Without a cloud the service owner will have to deal with all that on his own. Will that be the right distinction? | Yes, pretty much. With the "cloud" (as in "cloud providers"), you are renting the diskspace, bandwidth, CPU and memory owned by the provider and the means to use them from your software. They give you the infrastructure and you don't own the hardware. There are other forms of cloud computing that don't involve these providers, where you (the organisation) owns the hardware as well. In either regard, this mostly means that your software is running on a distributed network of computers, available on the Internet. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/118971",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/587/"
]
} |
119,018 | To my way of thinking, a for loop is used to iterate over a known or determinable range. String[] names = //something;
for ( int i = 0; i < names.length; i++ ) { //do stuff } which is equivalent (scoping of i aside) to : String[] names = //something;
int i = 0;
while (i < names.length )
{
// do stuff
i++;
} In other words, the for loop is simply a (highly useful) syntactic sugar for a commonly used while construct. However, I'm seeing a lot of for(;;) constructs on the web which are functionally equivalent to while(true) What is the reasoning for this? Why would the infinite for loop be preferred over the infinite while loop? // I even saw a java textbook that didn't use while loops at all! Leading to such monstrous constructs as: String input = getInput();
for( ; !inputIsValid(input) ; )
{
//redo;
} | It's a hold-over from old programming practices on the PDP-11 (yes, I said old ). It used to save a single instruction, which was useful for making loops run faster. See the following for additional information: http://www.flounder.com/exceptions.htm | {
"source": [
"https://softwareengineering.stackexchange.com/questions/119018",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3084/"
]
} |
119,051 | I have a question that I did not find an answer for except the following answer that does not meet my requirements: "Because James Gosling didn't want to" I know that Java can have interfaces (only pure virtual functions, no attributes), but it is not the exact same thing as class definitions. | The following answer that does not meet my requirements: "Because
James Gosling didn't want to." That's the right answer, though. The language design team (Gosling, Sheridan, Naughton, later Bill Joy, Ken Arnold, etc.) decided headers caused more problems than they solved . So they designed them out, and demonstrated that they could create a perfectly useful language without the need for them. From Section 2.2.1 of the Java Language Environment white paper : Source code written in Java is simple. There is no preprocessor, no
#define and related capabilities, no typedef, and absent those features, no longer any need for header files. Instead of header
files, Java language source files provide the definitions of other
classes and their methods. Redundant definitions, keeping files in sync, conflicting definitions, hidden definitions--none of these occur in Java, because you don't have headers. If you want to see a bare class definition, you can generate one from a .java file directly--e.g. most IDEs will show you the structure of a class in a sidebar, which amounts to the same thing. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/119051",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/40283/"
]
} |
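As a concrete (hedged) illustration of generating the "bare class definition" on demand: given a class like the one below, the JDK's standard javap tool prints just the declarations from the compiled class, which is roughly the information a C++ header would have carried.

```java
// Point.java
public class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) { this.x = x; this.y = y; }

    public int getX() { return x; }
    public int getY() { return y; }

    public double distanceTo(Point other) {
        int dx = x - other.x;
        int dy = y - other.y;
        return Math.sqrt(dx * dx + dy * dy);
    }
}

// Running `javap Point` on the compiled class prints roughly:
//   public class Point {
//     public Point(int, int);
//     public int getX();
//     public int getY();
//     public double distanceTo(Point);
//   }
```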
119,095 | We have a lot of programming languages. Every language is parsed and syntax checked before being translated into code so an abstract syntax tree (AST) is built. We have this abstract syntax tree, why don't we store this syntax tree instead of the source code (or next to the source code)? By using an AST instead of the source code. Every programmer in a team can serialize this tree to any language they want (with the appropriate context free grammar) and parse back to AST when they are finished. So this would eliminate the debate about the coding style questions (where to put the { and }, where to put whitespace, indentation, etc.) What are the pros and cons of this approach? | Whitespace and Comments Generally an AST does not include whitespace, line terminators, and comments. Meaningful Formatting You are correct that in most cases this is a positive (eliminates formatting holy wars), there are many cases where the formatting of the original code conveys some meaning, such as in multi-line string literals and "code paragraphs" (separating blocks of statements with an empty line). Code that can't be compiled While many parsers are very resilient to missing syntax, code with errors often results in a very weird syntax tree, which is fine and dandy up until the point where the user reloads the file. Ever make a mistake in your IDE and then all of a sudden the entire file has "squigglies"? Imagine how that would be reloaded in another language. Maybe users don't commit unparsable code, but they certainly do have a need to save locally. No two languages are perfect matches As others have pointed out, there are almost no two languages that have perfect feature parity. The closest I can think is VB and C#, or JavaScript and CoffeeScript, but even then VB has features like XML Literals that don't quite have equivalents in C#, and the JavaScript to CoffeeScript conversion might result in a lot of JavaScript literals. Personal Experience: In a software application I write, we actually need to do this, as the users are expected to enter "plain English" expressions that are converted to JS in the background. We considered only storing the JS version, but found almost no acceptable way to do so that reliably loaded and unloaded, so we ended up always storing both the user text and the JS version, as well as a flag that indicated if the "plain english" version parsed perfectly or not. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/119095",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7431/"
]
} |
119,168 | This is a somewhat subjective question but I'd love to hear feedback/opinions from either interviewers/interviewees on the topic. We split our technical interview into 4 parts. Write Code, Read & Analyse Code, Design Session & Code on the white board. For the last part what we ask interviewees to do is write a small code snippet (4-5 lines) on the whiteboard and explain as they go through it. Let me be clear: the purpose is not to catch people out. We're not looking for perfect syntax. Hell, it can even be pseudo-code, but the point is to give them a very simple problem and see if their brain can communicate the solution to us. By simple problems I mean "Reverse a string", "FizzBuzz" etc... Note that we always ask for an explicit language first. We're a .NET C# house. We've only said "pseudo-code" where someone has been blanking/really struggling with the code. My question is "Is it inappropriate / unreasonable to expect a programmer to write a code snippet on a whiteboard during an interview?" | In my view, it is very appropriate. If you are after a job doing a particular skill, then it is entirely appropriate to be expected to demonstrate that skill at interview. The effect of this technique on the recruitment process is very noticeable. 90% of candidates fail this task, but the developers recruited are good, and the developers will be respected inside the company. If, as a candidate, you are facing this technique, first of all relax. It's about assessing you as a programmer and your thought processes. It is not about your perfect syntax. If you make a syntax error then I might play the role of a compiler and tell you that the code fails to compile on a certain line, and give you an error message, and see how you respond. Likewise, if you add a ; onto a loop or an if statement that would compile, I'd play the debugger and talk you through a single step through the code. Again, it's not about the mistake, it's about how you would cope with the mistake and whether your thought processes are good. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/119168",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/32505/"
]
} |
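For scale, the kind of 4-5 line snippet the question has in mind is on the order of this Java reverse-a-string; any similarly small problem would do.

```java
static String reverse(String s) {
    StringBuilder reversed = new StringBuilder();
    for (int i = s.length() - 1; i >= 0; i--) {
        reversed.append(s.charAt(i));   // walk the string backwards, copying each character
    }
    return reversed.toString();
}
```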
119,206 | Often, when I am initializing something I have to use a temporary variable, for example: file_str = "path/to/file"
file_file = open(file) or regexp_parts = ['foo', 'bar']
regexp = new RegExp( regexp_parts.join('|') ) However, I like to reduce the scope of my variables to the smallest scope possible so there are fewer places where they can be (mis-)used. For example, I try to use for(var i ...) in C++ so the loop variable is confined to the loop body. In these initialization cases, if I am using a dynamic language, I am then often tempted to reuse the same variable in order to prevent the initial (and now useless) value from being used later in the function. file = "path/to/file"
file = open(file)
regexp = ['...', '...']
regexp = new RegExp( regexp.join('|') ) The idea is that by reducing the number of variables in scope I reduce the chances to misuse them . However this sometimes makes the variable names look a little weird, as in the first example, where "file" refers to a "filename". I think perhaps this would be a non issue if I could use non-nested scopes begin scope1
filename = ...
begin scope2
file = open(filename)
end scope1
//use file here
//can't use filename on accident
end scope2 but I can't think of any programming language that supports this. What rules of thumb should I use in this situation? When is it best to reuse the variable? When is it best to create an extra variable? What other ways do we solve this scope problem? | Short answer: No, don't repurpose variables. The idea is that by reducing the number of variables in scope I reduce
the chances to misuse them. It sounds more like you're misusing variables so that you can reduce the number of them. Variables are cheap -- use as many as you need. Reusing variables to represent different things at different times will not be cheap in the long run; it will make your code more difficult to understand, and you or someone after you will be much more likely to create bugs trying to maintain such code. What other ways do we solve this scope problem? The best way is to choose descriptive names for your variables and never try to reuse one variable for two or more different concepts. The problem is not scope; the problem (as far as I can tell) is accidentally using a variable for the wrong thing. Descriptive names help in that respect. You can always set a variable that you no longer need to a value that's harmless or invalid. This is common in languages with manual resource management, where you free/delete/release pointers when you're done with them. After doing so, it's often a good idea to set them to nil to prevent future use of that pointer. You can do similar things with other types, like setting a loop counter to -1, but in practice it's not usually necessary. Also, don't write functions/methods that are so large that you have a hard time keeping track of all the variables you're using. If you've got dozens of variables floating around, there's a good chance that your code is too complicated; break it down into smaller independent tasks. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/119206",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/25404/"
]
} |
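The "descriptive names, no repurposing" advice above, translated into a small Java sketch (the file name is invented): each variable keeps a single meaning, and the temporary reader lives only as long as the block that needs it.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

class ScopeExample {
    static long countLines() throws IOException {
        Path configPath = Path.of("path/to/file");            // the name says it is a path, not a file
        try (BufferedReader configReader = Files.newBufferedReader(configPath)) {
            return configReader.lines().count();              // the reader's scope ends with this block
        }
    }
}
```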
119,326 | Is it fine to code PHP on Windows and host it later on a server running Linux? Can there be any problems in the migration of such a project? I would think that there really can't be any problems, especially since I am a beginner in PHP and I won't use any of the advanced functions that may be OS-specific. However, I would like to make sure since I really don't like Linux at all. | Some pointers: Filesystem case sensitivity If your file is called HelloWorld.php this: include "helloworld.php"; is legit on Windows and will work. But Linux filenames are case sensitive, you can have files called HelloWorld.php , helloworld.php , hEllOwOrlD.php in the same directory. So you should develop on Windows as if you were developing on a case sensitive filesystem: use exactly the correct filenames, directory names, extension names - .php is also different from .PHP . Directory and path separators In Windows we say: include 'classes\myClass.php'; But in Linux we would say: include 'classes/myClass.php'; PHP is smart enough to not care, both separators work in both systems. But you should be consistent and go with the slash (/) everywhere as it's also the norm on most systems. There is a nifty predefined constant DIRECTORY_SEPARATOR that translates to the correct one, if you want to go that far: include "classes" . DIRECTORY_SEPARATOR . "myClass.php"; The same goes for the path separator, which is semicolon on Windows, colon otherwise. So to be safe you should do: set_include_path(get_include_path() . PATH_SEPARATOR . $path); when in need of a path separator. Although most people think that since PHP doesn't mind which separator you use it's ok, but there is one important catch: The separators will be the system specific ones when you ask the system for directories or paths. So let's say you want to explode the include path into its parts: $includePath = get_include_path();
$pathParts = explode(";", $includePath) // Will only work on Windows
$pathParts = explode(":", $includePath) // Will work on other systems but not Windows
$pathParts = explode(PATH_SEPARATOR, $includePath) // Will work everywhere!!! File encoding and delimiter You should set your IDE to set file encoding for all your scripts to UTF-8 instead of Cp*, and the file line delimiter to Unix ( "\n" instead of "\r\n" ). In most cases it won't really matter but you should be consistent and the best way is the Unix way (which works fine on Windows but not vice versa). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/119326",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/38684/"
]
} |
119,352 | I'm interested in whether ActiveRecord pattern, made famous from Ruby on Rails, encourages or discourages the use of SOLID design principles. For example, it seems to me that ActiveRecord objects both contain domain logic, and persistence logic, which is a violation of Single Responsibility. | There's some valid criticism on ActiveRecord. As always, Uncle Bob sums it up perfectly : The problem I have with Active Record is that it creates confusion about these two very different styles of programming. A database table is a data structure. It has exposed data and no behavior. But an Active Record appears to be an object. It has “hidden” data, and exposed behavior. I put the word “hidden” in quotes because the data is, in fact, not hidden. Almost all ActiveRecord derivatives export the database columns through accessors and mutators. Indeed, the Active Record is meant to be used like a data structure. On the other hand, many people put business rule methods in their Active Record classes; which makes them appear to be objects. This leads to a dilemma. On which side of the line does the Active Record really fall? Is it an object? Or is it a data structure? Wikipedia sums the criticism in a testability concern : In OOP the concept of encapsulation is often at odds with the concept of separation of concerns. Generally speaking, patterns that favor separation of concerns are more suitable to isolated unit tests while patterns that favor encapsulation have easier to use APIs. Active Record heavily favors encapsulation to the point where testing without a database is quite difficult. Specifically for the Ruby on Rails implementation, Gavin King writes (emphasis mine): At this point, most developers are thinking um, ok, so how the hell am I supposed to know what attributes a Company has by looking at my code? And how can my IDE auto-complete them? Of course, the Rails folks have a quick answer to this question Oh, just fire up your database client and look in the database!. Then, assuming that you know ActiveRecord's automagic capitalization and pluralization rules / perfectly /, you will be able to guess the names of the attributes of your own Company class, and type them in manually. Also on the Ruby on Rails implementation, John Januszczak writes (emphasis mine): PROBLEM #1: STATIC METHODS ... Some would say using Static methods simply amounts to procedural programming, and therefore is poor Object Oriented design. Others would say static methods are death to testability. PROBLEM #2: GLOBAL CONFIGURATION SETTINGS ... Therefore there is no dependency injection on the Account class in my example, and by extension, on the Account instances. As we should all know by now, looking for things is very, very bad! A few more resources on why ActiveRecord and ORM is generally considered to be an anti-pattern: Rails Domain model decoupling of Activerecord on StackOverflow, ORM is an anti-pattern Dirty Associations with ActiveRecord Issue #23: SOLID Design Principles on Practicing Ruby ActiveRecord always felt like an extremely useful anti-pattern , but I do agree that it goes against SRP and additionaly that it goes against the dependency inversion principle. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/119352",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3081/"
]
} |
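To make the "object or data structure?" dilemma in the answer above a little more concrete, here is a deliberately simplified sketch (in Python rather than Ruby, with invented names) of the two responsibilities an Active Record style class typically ends up carrying:
class Account:
    def __init__(self, account_id, balance):
        self.account_id = account_id
        self.balance = balance

    # Domain logic: a business rule that belongs to the object.
    def can_withdraw(self, amount):
        return 0 < amount <= self.balance

    # Persistence logic: the same class also knows how to talk to the
    # database, which is the Single Responsibility concern in the question.
    def save(self, connection):
        connection.execute(
            "UPDATE accounts SET balance = ? WHERE id = ?",
            (self.balance, self.account_id),
        )
Moving the save/query side into a separate repository or data mapper object is the usual way to restore that separation.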
119,394 | I've been thinking about interview questions lately and I've been reflecting on bad interview experiences I've had in the past. One of particular note is where I had asked the interviewer why the team chose to use EJB 3 over Spring in their product. The interviewer pretty much tore my face off, yelling "Because Spring is not the be all and end all of Java software development, do you want this job or not?". In response to this, I told him that this probably wasn't the job for me and I promptly walked out of the interview. I was informed at the beginning of the interview that the company had high staff turnover, the product they were working was initially created in Modula-3 then ported to Perl and finally to Java. I was handed a 10 page booklet of technical questions covering Java, EJB, SQL and JDBC and I was asked questions about the technology stacks I've worked with. When prompted to ask questions, I felt it was reasonable to ask them about their technology stack and and get reasonable answers back, not to send the interviewer into flames. Question: Is it a good idea to probe on architectural choices taken in an interview? If not, why? From my own point of view, an interview is a two-way process. If the interviewers are testing me on my technical skills, I've got every right to ask them the same questions to: 1) Figure out what their mindset and attitudes towards developing software are.
2) Determine if their approach is in line with how I would approach problems of that kind. It's possible that the interviewer who got angry had poor interviewing skills and forgot that an interview is a two-way exchange. If I was asked this, I would have given a reasonable answer, but I certainly wouldn't have tried to put an interviewee in a state of meek capitulation where the head just bobs up and down with no conversation. | Personally, I find interviewing people almost as exhausting and stressful as being interviewed. But that's because I agree with you that the interview process is a two-way exchange. I don't care how good you are, I don't want to hire you if you're not going to be happy working there. That's an expensive game to play. So I want to answer any concerns you might have and show you the team and product as they are, so that you can make an informed decision. When I'm looking for a job, I want to work with someone who shares that attitude. And, even if I suspect I know the answers to questions, I will ask them just to see the reaction. Aggression is never a sign of someone comfortable with a situation. I don't lie in an interview, on either side of the desk, because then they think they're hiring someone different / going to work somewhere different. And I expect the same in return, from the person on the other side of the interview. Unfortunately, that means that occasionally I run into interviews like the one you described. Are they horrible experiences? Yes. Do I come out of there knowing exactly where the interview went wrong? Yes. But am I very sure that every horrible experience would have been considerably worse if I'd got the job or hired the wrong person? Hell yes. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/119394",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/15455/"
]
} |
119,436 | Oracle seems to license all their Java-related open source code under the GPL with a classpath exception . From what I understand, this seems to allow to combine these libraries with your own code into products that do not have to be covered by the GPL. How does this work? What are examples of how I can and cannot use these classes? Why was this new license used as opposed to the LGPL, which seems to allow for pretty much the same things, but is better established and understood? What are the differences to the LGPL? | First off, I Am Not A Lawyer. But I have studied many licenses and understand issues concerning them. Second, I know this is an old question, but I think it still is a point of confusion and concern. If it ISN'T a point of concern, it should be. Choosing a license is a big deal that you can't trivially change down the road, especially if multiple contributors are involved. (L)GPL was written with C/C++ in mind, unfortunately. It speaks of "Source Code", "Object Code", "Dynamic Linking", "Static Linking", "Compilers" and "Object Code Interpreter". So translating this for other languages that don't follow the same compilation techniques (such as Java's bytecode, Python's just in time compilation or Javascript's interpreted nature) requires some guessing and assumptions. When you are talking about the law -- i.e. thinking about eventual court cases where two parties are arguing -- not having a clear cut distinction is a BAD THING. A standard GPL-licensed piece of code is pretty straightforward in intent. Anyone who uses that code is expected to release their code to all users when they distribute or sell it. That's the GPL virus that Richard Stallman wanted to create and did clearly and cleanly. The LGPL was originally an attempt to allow a "library" that wouldn't be viral. But they still wanted the end user to be able to replace the library on their own, hence the distinction between "static" and "dynamic" linking -- the user could swap to a different dynamically linked library, so it wouldn't need to be licensed as GPL. And a static link required the user to be GPL. The license actually talks about "header files", which are clear in C/C++ but obviously not clear when you are in the Java, Python, Javascript, etc. worlds. So the L ("library") of LGPL stuff is muddy, at best. This gets to the crux of the matter. Anything unclear is BAD in the world of laws. If I'm looking at building something using either GPL or LGPL component, I want to be certain what my legal standing is in the future if I land in court. But as of today, I'm not certain because there really haven't been good court cases establishing legal precedent, only intellectual arguments on forums like this. Here is where the Classpath Exception is invaluable. It clearly states that the code under the license is (L)GPL, but anything using that code can follow whatever license they'd like. No ifs, ands, or buts. If you change the core code (e.g. fixing bugs), you do still have to release those changes as part of the GPL. But using does NOT infect you. From a business perspective, I understand why some don't want to touch GPL code with a 10' pole. The legal standing is unclear and the business might be stung a decade down the road when legal precedent is finally set. Or they might be stuck in court for years fighting to establishing the legal precedent. Regardless they just don't want to risk the cost of that battle. 
Adding the Classpath Exception clause eliminates the legal questions and avoids any (serious) potential court case. So, to me, the Classpath Exception is much different from the LGPL. It is a legally clean way to draw a bright line allowing non-GPL use of GPL or LGPL source code or libraries. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/119436",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/35498/"
]
} |
119,470 | A lot of students when they graduate and get their first job, feel like they don't really know how to program even though they may have been good programmers in college. What are some of the differences between programming in an academic setting and programming in the 'real world'? | In a traditional undergraduate computer science program you learn just programming. But the real world doesn't want people who are just programmers. The real world wants real software engineers. I know many job descriptions don't seem to express this distinction, which only confuses the matter. In the real world you need to be able to: Gather and analyze requirements when they aren't directly given to you Design and analyze architecture with near endless possibilities Create test plans and act on them to evaluate and improve the quality of a system Work collaboratively on a team of people with different backgrounds and experience levels Estimate and plan work even if you don't know exactly what to build Communicate effectively with stakeholders who have different needs that don't necessarily align Negotiate schedule, budget, quality, and features without disappointing stakeholders Oh yeah, and you also have to be able to write code too, though that takes, on average, only 40 - 60% of a software engineer's time. So, it's not that freshly minted computer science undergrads don't know how to program (many are in fact, very good programmers). It's that many of them don't know how to do anything else. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/119470",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/38762/"
]
} |
119,551 | I've been working by myself on a fairly large open source project for quite a while and it's nearing the point where I'd like to release it. However, I'm self-taught and I don't really know anyone who could adequately review my project. A few years ago, I had released a small bit of code which pretty much got ripped apart (in a critical sense) on the forum where I released it. Even though the code worked, the criticism was accurate but brutal. It prompted me to begin searching for best practices for everything and in the end I feel that it made me a much better developer. I've gone over everything in my project so many times trying to make it perfect that I've lost count. I believe in my project and think it has the potential to help a lot of people and I feel like I've done some cool things in interesting ways with it. Still, because I'm self-taught, I can't help but wonder what gaps exist in my self-education. The way my code was ripped apart last time isn't something I'd like to repeat. I think my two biggest fears with releasing my project that I've poured countless hours into are being absolutely embarrassed because I missed some patently obvious things because of my self-education or, worse, releasing it to the sound of crickets. Is there anyone who has been in a similar situation? I'm not afraid of constructive criticism, so long as it is constructive and not just a rant on how I screwed up. I know there is a code review site on StackExchange, but it's not really set up for large projects and I didn't feel like the community there is large enough yet to get good feedback if I were to post parts of my project piecemeal (I tried with one file). What can I do to give my project at least some measure of success without getting embarrassed or devestated in the process? | Unless the project is aimed for developers (eg: a development framework, in which case you WANT them to criticize it if it makes you learn even more), you shouldn't worry. But even then, there are many open source projects aimed for developers that are crap, yet people love them because they go to the point (think of Codeigniter, which is very poorly architected, and yet it is the most popular php framework) If it is an application for regular humans, they will probably only care about the results. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/119551",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/40682/"
]
} |
119,600 | Is there a definitive guide to writing code comments, aimed at budding developers? Ideally, it would cover when comments should (and should not) be used, and what comments should contain. This answer : Do not comment WHAT you are doing, but WHY you are doing it. The WHAT is taken care of by clean, readable and simple code with proper choice of variable names to support it. Comments show a higher level structure to the code that can't be (or is hard to) show by the code itself. comes close, but it's a little concise for inexperienced programmers (an expansion on that with several examples and corner cases would be excellent, I think). Update : In addition to the answers here, I think this answer to another question is highly relevant. | You should be aware of the greatest weakness of comments: they grow stale. That is, as code changes, developers rarely update comments to stay in sync with the code. This means, that you can never trust them and still end up reading the code. For this very reason, your code should be self documenting. You should be choosing your function and variable names in such a way that the code reads like prose. So don't document WHAT the code is doing. Self-documenting code should take care of that. Document WHY you are doing it. The WHY's are usually business rule related or architecture related and won't change often and go stale as fast at the WHATs. Document edge cases. Document exceptions. Document design decisions. And most importantly document those design decisions you had considered, but decided not to implement (and document WHY you decided against using them). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/119600",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/10092/"
]
} |
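As a small, hypothetical illustration of the answer above (the discount rule and the numbers are invented for the example), the difference between a WHAT comment and a WHY comment looks like this:
def apply_discount(price, customer_years):
    # WHAT-style comment (adds nothing the code does not already say):
    #   multiply price by 0.9 if customer_years > 2
    #
    # WHY-style comment (the part the code cannot express):
    #   loyalty discount: accounts older than two years get 10% off,
    #   a business rule from a (hypothetical) customer retention policy.
    if customer_years > 2:
        return price * 0.9
    return price
The first comment merely restates the code and will drift out of sync; the second records the intent that the code itself cannot.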
119,714 | I currently am a programmer, I'm almost 16 years of age and have pretty much narrowed my careers down to something involving a Computer Science degree or Electrical Engineering degree (I know they are quite different but this question is about my friend) but my friend isn't so sure. He is very interested in maths and is very good at it and I think he would enjoy programming but he isn't willing to try it ( edit he is willing to try but has never done before). Can anyone give me an suggestions for a language or tool that he could dabble in programming (at a reasonably basic level I assume) to solve maths problems or involve some kind of maths. As I say he enjoys maths a lot but I think he would enjoy programming, the problem is I don't want him to be put off by the stuff that isn't relevant at introductory levels such as memory allocation et al. I know that is very important but the point is that I want him to learn a bit of programming with maths then hopefully if he is interested enough he can start learning programming as programming. Edit: Its not that he's completely uninterested - more that he hasn't actively explored the area before, maybe because he isn't informed about it. I wouldn't want to force him to do something he doesn't want to, I see this as more of a little push so that he can learn about programming. If he doesn't like it - fair enough, I can't control that and don't want to but if he turns out to enjoy it - this push will have been the right thing. | If you want a "math-like" language, Haskell is your best friend (for your best friend). You can easily make new functions without hassle. It is the best language recommendation I can give you for you friend. Here are some links: Try Haskell - An online Haskell compiler and tutorial. Learn You A Haskell For Great Good! - This is how I learned Haskell. How to write a Haskell program - The title is somewhat misleading, it is geared to get new programmers started. Programming in Haskell (Book) - Never read it, but it looks pretty good! Discrete Mathematics Using a Computer - THIS IS A GREAT BOOK FOR YOUR BUDDIES SITUATION 11 Reasons To Use Haskell - From a comment! Mathematica Wolfram's Mathematica is another interest he may have. Mathematica is a computational software program used in scientific, engineering, and mathematical fields and other areas of technical computing. It was conceived by Stephen Wolfram and is developed by Wolfram Research of Champaign, Illinois. Although it is expensive, it is worth it. Here are some links: Mathematica "Home Page" - Probably where you should start. Documentation - Learn and use Mathematica. Hands-On Mathematica Tutorial - Great way to get started. Algorithms Algorithms are important for any program, but your buddy should start with these when he gets comfortable with a language. Here are some more links: Algorithms by S. Dasgupta, C.H. Papadimitriou, and U.V. Vazirani - This is an online publication of a great book! LaTeX/Algorithms and Pseudocode - Looks promising! List of Algorithms (Wikipedia) - This is one of my favorite Wikipedia entries. Hope this helps! If you have any questions or feedback feel free to comment! By the way, all of these links are to free resources. If you want a printed book, I have a few recommendations, just leave a comment! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/119714",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8633/"
]
} |
119,778 | I am still looking for the best practice for domain model validation. Is it good to put the validation in the constructor of the domain model? My domain model validation example is as follows: public class Order
{
private readonly List<OrderLine> _lineItems;
public virtual Customer Customer { get; private set; }
public virtual DateTime OrderDate { get; private set; }
public virtual decimal OrderTotal { get; private set; }
public Order (Customer customer)
{
if (customer == null)
throw new ArgumentException("Customer name must be defined");
Customer = customer;
OrderDate = DateTime.Now;
_lineItems = new List<OrderLine>();
}
public void AddOrderLine //....
public IEnumerable<OrderLine> OrderLines { get { return _lineItems; } }
}
public class OrderLine
{
public virtual Order Order { get; set; }
public virtual Product Product { get; set; }
public virtual int Quantity { get; set; }
public virtual decimal UnitPrice { get; set; }
public OrderLine(Order order, int quantity, Product product)
{
if (order == null)
throw new ArgumentException("Order name must be defined");
if (quantity <= 0)
throw new ArgumentException("Quantity must be greater than zero");
if (product == null)
throw new ArgumentException("Product name must be defined");
Order = order;
Quantity = quantity;
Product = product;
}
} Thanks for all of your suggestion. | There's an interesting article by Martin Fowler on that subject that highlights an aspect most people (including me) tend to overlook: But one thing that I think constantly trips people up is when they
think object validity on a context independent way such as an isValid
method implies. I think it's much more useful to think of validation as something
that's bound to a context - typically an action that you want to do.
Is this order valid to be filled, is this customer valid to check in
to the hotel. So rather than have methods like isValid have methods
like isValidForCheckIn. From this follows that the constructor should not do validation, except perhaps some very basic sanity checking shared by all contexts. Again from the article: In About Face Alan Cooper advocated that we shouldn't let our ideas of
valid states prevent a user from entering (and saving) incomplete
information. I was reminded by this a few days ago when reading a
draft of a book that Jimmy Nilsson is working on. He stated a
principle that you should always be able to save an object, even if it
has errors in it. While I'm not convinced that this should be an
absolute rule, I do think people tend to prevent saving more than they
ought. Thinking about the context for validation may help prevent
that. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/119778",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/21968/"
]
} |
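The code in the question is C#, but the context-bound idea from the quoted article can be sketched in a few lines of Python; the class and the rule below are invented purely to show the shape of it:
class Order:
    def __init__(self, customer=None):
        # The constructor deliberately accepts an incomplete order; it can
        # be created and saved before it is valid for any particular action.
        self.customer = customer
        self.lines = []

    def is_valid_for_checkout(self):
        # Validation bound to one action, instead of a generic is_valid().
        return self.customer is not None and len(self.lines) > 0

order = Order()
order.lines.append(("book", 1))
print(order.is_valid_for_checkout())  # False: fine to save, not yet fine to check out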
119,782 | I find git hard to understand as I could not find the meaning of the words used for the actions. I have checked the dictionary for the meaning of 'stage' and none of the meanings were related to source control concepts. What does 'stage' mean in the context of git? | To stage a file is simply to prepare it, in a fine-grained way, for a commit. Git, with its index, allows you to commit only certain parts of the changes you've done since the last commit. Say you're working on two features - one is finished, and one still needs some work done. You'd like to make a commit and go home (5 o'clock, finally!) but wouldn't like to commit the parts of the second feature, which is not done yet. You stage the parts you know belong to the first feature (for example with git add -p, which lets you pick individual hunks) and commit. Now your commit is your project with the first feature done, while the second is still a work in progress in your working directory. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/119782",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/37238/"
]
} |
119,784 | Sometimes I hear people saying something like "All committed code must be working". In some articles people even write descriptions how to create svn or git hooks that compile and test code before commit. In my company we usually create one branch for a feature, and one programmer usually works in this branch. I often (1 per 100, I think and as I think with good reason) do non-compilable commits. It seems to me that requirement of "always compilable/stable" commits conflicts with the idea of frequent commits. A programmer would rather make one commit in a week than test the whole project's stability/compilability ten times a day. For only compilable code I use tags and some selected branches (trunk etc). I see these reasons to commit not fully working or not compilable code: If I develop a new feature, it is hard to make it work writing a few lines of code. If I am editing a feature, it is again sometimes hard to keep code working every time. If I am changing some function's prototype or interface, I would also make hundreds of changes, not mechanical changes, but intellectual. Sometimes one of them could cause me to carry out hundreds of commits (but if I want all commits to be stable I should commit 1 time instead of 100). In all these cases to make stable commits I would make commits containing many-many-many changes and it will be very-very-very hard to find out "What happened in this commit?". Another aspect of this problem is that compiling code gives no guarantee of proper working. So is it good idea to require every commit to be stable/compilable? Does it depends on branching model or CVS? In your company, is it forbidden to make non compilable commits? Is it (and why) a bad idea to use only selected branches (including trunk) and tags for stable versions? | Q: Is it good idea to require to commit only working code? A: It depends You should have at least 3 kinds of branches in your repository (although sometimes 2 of those are the same). Production : The code residing in a branch destined for production should always work (as in compiles and passes all the tests). Committing stuff that doesn't pass the tests is a grave offense and steps should be taken so that it never happens (daily, hourly, etc. builds with tests). Code should compile and pass tests . Latest : This branch holds the latest stable code. This is where every new feature should start from and end in. This is also the branch that holds the code that will eventually make its way into the production branch. Code should compile and pass tests . Feature : Anything can happen in feature branches. You could be in the middle of a big refactoring or writing a lot of new code. You don't want to lose any of that work so you commit often, even stuff that doesn't work. However , if you're working on a feature with multiple people you might want to set up some ground rules such as code should compile or decide that after a certain point, the branch should be stabilized and tests should pass thereafter. Code does not have to compile or pass the tests (unless decided by the team working on the feature). For the sake of simplicity, I used commit in my text above, but this also applies to the commit/push combo of distributed version control systems. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/119784",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/40787/"
]
} |
119,827 | I got this question in an interview and I was not able to solve it. You have a circular road, with N number of gas stations. You know the amount of gas that each station has. You know the amount of gas you need to GO from one station to the next one. Your car starts with 0. You can only drive clockwise. The question is: Create an algorithm, to know from which gas station you must start driving so that you complete a full circle. As an exercise to me, I would translate the algorithm to C#. | (Update: now allows a gas tank size maximum) You can solve this in linear time as follows: void FindStartingPoint(int[] gasOnStation, int[] gasDrivingCosts, int gasTankSize)
{
// Assume gasOnStation.Length == gasDrivingCosts.Length
int n = gasOnStation.Length;
// Make a round, without actually caring how much gas we have.
int minI = 0;
int minEndValue = 0;
int gasValue = 0;
for (int i = 0; i < n; i++)
{
if (gasValue < minEndValue)
{
minI = i;
minEndValue = gasValue;
}
gasValue = gasValue + gasOnStation[i] - gasDrivingCosts[i];
}
if (gasValue < 0)
{
Console.WriteLine("Instance does not have a solution: not enough fuel to make a round.");
}
else
{
// Try a round.
int gas = DoLeg(0, minI, gasTankSize, gasOnStation, gasDrivingCosts);
if (gas < 0)
{
Console.WriteLine("Instance does not have a solution: our tank size is holding us back.");
return;
}
for (int i = (minI + 1) % n; i != minI; i = (i + 1) % n)
{
gas = DoLeg(gas, i, gasTankSize, gasOnStation, gasDrivingCosts);
if (gas < 0)
{
Console.WriteLine("Instance does not have a solution: our tank size is holding us back.");
return;
}
}
Console.WriteLine("Start at station: " + minI);
}
}
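// DoLeg: refuel at station i (capped at the tank size), then spend the gas needed to reach the next station; a negative result means we got stranded on this leg.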
int DoLeg(int gas, int i, int gasTankSize, int[] gasOnStation, int[] gasDrivingCosts)
{
gas += gasOnStation[i];
if (gas > gasTankSize) gas = gasTankSize;
gas -= gasDrivingCosts[i];
return gas;
}
First, we look at the case where we don't have a gas tank with a maximum. Essentially, in the first for-loop, we just drive the circle around, not caring if our fuel tank has negative fuel or not. The point of this is that no matter where you start, the difference between how much there is in your fuel tank at the start (0) and at the end is the same. Therefore, if we end up with less fuel than we started (so less than 0), this will happen no matter where we start, and so we can't go a full circle. If we end up with at least as much fuel as we started after going a full circle, then we search for the moment our fuel tank was at its lowest point (which is always just as we get to a gas station). If we start at this point, we will never end up with less fuel than at this point (because it is the lowest point and because we don't lose fuel if we drive a circle). Therefore, this point is a valid solution, and in particular, there always is such a point.
Now we'll look at the version where our gas tank can hold only so much gas. Suppose that with the initial test (described above) we found out that it is not impossible to go the entire circle. Suppose that we start at gas station i, we tank at gas station j, but our gas tank ends up being full, so we miss out on some extra gas the station has available. Then, before we get to station k, we end up not having enough fuel, because of the gas we missed out on. We claim that in this scenario, this will end up happening no matter where you start. Suppose we start at station l. If l is between j and k, then we either stop (long) before we can get to station k because we started at a bad station, or we'll always have at most the amount of fuel that we had when we started at i when we try to get to k, because we passed through the same stations (and our tank was full when we passed j). Either case is bad. If l is not between j and k, then we either stop (long) before we get to j, or we arrive at j with at most a full tank, which means that we won't make it to k either. Either case is bad.
This means that if we make a round starting at a lowest point, just like in the case with the infinitely large gas tank, then we either succeed, or we fail because our gas tank was too small; but that means that we will fail no matter which station we pick first, which means that the instance has no solution. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/119827",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/40809/"
]
} |
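To show the first half of the argument above on some made-up numbers, here is a compact Python sketch of the infinite-tank case: drive around once keeping a running fuel balance, and start just after the balance reaches its minimum.
def find_start(gas, cost):
    balance, min_balance, start = 0, 0, 0
    for i in range(len(gas)):
        # The best start is the station where the running balance is lowest.
        if balance < min_balance:
            min_balance, start = balance, i
        balance += gas[i] - cost[i]
    return start if balance >= 0 else None  # None: not enough fuel overall

print(find_start([1, 2, 3, 4, 5], [3, 4, 5, 1, 2]))  # 3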
119,913 | Doing a Google search for "pythonic" reveals a wide range of interpretations. The Wikipedia page says: A common neologism in the Python community is pythonic, which can have a wide range of meanings related to program style. To say that code is pythonic is to say that it uses Python idioms well, that it is natural or shows fluency in the language. Likewise, to say of an interface or language feature that it is pythonic is to say that it works well with Python idioms, that its use meshes well with the rest of the language. It also discusses the term "unpythonic": In contrast, a mark of unpythonic code is that it attempts to write C++ (or Lisp, Perl, or Java) code in Python—that is, provides a rough transcription rather than an idiomatic translation of forms from another language. The concept of pythonicity is tightly bound to Python's minimalist philosophy of readability and avoiding the "there's more than one way to do it" approach. Unreadable code or incomprehensible idioms are unpythonic. What does the term "pythonic" mean? How do I learn to effectively apply it in practice? | I've found that most people have their own interpretations of what being "Pythonic" really means. From Wikipedia: A common neologism in the Python community is pythonic, which can have
a wide range of meanings related to program style. To say that code is
pythonic is to say that it uses Python idioms well, that it is natural
or shows fluency in the language. Likewise, to say of an interface or
language feature that it is pythonic is to say that it works well with
Python idioms, that its use meshes well with the rest of the language. In contrast, a mark of unpythonic code is that it attempts to write
C++ (or Lisp, Perl, or Java) code in Python—that is, provides a rough
transcription rather than an idiomatic translation of forms from
another language. The concept of pythonicity is tightly bound to
Python's minimalist philosophy of readability and avoiding the
"there's more than one way to do it" approach. Unreadable code or
incomprehensible idioms are unpythonic. I've found that more times than not, more "pythonic" examples are actually derived from people trying to be clever with Python idioms and (again, more times than not) rendering their code virtually unreadable (which is not Pythonic). As long as you're sticking to Python's idioms and avoiding trying to use C++ (or other language) styles in Python, then you're being Pythonic. As pointed out by WorldEngineer, PEP8 is a good standard to follow (and if you use VIM, there are plugins available for PEP8 linting). Really though, at the end of the day, if your solution works and isn't absolutely horribly un-maintainable and slow, who cares? More times than not, your job is to get a task done, not write the most elegant, pythonic code possible. Another side note (just my opinion, feel free to downvote because of it ;)): I've also found the Python community to be filled with a ton of ego (not that most communities aren't , it's just a little more prevalent in communities such as C and Python). So, combining the ego with misconstrued interpretations of being "pythonic" will tend to yield a whole lot of baseless negativity. Take what you read from others with a grain of salt. Stick to official standards and documentation and you'll be fine. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/119913",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
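To make the contrast in the answer above concrete, here is a small, hypothetical example of the same task written first as a rough transcription of a C-style loop and then with the idioms the Wikipedia excerpt is talking about:
# Unpythonic: index-based loop carried over from a C/C++ habit.
def squares_of_evens_c_style(numbers):
    result = []
    for i in range(len(numbers)):
        if numbers[i] % 2 == 0:
            result.append(numbers[i] ** 2)
    return result

# Pythonic: iterate over the values directly and use a list comprehension.
def squares_of_evens(numbers):
    return [n ** 2 for n in numbers if n % 2 == 0]

print(squares_of_evens([1, 2, 3, 4]))  # [4, 16]
Both versions return the same result; the second simply reads closer to the intent, which is what sticking to the language's idioms buys you.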
120,019 | I'm trying to understand the difference between procedural languages like C and object-oriented languages like C++. I've never used C++, but I've been discussing with my friends on how to differentiate the two. I've been told C++ has object-oriented concepts as well as public and private modes for definition of variables: things C does not have. I've never had to use these for while developing programs in Visual Basic.NET: what are the benefits of these? I've also been told that if a variable is public, it can be accessed anywhere, but it's not clear how that's different from a global variable in a language like C. It's also not clear how a private variable differs from a local variable. Another thing I've heard is that, for security reasons, if a function needs to be accessed it should be inherited first. The use-case is that an administrator should only have as much rights as they need and not everything, but it seems a conditional would work as well: if ( login == "admin") {
// invoke the function
} Why is this not ideal? Given that there seems to be a procedural way to do everything object-oriented, why should I care about object-oriented programming? | All answers so far have focused on the topic of your question as stated, which is "what is the difference between c and c++". In reality, it sounds like you know what difference is, you just don't understand why you would need that difference. So then, other answers attempted to explain OO and encapsulation. I wanted to chime in with yet another answer, because based on the details of your question, I believe you need to take several steps back. You don't understand the purpose of C++ or OO, because to you, it seems that your application simply needs to store data. This data is stored in variables.
"Why would I want to make a variable inaccessible? Now I can't access it anymore! By making everything public, or better yet global, I can read data from anywhere and there are no problems." - And you are right, based on the scale of the projects you are currently writing, there are probably not that many problems (or there are, but you just haven't become aware of them yet). I think the fundamental question you really need to have answered is: "Why would I ever want to hide data? If I do that, I can't work with it!"
And this is why: Let's say you start a new project, you open your text editor and you start writing functions. Every time you need to store something (to remember it for later), you create a variable. To make things simpler, you make your variables global.
Your first version of your app runs great. Now you start adding more features. You have more functions, certain data you stored from before needs to be read from your new code. Other variables need to be modified. You keep writing more functions.
What you may have noticed (or, if not, you absolutely will notice in the future) is, as your code gets bigger, it takes you longer and longer to add the next feature. And as your code gets bigger, it becomes harder and harder to add features without breaking something that used to work.
Why?
Because you need to remember what all your global variables are storing and you need to remember where all of them are being modified. And you need to remember which function is okay to call in what exact order and if you call them in a different order, you might get errors because your global variables aren't quite valid yet.
Have you ever run into this? How big are your typical projects (lines of code)?
Now imagine a project 5,000 to 50,000 times as big as yours. Also, there are multiple people working on it. How can everyone on the team remember (or even be aware of) what all those variables are doing? What I described above is an example of perfectly coupled code. And since the dawn of time (assuming time started Jan 1, 1970), humankind has been looking for ways to avoid these problems. The way you avoid them is by splitting up your code into systems, subsystems and components, and limiting how many functions have access to any piece of data.
If I have 5 integers and a string that represent some kind of state, would it be easier for me to work with this state if only 5 functions set/get the values? or if 100 functions set/get these same values?
Even without OO languages (i.e. C), people have been working hard on isolating data from other data and creating clean separation boundaries between different parts of the code. When the project gets to a certain size, ease of programming becomes not, "can I access variable X from function Y", but "how do I make sure ONLY functions A, B, C and no one else is touching variable X". This is why OO concepts have been introduced and this is why they are so powerful. They allow you to hide your data from yourself and you want to do it on purpose, because the less code that sees that data, the less chance there is, that when you add the next feature, you will break something. This is the main purpose for the concepts of encapsulation and OO programming. They allow you to break our systems/subsystems down into even more granular boxes, to a point where, no matter how big the overall project is, a given set of variables may only be accessed by 50-200 lines of code and that's it! There's obviously much more to OO programming, but, in essence, this is why C++ gives you options of declaring data/functions as private, protected or public. The second greatest idea in OO is the concept of abstraction layers. Although procedural languages can also have abstractions, in C, a programmer must make a conscious effort to create such layers, but in C++, when you declare a class, you automatically create an abstraction layer (it's still up to you whether or not this abstraction will add or remove value). You should read/research more about abstraction layers and if you have more questions, I'm sure this forum will be more than happy to answer those as well. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/120019",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/35064/"
]
} |
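The answer above is about C++'s access specifiers, but the structural point, keeping the number of places that can touch a piece of state small, can be sketched in any language. Python only enforces privacy by convention (the leading underscore), and the example below is invented, but the contrast is the same:
# Coupled style: module-level globals that any function anywhere may change.
current_user = None
request_count = 0

# Encapsulated style: the state lives in one class and only its few methods
# read or write it, so there is far less to keep in your head as code grows.
class SessionTracker:
    def __init__(self):
        self._current_user = None   # "private" by convention
        self._request_count = 0

    def log_in(self, user):
        self._current_user = user

    def record_request(self):
        self._request_count += 1

    def summary(self):
        return f"{self._current_user}: {self._request_count} requests"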
120,126 | What were the historical forces at work, the tradeoffs to make, in deciding to use groups of eight bits as the fundamental unit? There were machines, once upon a time, using other word sizes. But today, for non-eight-bitness, you must look to museum pieces, specialized chips for embedded applications, and DSPs . How did the byte evolve out of the chaos and creativity of the early days of computer design? I can imagine that fewer bits would be ineffective for handling enough data to make computing feasible, while too many would have lead to expensive hardware. Were other influences in play? Why did these forces balance out to eight bits? (BTW, if I could time travel, I'd go back to when the byte was declared to be 8 bits, and convince everyone to make it 12 bits, bribing them with some early 21st century trinkets.) | A lot of really early work was done with 5-bit baudot codes, but those quickly became quite limiting (only 32 possible characters, so basically only upper-case letters, and a few punctuation marks, but not enough "space" for digits). From there, quite a few machines went to 6-bit characters. This was still pretty inadequate though -- if you wanted upper- and lower-case (English) letters and digits, that left only two more characters for punctuation, so most still had only one case of letters in a character set. ASCII defined a 7-bit character set. That was "good enough" for a lot of uses for a long time, and has formed the basis of most newer character sets as well (ISO 646, ISO 8859, Unicode, ISO 10646, etc.) Binary computers motivate designers to making sizes powers of two. Since the "standard" character set required 7 bits anyway, it wasn't much of a stretch to add one more bit to get a power of 2 (and by then, storage was becoming enough cheaper that "wasting" a bit for most characters was more acceptable as well). Since then, character sets have moved to 16 and 32 bits, but most mainstream computers are largely based on the original IBM PC. Then again, enough of the market is sufficiently satisfied with 8-bit characters that even if the PC hadn't come to its current level of dominance, I'm not sure everybody would do everything with larger characters anyway. I should also add that the market has changed quite a bit. In the current market, the character size is defined less by the hardware than the software. Windows, Java, etc., moved to 16-bit characters long ago. Now, the hindrance in supporting 16- or 32-bit characters is only minimally from the difficulties inherent in 16- or 32-bit characters themselves, and largely from the difficulty of supporting i18n in general. In ASCII (for example) detecting whether a letter is upper or lower case, or converting between the two, is incredibly trivial. In full Unicode/ISO 10646, it's basically indescribably complex (to the point that the standards don't even try -- they give tables, not descriptions). Then you add in the fact that for some languages/character sets, even the basic idea of upper/lower case doesn't apply. Then you add in the fact that even displaying characters in some of those is much more complex still. That's all sufficiently complex that the vast majority of software doesn't even try. The situation is slowly improving, but slowly is the operative word. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/120126",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4280/"
]
} |
120,169 | I know that obj.func1().func2() is called method chaining, but what is the technical term for: func1(func2(), func3()); where the return value of one function is used as an argument to another. | I don't think it's function composition. Function composition means taking two or more functions and turning them into a new function, like f . g . h in Haskell. Note that no function is called at this point. Personally, I would refer to constructs like func1(func2(), func3()) as "nested function calls". | {
"source": [
"https://softwareengineering.stackexchange.com/questions/120169",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12041/"
]
} |
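A short Python sketch of the distinction drawn above (the helper functions are invented): a nested call evaluates the inner function immediately, while composition builds a new function without calling anything yet.
def double(x):
    return 2 * x

def increment(x):
    return x + 1

# Nested function call: increment runs first, its result feeds double.
print(double(increment(3)))  # 8

# Function composition: build a new function; nothing has been called yet.
def compose(f, g):
    return lambda x: f(g(x))

double_after_increment = compose(double, increment)
print(double_after_increment(3))  # 8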
120,178 | What's the difference between MariaDB and MySQL? I'm not very familiar with both. I'm primarily a front end developer for the most part. Are they syntactically similar? Where do these two query languages differ? Wikipedia only mentions the difference between licensing: MariaDB is a community-developed branch of the MySQL database, the
impetus being the community maintenance of its free status under GPL,
as opposed to any uncertainty of MySQL license status under its
current ownership by Oracle. | MariaDB is a backward compatible, binary drop-in replacement of MySQL . What this means is: Data and table definition files (.frm) files are binary compatible. All client APIs, protocols and structs are identical. All filenames, binaries, paths, ports, sockets, and etc... should be the same. All MySQL connectors work unchanged with MariaDB. The mysql-client package also works with MariaDB server. In most common practical scenarios, MariaDB version 5.x.y will work exactly like MySQL 5.x.y, MariaDB follows the version of MySQL, i.e. it's version number is used to indicate with which MySQL version it's compatible. MariaDB originated as a fork of MySQL by Michael "Monty" Widenius, one of the original developers of MySQL and co-founder of MySQL Ab. The MariaDB Foundation acts as the custodian of MariaDB. The main motivation behind MariaDB was to provide a floss version of MySQL, in case Oracle goes all corporate with MySQL. It's worth noting that Monty was vocal against MySQL acquisition (via Sun's acquisition) by Oracle. Although MariaDB is supposed to be compatible with MySQL, for one reason or the other there are quite a few compatibility issues and different features : MariaDB includes all popular open source engines, MariaDB claims several speed improvements over MySQL, and there are a few new floss extensions that MySQL lacks Finally, the name comes from Monty's daughter Maria (the other one being My), as MySQL is now a registered trademark of Oracle Corporation. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/120178",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/28477/"
]
} |
120,202 | I run across this problem a lot. For example, I currently write a read function and a write function, and they both check if buf is a NULL pointer and that the mode variable is within certain boundaries. This is code duplication. This can be solved by moving it into its own function. But should I? This will be a pretty anemic function (doesn't do much), rather localized (so not general purpose), and doesn't stand well on its own (can't figure out what you need it for unless you see where it is used). Another option is to use a macro, but I want to talk about functions in this post. So, should you use a function for something like this?
What are the pros and cons? | This is a great use of functions. This will be a pretty anemic function (doesn't do much)... That's good. Functions should only do one thing. rather localized... In an OO language, make it private. (so not general purpose) If it handles more than one case, it is general purpose. Moreover, generalising isn't the only use of functions. They are indeed there to (1) save you writing the same code more than once, but also (2) to break code up into smaller chunks to make it more readable. It this case it is achieving both (1) and (2). However, even if your function was being called from just one place, it might still help with (2). and doesn't stand well on its own (can't figure out what you need it for unless you see where it is used). Come up with a good name for it, and it stands fine on its own. "ValidateFileParameters" or something. Now stands fine on its own. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/120202",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/92/"
]
} |
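A minimal sketch of the suggestion, translated from the C-style read/write scenario in the question into Python; the names and the mode range are made up for illustration:
def _validate_file_parameters(buf, mode):
    # Small, single-purpose helper shared by read() and write().
    if buf is None:
        raise ValueError("buf must not be None")
    if not 0 <= mode <= 3:  # hypothetical range of valid modes
        raise ValueError("mode out of range: %d" % mode)

def read(buf, mode):
    _validate_file_parameters(buf, mode)
    ...  # actual read logic goes here

def write(buf, mode):
    _validate_file_parameters(buf, mode)
    ...  # actual write logic goes here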
120,213 | Most of the newcomers in programming start with basic projects to start with programming. Most of the C++ progammers spend some time with puzzles and contests but this is not always helpful. Sometimes you've to spend some time on real projects. Starting your own open source project could be a problem in self-learning for newbie cause of lack of mentors and peers who can't look at your code and give suggestions. Open source projects can solve this problem, some projects could be best suited for new programmers. Besides everybody is newbie at some point. So i'll try and make this question a bit from beginners perspective. I tried few questions on stack overflow before asking this like How do i join & Bare minimum you need and how to get involved with open source and what level of programming etc. But this is not helping me when it comes to self-evaluating with skills. How to find that out ? How can i check what it takes to join open source project and am i really that comfortable with huge source code etc. My question is when to consider yourself comfortable joining open source programming ? I mean how will you test yourself that you're ready to take burden of big/small projects of open source ? how will you test yourself to see if you could work with version control/other programmers/tight schedule etc ? | when to consider yourself comfortable
joining open source programming ? The best answer to that question, in my opinion, is " When you think you can bring something to the project ". You're using an application / library and something is missing, or you found a bug ? Report it, try to correct it, send a patch ; et voila ;-) Maybe your patch will be accepted, if it's OK ; keep doing that a couple of times : correcting bugs is something (even bugs you didn't report -- see the bugtracker of the project you choose) that will allow you to know the project. And, after a while, maybe you'll get commit rights to the projects ;-) It's not necessarily a question skills or whatever : you can participate in a big open source project without having to modifiy the core of the project or whatever : even small patches (like translation, minor modifications to the UI, minor bug corrections, ...) are usefull to the project, and they won't require you to be a rock start ; instead, they'll be a perfect start either for you to know the project, and others to see that you are doing well. About version control / other programmers / tight schedule : I'm guessing that, when you have (professionnaly speaking) worked for a couple of years, you are more than ready for all that ; open source projects are maybe even a bit more forgiving about that, in some ways -- for instance, there might be less presure than when you have a client on your back ^^ As a final note : whatever you do, if done well, will be useful : what matters is that you do it for the project, and not just "to do open source" ! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/120213",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/41045/"
]
} |
120,230 | I'm just getting started with functional programming and I'm wondering about the correct way to comment my code. It seems a little redundant to comment a short function as the names and signature already should tell you everything you need to know. Commenting larger functions also seems a little redundant since they are generally comprised of smaller self-descriptive functions. What is the correct way to comment a functional program? Should I use the same approach as in iterative programming? | The function name should say what you're doing. The implementation will tell you how you're doing it. Use comments to explain why you're doing it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/120230",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/29270/"
]
} |
120,244 | I am putting together some guidelines for code reviews. We do not have one formal process yet and are trying to formalize it. And our team is geographically distributed. We are using TFS for source control (we used it for tasks/bug tracking/project management as well, but we migrated that to JIRA ) with Visual Studio 2008 for development. What are the things you look for when doing a code review? These are the things I came up with Enforce FxCop rules (we are a Microsoft shop) Check for performance (any tools ?) and security (thinking about using OWASP - code crawler ) and thread safety Adhere to naming conventions The code should cover edge cases and boundaries conditions Should handle exceptions correctly (do not swallow exceptions) Check if the functionality is duplicated elsewhere A method body should be small (20-30 lines), and methods should do one
thing and one thing only (no side
effects and avoid temporal coupling -) Do not pass/return nulls in methods Avoid dead code Document public and protected methods/properties/variables What other things should we generally look out for? I am trying to see if we can quantify the review process (it would produce identical output when reviewed by different persons) Example: Saying "the method body should be no longer than 20-30 lines of code" as opposed to saying "the method body should be small". Or is code review very subjective (and would differ from one reviewer to another)? The objective is to have a marking system (say -1 point for each FxCop rule violation, -2 points for not following naming conventions, 2 points for refactoring, etc.) so that developers would be more careful when they check in their code. This way, we can identify developers who are consistently writing good/bad code. The goal is to have the reviewer spend about 30 minutes max, to do a review (I know this is subjective, considering the fact that the changeset/revision might include multiple files/huge changes to the existing architecture, etc., but you get the general idea, the reviewer should not spend days reviewing someone's code). What other objective/quantifiable system do you follow to identify good/bad code written by developers? Book reference: Clean Code: A handbook of agile software craftmanship by Robert Martin . | Grading individuals in a review is counter to most successful systems I've worked with, maybe all. But the goal I've been trying to reach for more than 20 years is fewer bugs and increased productivity per-engineer-hour. If grading individuals is a goal, I suppose reviews could be used. I've never seen a situation where it was required, as a worker or as a leader. Some objective study (Fagan, etc.) and a lot of popular wisdom suggests that peer relationships facilitate code reviews aimed at reducing bugs and increasing productivity. Working managers may participate as workers, but not as managers. Points of discussion are noted, changes to satisfy reviewers are generally a good thing but not required. Hence the peer relationship. Any automated tools that can be accepted without further analysis or judgment are good - lint in C, C++, Java. Regular compilation. Compilers are REALLY good at findng compiler bugs. Documenting deviations in automated checks sounds like a subtle indictment of the automated checks. Code directives (like Java does) that allow deviations are pretty dangerous, IMHO. Great for debugging, to allow you to get the heart of the matter, quickly. Not so good to find in a poorly documented, 50,000 non-comment-line block of code you've become responsible for. Some rules are stupid but easy to enforce; defaults for every switch statement even when they're unreachable, for example. Then it's just a check box, and you don't have to spend time and money testing with values which don't match anything. If you have rules , you'll have foolishness , they are inextricably linked . Any rule's benefit should be worth the foolishness it costs, and that relationship should be checked at regular intervals. On the other hand, "It runs" is no virtue before review, or defense in review. If development followed the waterfall model , you'd like to do the review when coding is 85% complete, before complicated errors were found and worked out, because review is a cheaper way to find them. Since real life isn't the waterfall model, when to review is somewhat of an art and amounts to a social norm. 
People who will actually read your code and look for problems in it are solid gold. Management that supports this in an on-going way is a pearl above price. Reviews should be like check-ins: early and often. I've found these things beneficial: 1) No style wars. Where open curly braces go should only be subject to a consistency check in a given file. All the same. That's fine then. Ditto indentation depths and tab widths. Most organizations discover they need a common standard for tab, which is used as a large space.
that doesn't line up is hard to read for content.` BTW, K&R indented five (FIVE) spaces, so appeals to authority are worthless. Just be consistent. 3) A line-numbered, unchanging, publicly available copy of the file to be reviewed should be pointed to for 72 hours or more before the review. 4) No design-on-the-fly. If there's a problem, or an issue, note its location, and keep moving. 5) Testing that goes through all paths in the development environment is a very, very, very, good idea. Testing that requires massive external data, hardware resources, use of the customer's site, etc, etc. is testing that costs a fortune and won't be thorough. 6) A non- ASCII file format is acceptable if creation, display, edit, etc., tools exist or are created early in development. This is a personal bias of mine, but in a world where the dominant OS can't get out of its own way with less than 1 gigabyte of RAM, I can't understand why files less than, say, 10 megabytes should be anything other than ASCII or some other commercially supported format. There are standards for graphics, sound, movies, executable, and tools that go with them. There is no excuse for a file containing a binary representation of some number of objects. For maintenance, refactoring or development of released code, one group of co-workers I had used review by one other person, sitting at a display and looking at a diff of old and new , as a gateway to branch check-in. I liked it, it was cheap, fast, relatively easy to do. Walk-throughs for people who haven't read the code in advance can be educational for all but seldom improve the developer's code. If you're geographically distributed, looking at diffs on a screen while talking with someone else looking at the same would be relatively easy. That covers two people looking at changes. For a larger group who have read the code in question, multiple sites isn't a lot harder than all in one room. Multiple rooms linked by shared computer screens and squak boxes work very well, IMHO. The more sites, the more meeting management is needed. A manager as facilitator can earn their keep here. Remember to keep polling the sites you're not at. At one point, the same organization had automated unit testing which was used as regression testing. That was really nice. Of course we then changed platforms and automated test got left behind. Review is better, as the Agile Manifesto notes, relationships are more important than process or tools . But once you've got review, automated unit tests/regression tests are the next most important help in creating good software. If you can base the tests on requirements , well, like the lady says in "When Harry Met Sally" , I'll have what she's having! All reviews need to have a parking lot to capture requirements and design issues at the level above coding. Once something is recognized as belonging in the parking lot, discussion should stop in the review. Sometimes I think code review should be like schematic reviews in hardware design- completely public, thorough, tutorial, the end of a process, a gateway after which it gets built and tested. But schematic reviews are heavyweight because changing physical objects is expensive. Architecture, interface and documentation reviews for software should probably be heavyweight. Code is more fluid. Code review should be lighter weight. In a lot of ways, I think technology is as much about culture and expectation as it is about a specific tool. 
Think of all the " Swiss Family Robinson "/ Flintstones / McGyver improvisations that delight the heart and challenge the mind. We want our stuff to work . There isn't a single path to that, any more than there was "intelligence" which could somehow be abstracted and automated by 1960s AI programs. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/120244",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
120,305 | We have been slowly replacing batch command files (windows .bat) which were simply jarring up the classes compiled in the developers IDE, with more comprehensive Ant builds (i.e. get from CVS, clean compile, jar, archive, email, etc.) I've spent a lot of time learning (and debugging issues) with Ant, so I'm most comfortable using it for these tasks. But I wonder if Ant is still in as wide usage as it was when I first started learning, or whether "the world has moved on" to something newer (and maybe slicker). (I've started to see more Maven build stuff distributed, which I've never used, for example.) The practical import of this question, is whether I push new developers to learn Ant , or whether they should be learning something else for builds? I'm never too on top of the trends, so it would be great to hear from other Java developers what they think is the best build tool, and what they think new developers should be learning. | I agree with others here that Maven seems to have taken over most significant projects that I've looked at. While Ant is highly flexible, the build file is not standardized, so when you move to a new project or company, the targets are named differently, the file is structured differently, the inter-target dependencies may or may not be established, etc. With Maven, you also get the benefit of not having to carry binary dependencies (I'm talking about jars) around in your SCM system. Lots of other great Java tools know how to read a Maven POM file (the benefit of standardization), so tools like IDEs can set up a Maven project very quickly, and build tools like Jenkins can easily execute Maven builds. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/120305",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/25612/"
]
} |
120,321 | I had a very stimulating and interesting discussion with a colleague about ORM and its pros and cons. In my opinion, an ORM is useful only in the rarest cases. At least in my experience. But I don't want to list my own arguments at this time. So I ask you, what do you think about ORM? What are the pros and the cons? | There is a fairly large and varied set of conceptual and technical difficulties when trying to approach a relational database from an object-oriented angle. These difficulties are collectively known as object-relational impedance mismatch and the related Wikipedia article is extremely informative. The article identifies quite a few; I don't see any sensible way of describing them here. Just to give you a general idea, they are catalogued as: Mismatches Object-oriented concepts Data type differences Structural and integrity differences Manipulative differences Transactional differences Solving impedance mismatch Minimization Alternative architectures Compensation Contention Philosophical differences I think if you take the time to read the article you'll understand that the fact that ORM is sometimes described as an anti-pattern is in fact inevitable. The two domains are so different that any approach to treat one as the other is by default an anti-pattern, in the sense that an anti-pattern is a pattern that goes against the philosophy of a domain. But I don't think the term should apply to anything that essentially acts as a bridge between two vastly different domains. Labelling a pattern as an anti-pattern makes sense only within its domain. So the question of whether it's an anti-pattern or not is irrelevant. But is it useful? Yes. ORM is one of the most useful anti-patterns out there. You will understand why only if you find yourself in a practical situation where you'll have to swap databases in a project. Or even upgrade to another version of the same database. ORM is one of those things that you only fully understand when you actually need it. Of course, as with everything useful, ORM is highly prone to abuse. If you think it somehow replaces the need to know everything about the database you work on, then it will come back and bite you. Hard. Finally, let me shamelessly plug another one of my answers, on the related "Does the ActiveRecord pattern follow/encourage the SOLID design principles?" question, which to me is a far more relevant question than "is it an anti-pattern". | {
"source": [
"https://softwareengineering.stackexchange.com/questions/120321",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/40897/"
]
} |
120,355 | I am looking for a recommendation here. I am struggling with whether it is better to return NULL or an empty value from a method when the return value is not present or cannot be determined. Take the following two methods as examples: string ReverseString(string stringToReverse) // takes a string and reverses it.
Person FindPerson(int personID) // finds a Person with a matching personID. In ReverseString() , I would say return an empty string because the return type is string, so the caller is expecting that. Also, this way, the caller would not have to check to see if a NULL was returned. In FindPerson() , returning NULL seems like a better fit. Regardless of whether or not NULL or an empty Person Object ( new Person() ) is returned the caller is going to have to check to see if the Person Object is NULL or empty before doing anything to it (like calling UpdateName() ). So why not just return NULL here and then the caller only has to check for NULL. Does anyone else struggle with this? Any help or insight is appreciated. | StackOverflow has a good discussion about this exact topic in this Q&A . In the top rated question, kronoz notes: Returning null is usually the best idea if you intend to indicate that
no data is available. An empty object implies data has been returned, whereas returning null
clearly indicates that nothing has been returned. Additionally, returning a null will result in a null exception if you
attempt to access members in the object, which can be useful for
highlighting buggy code - attempting to access a member of nothing
makes no sense. Accessing members of an empty object will not fail
meaning bugs can go undiscovered. Personally, I like to return empty strings for functions that return strings to minimize the amount of error handling that needs to be put in place. However, you'll need to make sure that the group you're working with will follow the same convention - otherwise the benefits of this decision won't be achieved. That said, as the poster in the SO answer noted, nulls should probably be returned if an object is expected so that there is no doubt about whether data is being returned. In the end, there's no single best way of doing things. Building a team consensus will ultimately drive your team's best practices. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/120355",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/18497/"
]
} |
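A minimal Java sketch of the two conventions discussed in the answer above (the original question is C#; the class, method, and field names here are made up purely for illustration). The text-returning method falls back to an empty string so callers can use the result directly, while the lookup method returns null so the absence of a Person is explicit at the call site; newer Java code often wraps such lookups in Optional instead.
import java.util.HashMap;
import java.util.Map;

public class ReturnConventions {

    static class Person {
        final int id;
        final String name;
        Person(int id, String name) { this.id = id; this.name = name; }
    }

    static final Map<Integer, Person> PEOPLE = new HashMap<>();

    // Empty-string convention: callers of text-returning methods never need a null check.
    static String reverseString(String s) {
        if (s == null || s.isEmpty()) {
            return "";
        }
        return new StringBuilder(s).reverse().toString();
    }

    // Explicit-absence convention: null (or Optional in newer code) signals "no such person".
    static Person findPerson(int personId) {
        return PEOPLE.get(personId); // may be null - the caller decides what "not found" means
    }

    public static void main(String[] args) {
        System.out.println("[" + reverseString(null) + "]"); // prints [] - safe to use directly
        System.out.println(findPerson(42) == null);          // prints true - absence is explicit
    }
}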
120,477 | A fellow developer has started work on a new Drupal project, and the sysadmin has suggested that they should only put the sites/default subdirectory in source control, because it "will make updates easily scriptable." Setting aside that somewhat dubious claim, it raises another question -- what files should be under source control? And is there a situation where some large chunk of files should be excluded? My opinion is that the entire tree for the project should be under control, and this would be true for a Drupal project, rails, or anything else. This seems like a no-brainer -- you clearly need versioning for your framework as much as you do for any custom code you write. That said, I would love to get other opinions on this. Are there any arguments for not having everything under control? | I would say that the minimum that source control should contain is all of the files necessary to recreate a running version of the project. This even includes DDL files to set up and modify any database schema, and in the correct sequence too. Minus, of course, the tools necessary to build and execute the project as well as anything that can be automatically derived/generated from other files in source control (such as JavaDoc files generated from the Java files in source control). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/120477",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/37977/"
]
} |
120,716 | I learned REST and it feels a lot like CRUD (from what I have read about CRUD). I know they are different, and I wonder if thinking they are similar means I don't understand them. Is it that REST is a "superset" of CRUD? Does it do everything CRUD does and more? | Surprisingly, I don't see in the other answers what I consider the real difference between REST and CRUD: what each one manages. CRUD means the basic operations to be done in a data repository. You directly handle records or data objects; apart from these operations, the records are passive entities. Typically it's just database tables and records. REST, on the other hand, operates on resource representations, each one identified by a URL. These are typically not data objects, but complex object abstractions. For example, a resource can be a user's comment. That means not only a record in a 'comment' table, but also its relationships with the 'user' resource, the post that comment is attached to, maybe another comment that it responds to. Operating on the comment isn't a primitive database operation; it can have significant side effects, like firing an alert to the original poster, or recalculating some game-like 'points', or updating some 'followers stream'. Also, a resource representation includes hypertext (check the HATEOAS principle), allowing the designer to express relationships between resources, or guiding the REST client in an operation's workflow. In short, CRUD is a set of primitive operations (mostly for databases and static data storages), while REST is a very high-level API style (mostly for web services and other 'live' systems). The first one manipulates basic data; the other interacts with a complex system. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/120716",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/41133/"
]
} |
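A small Java sketch of the distinction drawn in the answer above; every class and method name here is hypothetical, invented only for illustration. The CRUD call just writes a record, while the resource-level operation wraps that same write in the surrounding behaviour the answer mentions (alerting the original poster, updating a followers stream) and hands back a URL identifying the new representation.
import java.util.ArrayList;
import java.util.List;

public class CommentExample {

    static class Comment {
        final String author;
        final String body;
        Comment(String author, String body) { this.author = author; this.body = body; }
    }

    // CRUD view: the comment is a passive record in a table.
    static class CommentTable {
        private final List<Comment> rows = new ArrayList<>();
        void insert(Comment c) { rows.add(c); } // a bare write; nothing else happens
        int size() { return rows.size(); }
    }

    // REST-ish view: posting a comment is an operation on a resource, with side effects.
    static class CommentResource {
        private final CommentTable table = new CommentTable();

        String post(Comment c) {
            table.insert(c);                    // the CRUD step is only one part of the operation
            notifyOriginalPoster(c);            // side effect: fire an alert
            updateFollowerStream(c);            // side effect: refresh feeds
            return "/comments/" + table.size(); // a URL identifying the new representation
        }

        private void notifyOriginalPoster(Comment c) { /* placeholder for illustration */ }
        private void updateFollowerStream(Comment c) { /* placeholder for illustration */ }
    }

    public static void main(String[] args) {
        CommentResource resource = new CommentResource();
        System.out.println(resource.post(new Comment("alice", "Nice post!"))); // prints /comments/1
    }
}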
120,872 | Here's the problem I'm facing: Quote From Project Manager: Hey Spark, I'm assigning you the task of developing a framework that could be used for many different iOS applications. Here are the requirements: It should be able to detect the thickness of the thumb or fingers being used to manipulate the UI. With this information, all elements of the UI should be arranged & sized automatically . For a larger thumb, elements should be arranged nearer the center of the screen. For a smaller thumb, elements should be arranged nearer the corners of the screen. For a larger thumb, all fonts should be smaller. (We're assuming an adult in this case.) For a smaller thumb, all fonts should be larger. (We're assuming a younger person in this case.) Summary: This framework is required for creating user-friendly user interfaces programmatically. The framework should be developed in such a way that we can use for as many projects as needed, so it must also be very developer-friendly. I am the developer given this task, so my questions are as follows: How can I explain that these requirements are a little ridiculous? How can I explain that it would be better to concentrate on developing actual projects? How can I explain that even if this were possible, I wouldn't recommended developing such a thing? How do I say NO to this project politely, gently, and respectfully? How can I explain that even for a developer with 3 years of experience, this might not be possible? | If you get a set of requirements that are physically impossible to implement as the device does not support and cannot support the wanted functionality, you need to explain this to the person creating the requirements. You should be respectful and explain why the requirements are not possible to implement (i.e. the touch screen cannot distinguish between a thumb, finger or stylus. It does not have enough resolution in order to detect finger width.) - keep things factual , pointing to existing documentation if there is any. Do not go into any sort of emotional argument and keep cool and professional. Saying to anyone that their requirements are silly is never a winning strategy. See if you can get an understanding of the actual goals for the feature - why it is seen as a requirement. This could lead you to a different, better feature that will solve the need. (thanks @spoike) @DarkStar33 suggests in the comments to do the research and provide an actual estimate of how much the project will cost and how long it will take, with the assumption that the result will be too expensive and lengthy to be worth it. Being armed with numbers and the data to back them up can certainly help your case, though I would still look at the business goals to see if they can be met (even partially) in another fashion. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/120872",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/11330/"
]
} |
120,956 | In the coming months we're going to begin a project where we take a system we've built for a client (v1) and rebuild it from scratch. Our goal with v2 is to make it modular, so that this specific client will have their own set of modules they use, then another client may use a different set of modules altogether. The trick here is that Company A might have a series of checkout and user modules that change how that system works. Company B might stick with the standard checkout procedure but customize how products are browsed. What are some good approaches to application architecture when you're building an application from scratch that you want to have a Core that's shared among all clients while still maintaining the flexibility for anything to be modified specifically for a client? I've seen CodeIgniter's hooks and don't think that's a good solution as we could end up with 250 hooks and it's still not flexible enough. What are some other solutions? Ideally we won't need to draw a line in the sand. | To achieve highly organized and decoupled modularity, you can follow the Hierarchical MVC architectural pattern, sometimes known as Presentation–abstraction–control (although they aren't strictly the same pattern). Kohana, Alloy, Fluency and FuelPHP support HMVC natively* and Kohana's HMVC approach is discussed in Scaling Web Applications with HMVC and Optimising HMVC Web Applications for Performance, by Sam de Freyssinet. Unfortunately CodeIgniter doesn't support HMVC natively. I've built my own libraries to provide some sort of HMVC support on CodeIgniter, taking some inspiration from wiredesignz's codeigniter-modular-extensions-hmvc. There's a very nice introduction to HMVC article at Nettuts+ that discusses CodeIgniter and wiredesignz's extension. The following image and quote are from that tutorial: Each triad functions independently from one another. A triad can request access to another triad via their controllers. Both of these points allow the application to be distributed over multiple locations, if needed. In addition, the layering of MVC triads allows for a more in-depth and robust application development. This leads to several advantages which brings us to our next point. Finally, you are on the right track with hooks; even if you adopt an HMVC architecture, there are going to be some problems you'll still need to solve with hooks, depending on your implementation and the level of automation that you are looking for. A good use of hooks would be a pre_controller hook that would make sure all dependencies for installed modules are present, for example. * There may be others I don't know of. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/120956",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9348/"
]
} |
121,001 | Code Retreat is an all-day training event that focuses on the fundamentals of software development. There's a "global" code retreat day coming up, and I'm looking forward to it. That said, I've been to one before and have to say there was a huge amount of chaos... which is fine. One thing that I still don't get is why the "Game of Life" is a good problem for TDD, and what good and bad TDD for it feels like. Realize this is a pretty open-ended question, so feel free to comment. | Originally, Conway's Game of Life was chosen because we had a Java applet on hand to work on at the very first coderetreat in January of 2009. The goal of the day was to experiment with some ideas around time-boxed practice, and we just chose the GoL applet because we had it. After that, though, a couple of active facilitators (notably me during my journeyman tour in 2009 and Alex Bolboaca in Bucharest) investigated the uses of GoL as a learning tool. At the same time we were evolving the coderetreat format to what it has become today. In 2009, Alex tried at least one other problem (poker hand scoring), but didn't find it as useful as GoL. You can find more about the history at http://coderetreat.org/history Coderetreat focuses on improving our understanding of simple design (specifically the 4 rules of simple design), test-driven development and other fundamental aspects of software development. GoL has the benefit of being a very simple problem to understand while still being very rich from a structural perspective. It readily provides parts of the system that can be used as examples of all the topics we practice at coderetreat. For example, a common implementation that takes (x,y) parameters in multiple methods is a great opportunity to talk about the DRY principle (every piece of knowledge should have one and only one representation in your system) with regards to the topology of the system. There are a lot of other aspects which can be used as examples of building a design that minimizes the cost of change. There are quite a few people now who have done multiple coderetreats, and they still find interesting aspects of the problem to use as practice. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/121001",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4942/"
]
} |
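To make the (x,y)/DRY remark in the answer above concrete, here is a small Java sketch with invented names: once a Cell value object owns the idea of "what is next to me", the board topology is written down in exactly one place instead of being repeated in every method that passes coordinate pairs around.
import java.util.ArrayList;
import java.util.List;

// One representation of a cell's position and neighbourhood (equals/hashCode omitted for brevity).
public final class Cell {
    final int x;
    final int y;

    Cell(int x, int y) {
        this.x = x;
        this.y = y;
    }

    // The topology lives here and only here; rules code talks about cells, not (x, y) pairs.
    List<Cell> neighbours() {
        List<Cell> result = new ArrayList<>();
        for (int dx = -1; dx <= 1; dx++) {
            for (int dy = -1; dy <= 1; dy++) {
                if (dx != 0 || dy != 0) {
                    result.add(new Cell(x + dx, y + dy));
                }
            }
        }
        return result;
    }
}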
121,121 | Let's say I have a method: public void DoSomething(ISomeInterface someObject)
{
if(someObject == null) throw new ArgumentNullException("someObject");
someObject.DoThisOrThat();
} I've been trained to believe that throwing the ArgumentNullException is "correct" but an "Object reference not set to an instance of an object" error means I have a bug. Why? I know that if I was caching the reference to someObject and using it later, then it's better to check for nullity when passed in, and fail early. However, if I'm dereferencing it on the next line, why are we supposed to do the check? It's going to throw an exception one way or the other. Edit : It just occurred to me... does the fear of the dereferenced null come from a language like C++ that doesn't check for you (i.e. it just tries to execute some method at memory location zero + method offset)? | I suppose if you are immediately dereferencing the variable, you could debate either way, but I would still prefer the ArgumentNullException. It is much more explicit about what is going on. The exception contains the name of the variable that was null, whereas a NullReferenceException does not. Particularly at the API level, an ArgumentNullException makes it clear that some caller did not honor the contract of a method, and the failure was not necessarily a random error or some deeper issue (though that may still be the case). You should fail early. In addition, what are later maintainers more likely to do? Add the ArgumentNullException if they add code before the first dereference, or simply add the code and later on allow a NullReferenceException to be thrown? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/121121",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5512/"
]
} |
121,128 | I am interested in learning concurrent programming, focusing on the application/user level (not system programming). I am looking for a modern high level programming language that provides intuitive abstractions for writing concurrent applications. I want to focus on languages that increase productivity and hide the complexity of concurrent programming. To give some examples, I don't consider a good option writing multithreaded code in C, C++, or Java because IMHO my productivity is reduced and their programming model is not intuitive. On the other hand, languages that increase productivity and offer more intuitive abstractions such as Python and the multiprocessing module, Erlang, Clojure, Scala, etc. would be good options. What would you recommend based on your experience and why? EDIT: Thanks everybody for your interesting answers. It's hard to make a conclusion without actually trying since there are many good candidates: Erlang, Clojure, Scala, Groovy, and perhaps Haskell. I voted the answer with the most convincing arguments, but I'll try all the good candidates before deciding which one to pick :) | You almost certainly should look at Clojure - in my opinion it's the best modern language for multi-core programming and it is extremely productive. Key attributes: It's a functional language , which is a boon for both concurrency and your ability to develop using higher level abstractions. It features fully immutable persistent data structures and lazy sequences which will be familiar to anyone with experience in functional languages like Haskell. It features a very novel software transactional memory system for concurrent lock-free access to mutable state. Making code concurrency-safe is often as simple as wrapping it in a (dosync....) block. It's a Lisp - which makes it extremely powerful for macro-based metaprogramming and code generation. This can bring significant productivity advantages (essay by Paul Graham - "Beating The Averages") It's a JVM language - so not only do you get access to the huge array of libraries and tools in the Java ecosystem, you also benefit from the huge engineering effort that has gone into making the JVM an effective platform for concurrent server-side applications. For practical purposes, this gives it a huge advantage over languages that don't have this kind of foundation to build upon. It's dynamic - which results in very concise code and a lot of productivity. Note however that you can use optional static type hints for performance if needed. The language is designed around abstractions which is somewhat hard to explain but the net effect is that you get a set of relatively orthogonal features that you can combine to solve your problems. An example would be the sequence abstraction, which enables you to write code that deals with every "sequential" type of object (which includes everything from lists, strings, Java arrays, infinite lazy sequences, lines being read from a file etc.) There's a great community - helpful, insightful but most importantly very pragmatic - the focus in Clojure is generally on "getting things done". Some mini code samples with a concurrency slant: ;; define and launch a future to execute do-something in another thread
(def a (future (do-something)))
;; wait for the future to finish and print its return value
(println @a)
;; call two functions protected in a single STM transaction
(dosync
(function-one)
(function-two)) In particular, it's worth watching one or more of these videos: http://www.infoq.com/presentations/Value-Identity-State-Rich-Hickey http://www.infoq.com/presentations/Are-We-There-Yet-Rich-Hickey http://www.infoq.com/presentations/Clojure-The-Art-of-Abstraction | {
"source": [
"https://softwareengineering.stackexchange.com/questions/121128",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/21030/"
]
} |
121,187 | Are there any cons to making your employees use open-source programs in your company? I am planning to start a business and I wonder why companies usually work with proprietary software, such as Microsoft Word, to quote the most famous one. Why don't they use Open Office (or Libre Office) etc.? From my point of view, you can save a lot of money and help the open-source community by, for instance, giving them part of your benefits in the form of donations. I do not know any (medium-big) company that does this. Probably you could give me some examples, just to prove that this model of open-source usage/collaboration works (rocks, even). | I discourage setting a hard "open-source only" rule. There are so many criteria involved in selecting software and it is a mistake to always base a decision on just one factor. Let's assume that you will be employing, in addition to technical programmers, several non-technical staff members. Secretaries, accountants, human resources, managers, etc. How much time will be wasted when they try to learn Linux and Open Office? Especially when you're hiring business people who have spent half their lives mastering Excel, do you really want those skills to go to waste? All other things being equal, I would choose open source every time. All other things are never equal. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/121187",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/40946/"
]
} |
121,224 | I often see people using the term binaries in different contexts. What are binaries? A collection of binary files, installation files, .dll files, or what? Or is it just a general term for some collection of files on disk? | Binary means composed of two pieces or two parts, and it may refer to different things in different fields such as mathematics, computing, and science. But in computing, binaries refers to: Binary file, composed of something other than human-readable text Executable, a type of binary file that contains machine code for the computer to execute Binary code, the digital representation of text and data | {
"source": [
"https://softwareengineering.stackexchange.com/questions/121224",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/27133/"
]
} |
121,289 | I've been tasked with developing requirements and specifications for a project our group is starting. I realized that I don't know the difference; a Google search just confused me more -- it seems some people say that specifications are requirements, but at a lower level. | The sound-bite answer is that requirements are what your program should do, the specifications are how you plan to do it. Another way to look at it is that the requirements represent the application from the perspective of the user, or the business as a whole. The specification represents the application from the perspective of the technical team. Specifications and requirements roughly communicate the same information, but to two completely different audiences. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/121289",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
121,328 | I read some code of a colleague and found that he often catches various exceptions and then always throws a 'RuntimeException' instead. I always thought this is very bad practice. Am I wrong? | I do not know enough context to know whether your colleague is doing something incorrectly or not, so I am going to argue about this in a general sense. I do not think it is always an incorrect practice to turn checked exceptions into some flavor of runtime exception . Checked exceptions are often misused and abused by developers. It is very easy to use checked exceptions when they are not meant to be used (unrecoverable conditions, or even control flow). Especially if a checked exception is used for conditions from which the caller cannot recover, I think it is justified to turn that exception to a runtime exception with a helpful message/state. Unfortunately in many cases when one is faced with an unrecoverable condition, they tend to have an empty catch block which is one of the worst things you can do. Debugging such an issue is one of the biggest pains a developer can encounter. So if you think that you are dealing with a recoverable condition, it should be handled accordingly and the exception should not be turned into a runtime exception. If a checked exception is used for unrecoverable conditions, turning it into a runtime exception is justified . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/121328",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/13137/"
]
} |
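A minimal Java sketch of the advice in the answer above, wrapping a checked exception only because the condition is unrecoverable; the file-reading scenario and the exception name are assumptions made for illustration. The two details that matter are that the original exception is preserved as the cause and that the unchecked type carries a more specific message than a bare RuntimeException.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// A named unchecked exception reads better in logs than a plain RuntimeException.
class ConfigurationLoadException extends RuntimeException {
    ConfigurationLoadException(String message, Throwable cause) {
        super(message, cause);
    }
}

class ConfigLoader {
    String readConfig(Path path) {
        try {
            return Files.readString(path); // declares the checked IOException
        } catch (IOException e) {
            // The caller cannot sensibly recover from an unreadable configuration file,
            // so rethrow unchecked and keep the original exception as the cause.
            throw new ConfigurationLoadException("Could not read configuration at " + path, e);
        }
    }
}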
121,465 | I've recently quit my full time developing job at mega-corp, and I decided that I'll look for a part time job. Since then I've talked to half a dozen potential employers, and every one of them had the same reaction when I said the magic words "part-time" - they all closed up and became suspicious. Now, I understand that it might just be me, so as control I asked every one of them what if I were willing to work full time, and they all said I would probably get an offer. My question is two fold: Why, as an employer, would you give up a competent, even great, developer, simply because he wants to work 3 days a week and not 5? How do I sell the story of part time job better? I usually just list my reasons which are that I prefer that balance currently in my life and that I want to work on my own projects, but it leaves them even more suspicious - am I going to start something myself and quit? Am I just lazy? | Why, as an employer, would you give up a competent, even great, developer, simply because he wants to work 3 days a week and not 5? More than one reason (all argued from the point of view of an employer): As Fred Brooks argues in the book The Mythical Man-Month , the efficiency of a team goes down as the team size grows, because the amount of communication grows faster than linear with the team size. So N full-time developers are far more effective than 2N part-time developers, at the same cost. If the developer is working on some important system, you want to be able to reach her at least during normal business hours. A full time employee spends only eight hours at the office five days a week, but his mind is really working for the company 24 hours a day, seven days a week. That's why you sometimes wake up in the morning with the solution to a problem that's been bothering you for days - your mind doesn't stop working the moment you leave the office. For a part-time employee, I would fear the opposite: Instead of thinking about his day-job at home, I'd guess she'd think about her private problems at work. How do I sell the story of part time job better? Actually, I think the employers are mostly right, so I don't think you can "sell" it much better. But you could find a small company that doesn't have enough work for a full-time employee. They might be interested in hiring you part-time. Jobs like that probably wouldn't be very glamorous (or well-paid), though. EDIT :
Your comments suggest that you don't have much working experience and that you can't imagine spending 40 hours per week at work. I can totally relate to that; sitting 40 hours a week in the cubicle next to Dilbert and Wally does sound like a horrible prospect. If that's reasonably close, forget my advice about looking for a job at a company that doesn't have enough work for a full-time employee. These jobs certainly exist, but you won't learn much there (if they have only one part-time developer, who would teach you anything?), and they don't look very good on a resume (why did he start his career doing that? Couldn't he find a full-time job?). They'd probably be rather boring jobs, too, and IMHO spending 20 hours a week at a boring job is worse than spending 40 hours a week working on something you care about. Instead try to find a full-time job where you get to build something interesting, where you like your co-workers and where you genuinely like going to work each morning. These jobs really are out there, and they're easier to find than part-time positions. There's still enough time to play StarCraft at the weekends ;-) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/121465",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/41440/"
]
} |
121,555 | Trailing whitespace is enough of a problem for programmers that editors like Emacs have special functions that highlight it or get rid of it automatically, and many coding standards require you to eliminate all instances of it. I'm not entirely sure why though. I can think of one practical reason of avoiding unnecessary whitespace, and it is that if people are not careful about avoiding it, then they might change it in between commits, and then we get diffs polluted with seemingly unchanged lines, just because someone removed or added a space. This already sounds like a pretty good reason to avoid it, but I do want to see if there's more to it than that. So, why is trailing whitespace such a big deal? | Reasons that it's important to me: When I hit the "End" key, I expect the cursor to go to the end of the line (depending on what editor I'm using and how it's configured) and I expect to be able to start typing right away. If the cursor gets there and I have to delete 3 or 4 characters, it's annoying. Especially if it's inconsistent throughout the file(s). When I have string literals that span multiple lines, trailing whitespace can make the output string look incorrect when it's used. While not strictly programming, whitespace can seriously mess up data entry, if there's trailing/leading whitespace in a file that will be parsed and used as input to something else. This happens the most when a clean, generated input file gets dirtied by someone editing it in Excel or something and then trailing whitespace (and other formatting issues) can creep in. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/121555",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/92/"
]
} |
121,653 | I need to learn how databases work in order to use them more efficiently, and my way of learning is by doing. I want to create my own database system. I am not referring to creating a pseudo-database that would use queries to parse files; this would simply be a filesystem interface with a query language. I am talking about the actual structure of a database engine. And since what I have in mind is neither relational nor document-oriented (it's "node-oriented", if that even exists), I would need any resource to be as abstract and high-level as possible. So how would I go about creating that? What resources/tutorials/books can I read to understand? The language does not matter in the slightest. Ideally, the code would be pseudo-code to illustrate the concept, not tied to a particular language, but anything would do. I was not able to find anything on the matter on Google (since I am so illiterate on the subject, maybe I am just not entering the right search). If such resources are not available, then I guess something about how to create a client would at least be a step in the right direction. | (it's "node-oriented", if that even exists) Start here. When dealing with a complex application like a database (even a simple database is a complex application), you should be familiar with the history of the domain and the proper terminology and have at least a very high level idea of the architecture. You could start from the Wikipedia article on Database. Spend a few days reading all the articles on the related concepts and the different database types. And since what I have in mind is neither relational nor document-oriented Next, you pick Relational or NoSQL. If you pick NoSQL, you should pick one type of NoSQL. That's extremely important; you won't find any architectural documents that discuss all different database families. It doesn't really matter which one you pick, just pick one and stick with it. The language does not matter in the slightest. Yes it does (unfortunately), because after you pick a database family you should start exploring code from open source databases of that family. There are a few generic guidelines on what to look for: Relatively small codebase, Architectural documents or at least a development blog, The database you pick should be close to what's considered generic in the family; it'd be harder to learn from if it's highly specialised. A few examples that fit: redis, flockdb, and RavenDB. Get the source, compile it and play around with it. You don't have to submit patches or anything that fancy, just explore the code and make small alterations here and there to see what happens. It's an incremental process; the more you play around with it the easier it'll be to understand what the code does. If the first project you picked seems extremely hard to understand, just move on to the next one. Another great option would be to concentrate on building an engine for MySQL, as @N.B. suggests in an earlier answer. If you do reach a point where you are able to do something useful with the codebase, get involved in the project's community; that's the easiest way to find more detailed resources on the concepts involved. And then, finally, start working on your database. At first you could just write up an extremely scaled-down clone of the code you've been exploring. It doesn't have to be original, quite a few great projects started out as clones or forks. What resources/tutorials/books can I read to understand?
There are quite a few books: Handbook of Relational Database Design Oracle Database 11g R2: Architecture & Internals Object-Oriented Database Systems: Concepts and Architectures Time-Constrained Transaction Management: Real-Time Constraints in Database Transaction Systems Understanding MySQL Internals Object Management in Distributed Database Systems for Stationary and Mobile Computing: A Competitive Approach Real-Time Database Systems - Architecture and Techniques Architecture of a Database System Foundations of Databases: The Logical Level Readings in Database Systems And a few hundred others, plus a myriad of academic papers you could easily trace via Google. You need to define what you want to do first, and then search for a book. Getting involved with a community of fellow database authors will also help you narrow down the list of books and perhaps get a lot better suggestions than the above. Good luck! I'm expecting a comment with a link to your repository when you're done. And if you're never done, make sure you leave a comment reminding me that I still haven't finished that compiler I started writing in 2001. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/121653",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/25092/"
]
} |
121,730 | I'm a physicist with a CS degree and just started my PhD at a tech company (wanted to do applied research). It deals with large scale finite element simulations. After reviewing their current approach, I think that a radically different method has to be applied (they are using a commercial tool which is very limited). I'd rather base my research on an open source finite element solver and write a program which makes use of it. I'd like to develop this idea in the evenings, because that's the time that best suits me for programming (during the day I prefer reading and maths) and use it at a late stage of my PhD. I'd like to have the option to release my program as open source on my website as a reference, for future personal or even commercial (e.g. consulting) use. How can I make sure that my company doesn't claim the code ownership? I thought that a version control system could help (check out only in the evening). This would document that I programmed not during regular office hours (documented elsewhere). But these data can be easily manufactured. Any other ideas? I want to stress that I'm not interested in selling software and neither is my company. Very interesting responses so far. This clearly helps me. Some remarks: I'm not restrained by my work contract. National law says that the company owns anything I produce during working hours and no special agreement has been made (my employer is not selling software and may be a bit naive on this side). They mostly use software and non of my colleagues is a serious programmer. Secondly, I need to rethink the point raised by @Mark about trade secrets. This is quite serious in the particular industry. Thirdly, I care a lot about no to upset my supervisor/ boss. But, and this is the motivation for this question, I'd like to keep the innovative part of my work a bit separated so I can reuse it or at least demonstrate it as a reference work. | Don't listen to anyone who says "your own time is your own time, just don't tell anyone!" because that's incredibly bad advice that is almost certain to land you in trouble, if not at your current job then at some future one. Not only do employment contracts vary too widely and significantly for any kind of generic advice to be useful, but different countries (including in the EU) or even different states within a single country (US) have different rules regarding how much of your work an employer owns, and even if you think you're on safe ground you can still get sued regardless, depending on how annoyed your employer is. Who has the deeper pockets for legal fees, you or your employer? I thought so. Get permission first and get it in writing , so that your butt is covered should it become an issue later (even years later, with some completely different set of management who suddenly freaks out over what the previous management was totally relaxed about - you can't predict the future!) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/121730",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/34469/"
]
} |
121,888 | I'm looking to create a game in Java and would like it to work on Windows, Linux, and Mac. I'm pretty sure C# is a bad choice for this, and I don't have enough experience in C or C++. I want to stay away from Flash. Therefore, is Java a good choice for me? Mostly, I use C#, and think that Java is similar, so I assume it won't be that hard to learn. But is it fast enough? Is there a language more suited for my needs than Java? | Java is extremely suitable for writing cross-platform games. Main advantages: Portability - In general, you can write a Java game and expect it to run unchanged on most platforms. It's probably the most portable option of any language - C/C++ is the other highly portable option but needs to be recompiled for each platform and in many cases libraries have platform specific features that limit portability. Performance - Java code, if written well, will perform pretty much as well as any other language, including C/C++. The JVM JIT compiler is extremely good. You can write a top quality, successful game in Java (Minecraft, for example). Libraries - there are a huge range of libraries available for Java that cover almost every feature you could want in games, from networking to graphics to sound to AI. Most Java libraries are open source. The main decision you will have to take is what GUI framework you are going to use. There are quite a few different options but the most prominent ones are: jMonkeyEngine - fully fledged 3D engine. If you want to make a 3D game this is probably your best choice - contains lots of game engine features such as scene graphs, terrain generation etc. LWJGL - a more low-level library with direct access to OpenGL. Likely to appeal to you if you want maximum performance and don't mind writing a lot of your engine from scratch. Swing - has the advantage of being extremely portable and is included in the Java runtime so doesn't need an extra dependency. It's good for non-graphically-intensive 2D games (strategy games, card games etc.) Slick - a 2D game library based on LWJGL. Probably good if you want to write a 2D game but still need good graphics performance (shoot-em-ups, scrolling platform games etc.) JavaFX - designed for rich internet applications, roughly like Flash. Has a lot of neat features that would be good for games although I haven't seen it used much yet. JavaFX 2.0 in particular is looking quite promising. The main disadvantages for Java for gaming are really around the "edge cases" that probably won't affect you but are relevant for some classes of game: 3D engine availability - although the tools and engines listed above are good, they still aren't quite up to the level of C/C++ engines like the Unreal Engine used by professional game companies. So Java is possibly not goingto be your first choice if you are are trying to develop a major FPS with a multi-million budget - C/C++ still wins out here. GC latency Java garbage collection is overall a huge benefit, but it can cause slight pauses when GC cycles happen. This is getting much better with new low-latency JVMs, but still can be an issue for games with very low latency requirements (first person shooters perhaps). A workaround is to use low latency libraries like http://javolution.org/ , however these seem to be targetted more at high frequency trading or realtime systems rather than games. 
Lack of ability to exploit low level optimisations - while the Java JIT compiler is incredibly good, it still enforces some constraints that you can't avoid (bounds checking on arrays, for example). If you really need to get native machine-code level access to optimise this kind of thing away, then again you will probably prefer C/C++ over Java. Note that there are also a few deployment options to consider: Applet - runs in a browser, very convenient for users, however applets are rather restricted in what they can do for security reasons. Note that you can sign applets to get extra security privileges, although this will cause a slightly scary prompt for most users. Java Web Start - better for more sophisticated games that need a full local download and also need to access local system resources. It also works in a pretty platform-independent way. Probably the best route for a medium-sized game or something that needs to escape applet security restrictions. Installer download - You can write an installer for a Java game just as you could for any other language. It's a bit more work to configure and test of course, since installers tend to have some platform-specific features. Web - you could write a HTML5 web application and make use of the strength of Java purely on the server side. Worth considering for a multiplayer web game. Finally, it's also worth considering some of the other JVM languages - these have all the benefits of the Java platform listed above, but some consider them to be better languages than Java itself. Scala, Clojure and Groovy would be the most prominent ones, and they can all make use of the Java tools and libraries listed above. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/121888",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/41599/"
]
} |
121,998 | My understanding is that: MIT -licensed projects can be used/redistributed in BSD -licensed projects. BSD-licensed projects can be used/redistributed in MIT-licensed projects. The MIT and the BSD 2-clause licenses are essentially identical . BSD 3-clause = BSD 2-clause + the "no endorsement" clause Issuing a dual license allows users to choose from those licenses—not be bound to both. If all of the above is correct, then what is the point of using a dual MIT/BSD license? Even if the BSD refers to the 3-clause version, then can't a user legally choose to only abide by the MIT license? It seems that if you really want the "no endorsement" clause to apply then you have to license it as just BSD (not dual). If you don't care about the "no endorsement" clause, then MIT alone is sufficient and MIT/BSD is redundant. Similarly, since the MIT and BSD licenses are both " GPL-compatible " and can be redistributed in GPL -licensed projects, then dual licensing MIT/GPL also seems redundant. | My understanding is that: MIT-licensed projects can be used/redistributed in BSD-licensed projects. True (but unless there are modifications, the users can get it from the sources also. BSD-licensed projects can be used/redistributed in MIT-licensed projects. False The BSD 3-clause places additional restrictions on redistribution. The MIT and the BSD 2-clause licenses are essentially identical. True Although there is some ambiguity around whether some parts of the MIT license apply to binaries. BSD 3-clause = BSD 2-clause + the "no endorsement" clause True . Issuing a dual license allows users to choose from those licenses—not be bound to both. True . Similarly, since the MIT and BSD licenses are both "GPL-compatible" and can be redistributed in GPL-licensed projects, then dual-licensing MIT/GPL also seems redundant. No . Here is a major difference. MIT license and Apache License only requires that you give credit to original copyright holders. If you choose, you can redistribute source; but if you choose you can keep your new derived product without opening code. Hence, it is possible to use code developed under MIT and Apache—under a commercial license. If you ever use code with GPL-based license and happen to modify it, you must distribute your modified code as well under GPL. In other words, once any GPL codebase is used under a project, and if you want to publish that as a product, it has to be published with the source code and it has to be published under GPL. It cannot ever be a commercial license or closed source, and it cannot be any other license which is less strict than GPL. An example can take MIT, Apache or BSD license code, modified and distributed under GPL. Once a codebase is distributed as GPL, its further derived versions cannot be distributed under MIT, Apache or BSD license but must be GPL only. Edit: An example case of dual license: Suppose Nice Office is released under dual license—MIT and GPL. It has two possibilities. Some people can create NicePro Office, which can be commercial and sell. Whereas some other open source community creates a fork NiceOpen Office. In this case, it can enforce upon GPL distribution (of the original Nice Office as well as NiceOpen Office version) hence if you start with NiceOpen Office, you must comply to GPL only and not MIT license. The point is in case of dual license the first person who derives a license has a choice. He can choose either way - however, the second person needs to adhere to the choice the first person made. 
He/She cannot override the original rights of either generation and cannot in any way reduce the obligation of the applicable license. EDIT 2: Adding an interesting read - GPL and MPL licenses have a serious conflict. Read this. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/121998",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/41648/"
]
} |
122,002 | I am trying to figure out why we need URIs for XML namespaces and I cannot find a purpose for that. Can anyone enlighten me a little by showing their use in a concrete example? EDIT: Ok so for instance: I have this from w3schools <root
xmlns:h="http://www.w3.org/TR/html4/"
xmlns:f="http://www.w3schools.com/furniture">
<h:table>
<h:tr>
<h:td>Apples</h:td>
<h:td>Bananas</h:td>
</h:tr>
</h:table>
<f:table>
<f:name>African Coffee Table</f:name>
<f:width>80</f:width>
<f:length>120</f:length>
</f:table>
</root> So what should http://www.w3schools.com/furniture hold? | A namespace is a way of saying "This kind of Foo" is different from "That kind of Foo", even though they are spelled the same. Or, if you prefer, "MY kind of Foo" is different from "Everybody else's kind of Foo". The technical way of saying this is "The URI of my namespace for Foo" is different from everybody else's URI for their namespace for Foo. In other words, URIs are just strings that allow you to say so. The trick is then to say, "Hey, URLs are valid URIs", and then use a URI corresponding to a URL under your control. If everybody does that, then you avoid accidental namespace collisions. You could as well have said namespace "A" and namespace "B", but you risk that somebody else would use the same namespace too, and then your kind of Foo is not different from their Foo anymore, which is exactly what you want to avoid. You can then add additional conventions to the URLs used as URIs, for instance, that the URL must correspond to a page containing documentation or XSDs or similar, but this is not necessary. It is just convenient. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/122002",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/39219/"
]
} |
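A small Java illustration of what the URIs buy you in practice, using the document from the question above: a namespace-aware parser tells the two table elements apart purely by their namespace URI, even though the local names collide. The XML string is trimmed from the question; the parsing code is standard JAXP.
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class NamespaceDemo {
    public static void main(String[] args) throws Exception {
        String xml = "<root xmlns:h=\"http://www.w3.org/TR/html4/\" xmlns:f=\"http://www.w3schools.com/furniture\">"
                + "<h:table><h:tr><h:td>Apples</h:td></h:tr></h:table>"
                + "<f:table><f:name>African Coffee Table</f:name></f:table>"
                + "</root>";

        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setNamespaceAware(true); // without this, both tables look like the same element
        Document doc = factory.newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));

        // Same local name "table", distinguished only by the namespace URI.
        System.out.println(doc.getElementsByTagNameNS("http://www.w3.org/TR/html4/", "table").getLength());        // 1
        System.out.println(doc.getElementsByTagNameNS("http://www.w3schools.com/furniture", "table").getLength()); // 1
    }
}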
122,009 | What is the name of the pattern in which individual contributors (programmers/designers) develop an artifact whose sole purpose is to serve as a diversion, so that management can remove that feature from the final product? This is folklore I heard from an ex-colleague who used to work at a large game development company. At that company, it is well known that middle management is pressured to "give inputs" and "make changes" to the product, otherwise they risk being seen as not contributing to the project. This situation has delayed many projects because of these superfluous "management inputs". In one project at the above company, the artists and developers created a supernumerary animated character that appears in every cutscene and sticks out like a sore thumb. They designed it in such a way that it could be easily removed before the game shipped (this was when games were still sold on physical media and not as a downloadable product). Obviously the management then voted to remove the animation. On the positive side, management didn't introduce any unnecessary changes that would have delayed the project, because they had shown that they provided constructive inputs to the product. This process pattern has a name among game programmers who work in corporations, but I forgot the actual name. I believe it's duck-something. Can anybody help point out the name and perhaps some credible reference to how the pattern develops? | It's called a duck, from a legend that allegedly comes from Interplay's Battle Chess: This started as a piece of Interplay corporate lore. It was well known that producers (a game industry position, roughly equivalent to PMs) had to make a change to everything that was done. The assumption was that subconsciously they felt that if they didn't, they weren't adding value. The artist working on the queen animations for Battle Chess was aware of this tendency, and came up with an innovative solution. He did the animations for the queen the way that he felt would be best, with one addition: he gave the queen a pet duck. He animated this duck through all of the queen's animations, had it flapping around the corners. He also took great care to make sure that it never overlapped the "actual" animation. Eventually, it came time for the producer to review the animation set for the queen. The producer sat down and watched all of the animations. When they were done, he turned to the artist and said, "That looks great. Just one thing—get rid of the duck." | {
"source": [
"https://softwareengineering.stackexchange.com/questions/122009",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/21819/"
]
} |