Columns: source_id (int64, 1 to 4.64M), question (string, 0 to 28.4k chars), response (string, 0 to 28.8k chars), metadata (dict)
89,786
I have a commercial Android project that I have created a client-specific version for (mainly UI tweaks). Since the bulk of the tweaks are in the res/ folder in XML files, we have the changes currently in a branch but I really don't want to have to port changes back and forth between branches in perpetuity. I'm curious if anyone has any source control strategies that will make managing multiple client-specific versions of an Android application as painless as possible. Also, not sure if this belongs here or StackOverflow. Feel free to transition over there if I guessed wrong.
Security through obfuscation is never good security. There must be better ways of protecting your intellectual property. And that is what you and your colleague should bring up as a joint concern with your manager. If management then decide that they don't want to spend the time or money on improved security, then both of you will have to live with that decision (it's not your product, it's the company's product) and better not spend (waste?) any more time on the subject.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/89786", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/6525/" ] }
89,912
I'm only a junior developer but my job forces me to work with really terrible PHP code (think about the worst PHP code you've seen; then think about code twice as bad). I usually try to fix bugs and fight with the codebase to add new features. Sometimes I'm ordered to get things working ASAP, which more often than not involves dirty hacks. The product was previously open source and I am concerned that it might go open source in the future. I would be ashamed if someone (especially potential employers) could find my name next to some of the changesets. What can I do to protect my good name? I am not sure if this is relevant, but I'll add that neither my boss nor my colleagues want to admit that the code is bad, but I'm not sure I can blame them for that -- for many of them this is their first job.
Rome wasn't built in a day, but you can be a good 'Boy Scout'. Every time you touch the code, leave it better than it was before. It doesn't take an extraordinary amount of time to use sensible function names, follow good coding standards and put decent comments in as you work. I think the danger is thinking it's all or nothing. Just because you can't spend the time you want writing elegant code doesn't mean you have to totally give up and write garbage.
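A tiny, made-up PHP illustration of that kind of boy-scout improvement (the function, names and numbers are invented, not from the answer): the behaviour stays the same, only the readability changes.
// Before: works, but the next reader has to guess what it does.
function calc($a, $b) { return $a + $a * $b; }

// After: same behaviour, fixed while passing by, with a name and comment that explain the intent.
/** Gross price including tax, e.g. grossPrice(100.0, 0.2) === 120.0 */
function grossPrice(float $netPrice, float $taxRate): float {
    return $netPrice + $netPrice * $taxRate;
}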
{ "source": [ "https://softwareengineering.stackexchange.com/questions/89912", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/30397/" ] }
89,949
I'm tempted to create a final class CaseInsensitiveString implements CharSequence . This would allow us to define variables and fields of this type, instead of using a regular String . We can also have e.g. a Map<CaseInsensitiveString, ?> , a Set<CaseInsensitiveString> , etc. What are some of the pros and cons of this approach?
Case insensitivity is a property of the comparison, not of the object (*). Depending on the context, you'll want to compare the same string either case-sensitively or case-insensitively. (And you open a whole can of worms there, as what counts as a case-insensitive comparison depends on the language -- i is uppercased as İ in Turkish -- and even on the context -- depending on the word and the dialect, ß can be uppercased as SS or SZ in German.) (*) It can be a property of the object containing the string, but that is somewhat different from being a property of the string itself. You can have a class whose only state is a string, where comparing two instances of that class uses a case-insensitive comparison of the string. But that class won't be a general-purpose string, as it won't provide the methods expected of a general-purpose string and will provide methods which aren't. Such a class won't be called CaseInsensitiveString but PascalIdentifier, or whatever is pertinent to describe it. And BTW, the case-insensitive comparison algorithm will most probably be dictated by its purpose and be locale-independent.
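A rough Java sketch of that last point (my own code, not from the answer): a purpose-specific wrapper whose equality is case-insensitive and locale-independent, and which deliberately is not a general-purpose string.
import java.util.Locale;
import java.util.Objects;

// Hypothetical example: equality is defined by the identifier's purpose,
// folded with Locale.ROOT to stay locale-independent (no Turkish-i surprises).
final class PascalIdentifier {
    private final String value;

    PascalIdentifier(String value) {
        this.value = Objects.requireNonNull(value);
    }

    public String value() {
        return value;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof PascalIdentifier)) return false;
        PascalIdentifier other = (PascalIdentifier) o;
        return value.toUpperCase(Locale.ROOT).equals(other.value.toUpperCase(Locale.ROOT));
    }

    @Override
    public int hashCode() {
        return value.toUpperCase(Locale.ROOT).hashCode();
    }
}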
{ "source": [ "https://softwareengineering.stackexchange.com/questions/89949", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/30415/" ] }
90,015
To start, I don't think this is a repeat of other questions on unit testing. What I'm looking for help with is articulating its value to a team of programmers, analysts, managers and testers. By automated tests, I don't think I need to make a distinction between unit tests (e.g. JUnit), BDD (e.g. JBehave, FitNesse) and UI tests (Selenium, Watir), because I think they all provide similar value (but feel free to write an answer that disagrees :)) The following is the list I've identified; I'm looking for answers that help expand or refine it: Time/cost savings: writing automated tests can take more time than writing manual test cases. However, considering tests are run multiple times, the marginal work (i.e. cost/time) to execute automated tests is several orders of magnitude less. That the automated tests are cheap to run makes it easier to change the system over time. Documentation: there is no truer way to know how a system works than its tests. Any other documentation is usually out of date the moment it's written, but tests (at least those that pass) reveal how things actually work. This is true for both end-user AND API documentation. Code quality: writing tests forces you to consider clients (because tests are a client) and to break dependencies (making code testable often means figuring out how to make that code not require some other large system to be available).
A few of my thoughts: Be honest that writing automated tests will take more time. If you're doing unit level TDD (which I would recommend as a starting point if you're going to invest in automated testing), you can expect about 30% extra time needed to code a feature. The key here is explaining that this extra 30% (which is probably higher than 30% in the beginning as your team learns how to write good tests) is an investment that will save costs over time. At least with unit level TDD, the design of your system is loosely coupled and highly cohesive, which makes your system adaptable to change over time. New requirements and unexpected bugs always require changes to your system, so keeping your system in a state where change is easy (from the nice design that comes from TDD) and safe (the set of automated tests that continually verify your system) means that it will cost less money to make those changes. There's lots of debate about the value of acceptance level and UI level tests, given the amount of time it takes to write these tests, how long it takes to run them, and how much maintenance they require. I'd recommend reading this article by James Shore about this. In the world of automated testing, there are good ways and bad ways to do it. If you are pitching automated testing to your management, I would pitch alongside it how you're planning on getting your team trained in writing good tests. The Art of Unit Testing by Roy Osherove, Working Effectively With Legacy Code by Michael Feathers, and The Art of Agile Development by James Shore are all great books that deal with these topics directly or indirectly. You should also look into some sort of coach or formal training as well. It's a big change. In terms of business value, #2 and #3 of your points above actually serve your first point, so I'd hammer home point #1 and talk about how #2 and #3 serve that greater point. Documentation makes your system more understandable, which makes your team work faster. Code quality makes your system adaptable to change, which makes your team work faster. For business people, it's all about maximizing the flow of value from the time an idea is pitched to the time the idea is delivered as working software.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/90015", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/6102/" ] }
90,024
Completing my Computing A-level in 2003, getting a degree in Computing in 2007, and learning my trade in a company with a lot of SQL usage, I was brought up on the idea of relational databases being used for storage. So, despite being relatively new to development, I was taken aback to read a comment (on https://softwareengineering.stackexchange.com/q/89994/12436) that said: [Some devs] despise [SQL] and think that it and RDBMS are a fad. Obviously, a competent dev will use the right tool for the right job and won't create a relational database when e.g. a flat file or another storage solution is appropriate, but RDBMSs are useful in a massive number of circumstances, so how could they be considered a fad?
The key is the R in RDBMS, which stands for relational. Contrary to popular belief, it doesn't mean relations between tables, but rather the fact that each table is a relation in the mathematical sense of the word. The relational model has quite significant implications: you have to model your data to fit relations and normalize that model. If your application is designed around an object-oriented model, the relational model is not a good fit. This is widely known as the object-relational impedance mismatch. One approach to this mismatch is ORMs (object-relational mappers), which have gained a lot of popularity. But they are not a true solution; they are more of a workaround for the problem. They still don't really solve the problem of mapping class inheritance onto the relational model. The true solution to the object-relational mismatch would be OODBMSes, which unfortunately didn't get much traction. A popular engine supporting OO features natively is PostgreSQL, which is a hybrid OO/RDBMS. Another OODBMS is Zope Object DB, which is built in Python and in a typical setup uses an RDBMS as the underlying storage engine. An alternative approach is to implement more logic at the application or middleware level and use a NoSQL solution for the underlying storage. Neither OODBMSes nor NoSQL are "just a flat file".
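To make the inheritance part of that mismatch concrete, here is an illustrative SQL sketch (my own, table and column names invented) of the two usual relational compromises for a small Dog/Cat extends Animal hierarchy; neither shape maps the object model directly, which is exactly the mismatch ORMs have to paper over.
-- (a) single-table mapping: one nullable column per subclass attribute plus a discriminator
CREATE TABLE animal (
    id INTEGER PRIMARY KEY,
    kind VARCHAR(10) NOT NULL,   -- 'dog' or 'cat'
    name VARCHAR(100) NOT NULL,
    bark_volume INTEGER NULL,    -- only meaningful for dogs
    lives_left INTEGER NULL      -- only meaningful for cats
);

-- (b) joined-table mapping: one table per class, subclasses joined back to the base by id
CREATE TABLE animal_base (id INTEGER PRIMARY KEY, name VARCHAR(100) NOT NULL);
CREATE TABLE dog (id INTEGER PRIMARY KEY REFERENCES animal_base(id), bark_volume INTEGER NOT NULL);
CREATE TABLE cat (id INTEGER PRIMARY KEY REFERENCES animal_base(id), lives_left INTEGER NOT NULL);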
{ "source": [ "https://softwareengineering.stackexchange.com/questions/90024", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/12436/" ] }
90,052
Why is there so much buzz about closures among developers? In my career I have never intentionally used them, though I don't clearly understand what they are. UPD: just to clarify, the question is about why the closure concept has become such a hot topic these days.
A closure is code that remembers the world where it came from while still being usable where it has been brought to. An example is defining an anonymous function in Java which knows that it is inside YourObject and can manipulate its methods and fields. This function is then delivered to e.g. Swing, where it goes deep inside e.g. a Listener, but it still has a lifeline back to its roots. This is a very powerful concept, as it allows you to deliver code which - unbeknownst to the code using it - can reach back into other parts of the code.
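A minimal Java sketch of that description (my own illustration; the class, field and button are invented): the listener handed to Swing keeps a lifeline back to the enclosing object's state.
import javax.swing.JButton;

class YourObject {
    private int clickCount = 0;

    void wireUp(JButton button) {
        // The lambda is created here, inside YourObject, and remembers it:
        // Swing will invoke it far away, yet it can still reach back and
        // update YourObject's own state.
        button.addActionListener(e -> {
            clickCount++;
            System.out.println("Clicked " + clickCount + " times");
        });
    }
}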
{ "source": [ "https://softwareengineering.stackexchange.com/questions/90052", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/-1/" ] }
90,059
I've been told that I am to be a team lead of an upcoming project. I've not ever been team lead before but the responsibilities are what you would typically expect, with revolving door of 3 to 4 other developers through the 8 or 9 months it takes to complete the project. My problem is this: one of the developers who will no doubt be working on this project will be a problem. He has more experience than me, has called me an idiot several times in the past, and has told me that he has taken this job because he is a natural leader. He has expected to be promoted to a leadership position with every new project (which has not happened to date), and even once told me that I was to report to him, even though the actual team lead was under no such illusion. Moreover, I've both observed and heard from others that he is extremely unprofessional (watches non-work videos at client sites - with no head phones, dresses unprofessionally, comes to work late, makes inappropriate jokes, etc.) He took or attempted to take credit for my work or insights multiple times while I worked with him as a peer. My current team lead told me that she threw out 1/2 of this guy's code once he left her project because the quality was no good. I could go on. My fear is that this guy will actively work against me, because he will resent having to report to someone that he considers an inferior, especially since I have been given this opportunity before him. I've successfully dealt with this type of personality as a peer or even reporting to managers like this in the past. I've not dealt with, or ever thought about, having to deal with this type of character who is reporting to me. My question is: what kind of strategies can I use to effectively and professionally deal with this? Especially now, before it has become a problem, is there any way to cut it off before it gets out of hand etc. If anyone has a similar experience how did they deal with it?
The other answers I'm seeing here ("short leash!", "document everything!", "be professional!") are nice and all, and they're superficially correct, but they fall short in that they fail to consider the human aspect of the situation. The thing is, everyone is a human, and humans are different, and their motivations and thought-processes and skills are all different. And while the somewhat algorithmic answers I'm seeing here are technically correct, they do not seem to take into account the nature of the way that human beings work and the nature of what true management can be. You can't begin to solve someone's problem until you understand them. Which means, understanding their motivations. Which is why, in fact, good people management can be a lot more like psychoanalysis than you'd think. Here's an example of how the thought process might go. Joe thinks he should be leader because he has more experience than me, but he's delusional. OK, what does that tell us? Is Joe unusually concerned about status? Is he bothered by the lack of respect? Does he chafe at being called Joe instead of DOCTOR Joe in recognition by his PhD? Where is this insecurity coming from? How can you address it? Joe's code is not very good. Half of his code had to be thrown out. Why? How? Is he sloppy? Rushed? Just not very smart? Is he cocky, thinking his code is awesome? Why? Is this that insecurity again? Joe is unprofessional and watches videos at the client site. Again... why? Is it because he thinks he's a rock star, and should be indulged for wearing hoodies and watching collegehumor videos because his work is sooooo amazing? Or is it because he is not very self-aware, and does not realize the image he is portraying? Does he genuinely not GET what it means to be professional, or is he actively rebelling against the idea of professionalism? I'm not sure. The answer is there to be found. How do you manage this? Again, you start looking at the human/emotional problems, and address them as such. One thing I'm sensing strongly from the description you've provided is insecurity . It seems like Joe may be genuinely insecure, and he's hiding behind a veneer of arrogance. Where does that come from? Maybe he didn't go to a highly selective university? Maybe he's a VB developer in a world of C++ developers? I don't know, you didn't tell me. I'm not even sure if insecurity is the issue here, because I don't know this guy. But bear with me for a bit--I just want you to see what it means to look at the human issues of management, not the superficial, "how do I enforce my will on this guy" issues of management. So where would this lead us? Suppose you think about it, you have a few long conversations with Joe, and you decide that he is feeling insecure about something. How can you make him feel more comfortable? Will that make him happier and better adjusted? Maybe he needs some specific training to be great at something. Maybe he needs to pair program with someone who can encourage him. Maybe he needs to feel more loved. Actually, that last one is almost always true. I've dealt with a lot of difficult management solutions, and the answer has always differed dramatically depending on the individual. A lot of times, they can't be fixed. But as a manager, you have to understand the individual as a human before you can start thinking about fixing them and fixing the situation and acting correctly in the situation in order to get the best outcomes. So, here's what I recommend you actually do as a course of action. 
Meet with his peers and former managers. Have a conversation that tries to get to the heart of what his human/emotional problems are. Is he immature? Just generally unintelligent? Unhappy? Depressed? Insecure? Arrogant? Emotionally unintelligent? All of those are different diagnoses for the real cause of the problem, and all of them have different prescriptions. Talk to him personally and privately, at length. Let him do most of the talking. Ask open questions. A good one is "How did that make you feel?" It uncovers a surprising amount of stuff. You would be shocked to learn how much better I got at managing human beings when I learned to ask people "How did that make you feel?" Form a hypothesis of the root causes of his issues. Pick a course of action based on what you think the core problem is. It may work. It may not. If it doesn't, that's too bad, but life is too short, and you're not paid to solve his problems, only the company's problems, so follow whatever advice you see elsewhere in this thread to get rid of him. Once again, and I apologize for going on soooo long here: the other answers I'm seeing here are mostly summarized as "You need to be very strict." Well, being strict works well in one situation and one situation only: an emotionally immature person. If, indeed, his problem is emotional immaturity, that's the way to go. If his problem is depression caused by a cycle of learned helplessness, strictness will have the exact OPPOSITE effect from the one you want. Life, and management, and people, are just not that simple. There are many "diseases" that make a person a bad employee, and each one has its own medicine. Some of them have cures. Some don't.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/90059", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2260/" ] }
90,139
I just graduated from college with a degree in CS, so I would like to find a job where I can learn more about the field and build up some professional experience. I've interviewed at a company that uses its own in-house programming language, and I don't think any other company uses it. They have not mentioned using any other languages or what they use for a development environment. What should I be concerned about when taking a job like this? If I were to switch jobs later, would I have to start looking for entry-level positions again because I haven't gained any language-specific experience?
Run away, and run away quickly. Unless you're desperate for a job and are very hungry, this is a situation you want to steer clear of. I have experience with a company that did this, and the only reason that they did so was so that their employees wouldn't gain meaningful, transferable experience. It really was all about control. Others who said here that "programming is programming" are right, but I'd turn that on its head and ask, why not use some standard language for which there is external support, libraries, forums, and a pool of available programmers to choose from? The only time I think such a situation would be OK would be if the company-only language was for custom hardware. For example, you have to write everything for the 9000X Gamma-Ray Interferometer using an assembly/machine code specific to that machine.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/90139", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/30483/" ] }
90,175
I currently work for a company that has recently downsized. I do all in-house work, client installs, builds, QA, and, well, basically all the in-house work. My direct boss is VERY nontechnical and lately I have found it VERY hard to deal with his lack of knowledge. The biggest issues I have had are as follows: I am on many deadlines at a time. I get stopped to put together a rushed quote as I cannot be late on the deadline; in the meantime, three support calls come in. I give the quote, the time in the quote is too much, so they outsource it. I then have to fix everything the vendor broke, which puts me behind. The worst is if I eat "his buffer" on a project I wasn't even on. I am expected to complete everything already scheduled while all these other things come up. I get asked, when an issue arises, why the issue is occurring, and I explain in detail, yet that detail means absolutely nothing to him. All he cares about are deadlines, yet he is the one who schedules everything. "I am a programmer, not a graphic designer" means nothing to him. I was hired as a .NET programmer, yet they let a vendor choose WordPress for many sites (yeah, I had to learn all about it quickly). I guess I can go on and on, but has anyone had to deal with this type of project manager? What is some advice, other than finding another job? I cannot leave my job at this time as I cannot lose my insurance right now as my wife is very ill with MS. I am looking for the best way to deal with my manager. Thanks in advance, and I made this a wiki, so please don't close. Here is another situation that happened today. We have a friend of mine who assists me on projects. He asked us "BOTH" to quote out a job and give a rough estimate. I came back to him and said "7 weeks, 6 hours a day, using my friend as a resource." He gave it to the client and added a 10% buffer (24 hours). He then tells me that is all I get my friend for on the project. I wasn't asked how much time he was available for during the 7 weeks. Worst part is they already gave the quote to the client and didn't even have me review it. His view is, well, you either get it done in the time I told him or find another job.
You are in a frantic and desperate mindset. Take a few deep breaths, clear your head and contemplate the following facts (and if your mind leaps to counterarguments and panic, start over with the breaths). If you're doing all the work, then they need you. If you die, so does their business. If you're working late nights and weekends then you are working at an unsustainable pace, tending toward a steady state of inefficiency and poor work. If you were somehow able to work decent hours, you'd actually get more done per day and get things finished sooner. (If your brain just said "But my manager--!" then start over with the breaths.) When your manager gives you an unreasonable goal and you half-kill yourself to get it done, you are rewarding him for his behavior . You will get more of what you reward. "This cannot be late." Yes it can. Read this one over a few times. Although you feel that he should reward you for hard work, you know that this is not true. This is not the path to success. If the task is not completed by the deadline (see #4), which will look worse: A) you accept the task with the look of a hunted animal, work like a demon and then cringingly admit that it is not ready on time, or B) you tell him calmly at the outset, and every day that it will not be ready by that date, but that it will be ready at that later date, you work calmly and steadily, it is not ready at the deadline but it is ready when you told him it would be. (Breathe, breathe.) The important thing here is your mindset: your goal must not be to achieve the impossible. Now that you can see there is another way, how do you communicate this to your boss? There are no miracles, but you can accomplish a lot by speaking his language. Document everything you do. Seriously. Take a little time to do this, even though you are under deadlines. Tech-illiterate managers love pretty pictures. Acquaint yourself with a professional-looking tool, one of those "schedulers" that they love. You must be able to produce timelines and graphs in pretty colors. Learn some buzzwords, especially the ones he (or his boss) uses. Now combine these things. When they ask you for a quote, work out a good one-- don't rush this--, pad it a little, give it to them, never ever negotiate a time estimate and whip up a timeline showing it. If possible, use the graph as your reply (if you can get them to start using your graphs, you've half won). If they outsource the job and you have to fix the problems, give them a quote for that, whether they ask for it or not; in the end you will have a graph that shows A) the four weeks they wanted, B) the six weeks you quoted and C) the eight weeks it actually took because they outsourced it; label this so that an idiot could understand it: "two week overrun due to outsourcing". Come to every meeting armed with figures, graphs, buzzwords. If you do this right you'll be amazed at how they accept whatever is on the graph, and how they see the graph itself not as a waste of time, but as "professional behavior". Good luck, and let us know how it works out. Breathe.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/90175", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/31414/" ] }
90,194
I work in a small development group with 3 developers. We are loosely managed and have no structure to the team. There is no designated team leader and the manager is fairly hands off. The Senior developer has been with the company for 4 years, in that time he has had a huge hand in setting up systems and keeping them running. He is not a very good developer but is a great cowboy coder and understands the network in a way I never could. He has taken the role of "lead developer" and "systems architect" because he has seniority and feels he is better at his job than the rest of us. My problem is that he threatens to quit all the time. Yesterday he informed me that in 6 weeks he will move up another level in the 401k vesting program and is planning to leave after that. When I asked him why, he said that it is because our manager (a man) and the team (the team being me) are “demasculating” him. He feels that he "deserves" to have been made the development manager based on his seniority. He doesn't like me because I keep pushing for things like bug/issue tracking software and because I am good at my job. Last time he threatened to quit I took him seriously and started to plan my work around him leaving. Then he changed his mind and told me that the work I was doing was his responsibility. He lost his temper with me and tensions ran really high for a while. Here are some of the different ways I have approached the situation: Just do what he asks: This lowers tensions but then nothing gets done and the users get upset. Take control and get stuff done: This keeps the users happy but then he gets angry at me and shuts down, he won't talk to me or work with me to get the things done that only he knows about. He won't give me access to the systems I need to get into to do it myself. Work more closely with higher management: He has no respect for higher management and they don't want him to leave the company so they coddle him. One option I haven’t moved forward with yet is to leave the company: I haven't been there a year yet and don't like the idea of leaving. Overall the job meets most of my requirements in a position. Ideas? Suggestions? Conversations? Options I haven’t considered? Update 5/11/2012: I finally decided to leave. It was a good decision. Between the original post and now he got better but still was not what I consider to be a good developer, much less good management material. I respect him for his knowledge but am glad I don't need to work with him any more.
Management won't change anything if they don't feel any pain. If you allow management to be hands off (by fixing things and being successful) then you will be expected to continue fixing things and being successful. After all -- from management's view -- things are fine. Stuff is getting done. You may feel stressed, but that's not what's important. What's important is that stuff is getting done. If you want change, you have to change. You have to make your co-worker into your manager's problem, not your problem. You need to make it your manager's problem when your co-worker makes demands and "nothing gets done and the users get upset". You need to make it your manager's problem when "he won't talk to me or work with me to get the things done that only he knows about. He won't give me access to the systems I need". Until someone else feels the pain, nothing can possibly change.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/90194", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/15574/" ] }
90,203
Everybody keeps saying that one of JavaScript's problems is using + [ example ] for string concatenation. Some say the problem is not using + , it's type coercion [see the comments from the previous example]. But strongly-typed languages use + for concatenation and coerce types without any problem. For example, in C#: int number = 1; MyObject obj = new MyObject(); var result = "Hello" + number + obj; // is equivalent to string result = "hello" + number.ToString() + obj.ToString(); So why is it that string concatenation in JavaScript is such a big problem?
Consider this piece of JavaScript code: var a = 10; var b = 20; console.log('result is ' + a + b); This will log "result is 1020", which most likely is not what was intended, and can be a hard-to-track bug.
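For illustration (my addition, not part of the original answer), the usual ways to avoid that trap are to force the numeric addition before the concatenation, or to spell out the conversion:
var a = 10;
var b = 20;
console.log('result is ' + (a + b));        // "result is 30": parentheses make + add numbers first
console.log('result is ' + String(a + b));  // same result, with the string conversion made explicit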
{ "source": [ "https://softwareengineering.stackexchange.com/questions/90203", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2616/" ] }
90,217
I've been going through PHPUnit's docs and came across the following quote: You can always write more tests. However, you will quickly find that only a fraction of the tests you can imagine are actually useful. What you want is to write tests that fail even though you think they should work, or tests that succeed even though you think they should fail. Another way to think of it is in cost/benefit terms. You want to write tests that will pay you back with information. --Erich Gamma It got me wondering: how do you determine what makes a unit test more useful than another, aside from what's stated in that quote about cost/benefit? How do you go about deciding which piece of your code you create unit tests for? I'm asking that because another of those quotes also said: So if it's not about testing, what's it about? It's about figuring out what you are trying to do before you run off half-cocked to try to do it. You write a specification that nails down a small aspect of behaviour in a concise, unambiguous, and executable form. It's that simple. Does that mean you write tests? No. It means you write specifications of what your code will have to do. It means you specify the behaviour of your code ahead of time. But not far ahead of time. In fact, just before you write the code is best because that's when you have as much information at hand as you will up to that point. Like well done TDD, you work in tiny increments... specifying one small aspect of behaviour at a time, then implementing it. When you realize that it's all about specifying behaviour and not writing tests, your point of view shifts. Suddenly the idea of having a Test class for each of your production classes is ridiculously limiting. And the thought of testing each of your methods with its own test method (in a 1-1 relationship) will be laughable. --Dave Astels The important section of that is: "And the thought of testing each of your methods with its own test method (in a 1-1 relationship) will be laughable." So if creating a test for each method is 'laughable', how/when do you choose what you write tests for?
How many tests per method? Well, the theoretical and highly impractical maximum is the N-path complexity (assuming the tests all cover different ways through the code ;)). The minimum is ONE! Per public method, that is; we don't test implementation details, only the external behaviors of a class (return values & calls to other objects). You quote: "And the thought of testing each of your methods with its own test method (in a 1-1 relationship) will be laughable." and then ask: So if creating a test for each method is 'laughable', how/when do you choose what you write tests for? But I think you misunderstood the author here: the idea of having one test method per one method in the class under test is what the author calls "laughable". (For me at least) it's not about 'less', it's about 'more'. So let me rephrase it as I understood him: And the thought of testing each of your methods with ONLY ONE METHOD (its own test method in a 1-1 relationship) will be laughable. To quote your quote again: When you realize that it's all about specifying behaviour and not writing tests, your point of view shifts. When you practice TDD you don't think: I have a method calculateX($a, $b); and it needs a test testCalculateX that tests EVERYTHING about the method. What TDD tells you is to think about what your code SHOULD DO, like: I need to calculate the bigger of two values (first test case!), but if $a is smaller than zero then it should produce an error (second test case!), and if $b is smaller than zero it should .... (third test case!), and so on. You want to test behaviors, not just single methods without context. That way you get a test suite that is documentation for your code and REALLY explains what it is expected to do, maybe even why :) How do you go about deciding which piece of your code you create unit tests for? Well, everything that ends up in the repository or anywhere near production needs a test. I don't think the author of your quotes would disagree with that, as I tried to state above. If you don't have a test for it, it gets way harder (more expensive) to change the code, especially if it's not you making the change. TDD is a way to ensure that you have tests for EVERYTHING, but as long as you WRITE the tests it's fine. Usually writing them on the same day helps, since you are not going to do it later, are you? :) Response to comments: a decent amount of methods can't be tested within a particular context because they either depend or are dependent upon other methods. Well, there are three things those methods can call: Public methods of other classes: we can mock out other classes so we have a defined state there. We are in control of the context, so that's not a problem. Protected or private methods on the same class: anything that isn't part of the public API of a class doesn't get tested directly, usually. You want to test behavior and not implementation, and whether a class does all its work in one big public method or in many smaller protected methods that get called is implementation. You want to be able to CHANGE those protected methods WITHOUT touching your tests, because your tests will break if your code changes change behavior! That's what your tests are there for: to tell you when you break something :) Public methods on the same class: that doesn't happen very often, does it?
And if it does, like in the following example, there are a few ways of handling it: $stuff = new Stuff(); $stuff->setBla(12); $stuff->setFoo(14); $stuff->execute(); That the setters exist and are not part of the execute method signature is another topic ;) What we can test here is whether execute blows up when we set the wrong values. That setBla throws an exception when you pass a string can be tested separately, but if we want to test that those two allowed values (12 & 14) don't work TOGETHER (for whatever reason) then that's one test case. If you want a "good" test suite you can, in PHP, maybe(!) add a @covers Stuff::execute annotation to make sure you only generate code coverage for this method, and the other stuff that is just setup needs to be tested separately (again, if you want that). So the point is: maybe you need to create some of the surrounding world first, but you should be able to write meaningful test cases that usually only span one or maybe two real functions (setters don't count here). The rest can be either mocked away or tested first and then relied upon (see @depends). Note: the question was migrated from SO and was initially about PHP/PHPUnit; that's why the sample code and references are from the PHP world. I think this is also applicable to other languages, as PHPUnit doesn't differ that much from other xUnit testing frameworks.
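To make the 'one test per behaviour' idea above concrete, here is a small PHPUnit sketch for the calculateX example (my own illustration; the Calculator class, its contract and the exception type are assumptions, and a reasonably recent PHPUnit is assumed):
use PHPUnit\Framework\TestCase;

class CalculatorTest extends TestCase
{
    public function testReturnsTheBiggerOfTwoValues()
    {
        // first behaviour: calculate the bigger of two values
        $this->assertSame(14, (new Calculator())->calculateX(12, 14));
    }

    public function testRejectsANegativeFirstValue()
    {
        // second behaviour: $a smaller than zero should produce an error
        $this->expectException(InvalidArgumentException::class);
        (new Calculator())->calculateX(-1, 14);
    }

    public function testRejectsANegativeSecondValue()
    {
        // third behaviour: $b smaller than zero should produce an error
        $this->expectException(InvalidArgumentException::class);
        (new Calculator())->calculateX(12, -1);
    }
}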
{ "source": [ "https://softwareengineering.stackexchange.com/questions/90217", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/15824/" ] }
90,456
Suppose I'm building a blog that I want to have posts and comments. So I create two tables, a 'posts' table with an autoincrementing integer 'id' column, and a 'comments' table that has a foreign key 'post_id'. Then I want to run what will probably be my most common query, which is to retrieve a post and all of its comments. Being rather new to relational databases, the approach that appears most obvious to me is to write a query that would look something like: SELECT id, content, (SELECT * FROM comments WHERE post_id = 7) AS comments FROM posts WHERE id = 7 Which would give me the id and content of the post that I want, along with all the relevant comment rows packaged neatly in an array (a nested representation like you'd use in JSON). Of course, SQL and relational databases don't work like this, and the closest they can get is to do a join between 'posts' and 'comments' that will return a lot of unnecessary duplication of data (with the same post information repeated in every row), which means processing time is spent both on the database to put it all together and on my ORM to parse and undo it all. Even if I instruct my ORM to eagerly load the post's comments, the best it'll do is to dispatch one query for the post, and then a second query to retrieve all of the comments, and then put them together client-side, which is also inefficient. I understand that relational databases are proven technology (hell, they're older than I am), and that there's been a ton of research put into them over the decades, and I'm sure there's a really good reason why they (and the SQL standard) are designed to function the way they do, but I'm not sure why the approach I outlined above isn't possible. It seems to me to be the most simple and obvious way to implement one of the most basic relationships between records. Why don't relational databases offer something like this? (Disclaimer: I mostly write webapps using Rails and NoSQL datastores, but recently I've been trying out Postgres, and I actually like it a lot. I don't mean to attack relational databases, I'm just perplexed.) I'm not asking how to optimize a Rails app, or how to hack my way around this problem in a particular database. I'm asking why the SQL standard works this way when it seems counterintuitive and wasteful to me. There must be some historical reason why the original designers of SQL wanted their results to look like this.
C. J. Date goes into detail about this in Chapter 7 and Appendix B of SQL and Relational Theory . You're right, there's nothing in relational theory that prohibits an attribute's data type from being a relation itself, as long as it's the same relation type on every row. Your example would qualify. But Date says structures like this are "usually--but not invariably--contraindicated" (i.e. a Bad Idea) because hierarchies of relations are asymmetric . For example, a transformation from nested structure to a familiar "flat" structure cannot always be reversed to recreate the nesting. Queries, constraints, and updates are more complex, harder to write, and harder for the RDBMS to support if you allow relation-valued attributes (RVA's). It also muddies database design principles, because the best hierarchy of relations isn't so clear. Should we design a relation of Suppliers with a nested RVA for parts supplied by a given Supplier? Or a relation of Parts with a nested RVA for suppliers who supply a given Part? Or store both, to make it easy to run different types of queries? This is the same dilemma that results from the hierarchical database and the document-oriented database models. Eventually, the complexity and cost of accessing nested data structures drives designers to store data redundantly for easier lookup by different queries. The relational model discourages redundancy, so RVA's can work against the goals of relational modeling. From what I understand (I have not used them), Rel and Dataphor are RDBMS projects that support relation-valued attributes. Re comment from @dportas: Structured types are part of SQL-99, and Oracle supports these. But they don't store multiple tuples in the nested table per row of the base table. The common example is an "address" attribute which appears to be a single column of the base table, but has further sub-columns for street, city, postal code, etc. Nested tables are also supported by Oracle, and these do allow multiple tuples per row of the base table. But I am not aware that this is part of standard SQL. And keep in mind the conclusion of one blog: "I'll never use a nested table in a CREATE TABLE statement. You spend all of your time UN-NESTING them to make them useful again!"
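For reference, this is roughly what the Oracle nested-table variant mentioned at the end looks like (a sketch from memory, names invented, and the exact DDL may vary by Oracle version); note how querying it means un-nesting it again, as the quoted blog complains.
CREATE TYPE comment_t AS OBJECT (author VARCHAR2(100), body VARCHAR2(4000));
CREATE TYPE comment_tab_t AS TABLE OF comment_t;
CREATE TABLE posts (
    id       NUMBER PRIMARY KEY,
    content  CLOB,
    comments comment_tab_t            -- effectively a relation-valued attribute
) NESTED TABLE comments STORE AS posts_comments_nt;
-- reading it back flattens the nesting again
SELECT p.id, c.author, c.body
FROM posts p, TABLE(p.comments) c
WHERE p.id = 7;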
{ "source": [ "https://softwareengineering.stackexchange.com/questions/90456", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/30555/" ] }
90,561
I don't know if I should be very irritated or what. I single handedly built over 300 queries for a large database, and developed a naming convention so I could find them later. No one else in my office even knows how to build a query, but I came in yesterday to find that all of them had been renamed. I am now having a very hard time finding things, and I am trying to figure out what to do. I spoke with the person responsible, and she just downplayed the whole thing. She said she renamed them so she can find them more easily. Unfortunately, I am the only one who knows how to build, edit, and maintain them, and the only reason she needed to find them was to test the queries. The new naming convention doesn't make sense at all, and I feel like we have taken a backwards step in the development process. What I'm trying to figure out is: 1) Am I overreacting? 2) What is the best way to handle this? I hate to mention this to my boss, but after speaking with my co-worker yesterday, I can already tell she feels like she did nothing wrong.
Why don't you simply handle it like adults: sit down, non-confrontationally, and come up with a list of pros and cons for a naming scheme, agree on one and make it official by writing a short document describing it. Solicit her input with genuine interest so she feels (and is) involved. If it's mostly a matter of taste and she's the kind of person who absolutely has to have things her way, then just be glad that you're the bigger person and let it go. Life's too short to have a pissing contest over naming schemes. Is the problem the naming scheme, or that you feel you don't get any respect? If so, perhaps you can work on your working relationship. If you feel it isn't worth it, then why do you care what she thinks anyway? :) Another option might be that she truly doesn't feel it's a big deal, and if you explain nicely that you're having trouble finding stuff, perhaps you can change it back.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/90561", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/-1/" ] }
90,637
Both these syntaxes are correct and equivalent: With brackets: foreach ($listOfThings as $thing) { echo $thing; } Without brackets: foreach ($listOfThings as $thing): echo $thing; endforeach; Two questions come to mind: Is there any real difference between them, or are they totally identical (in performance, for example)? Is it better to always use the same one, or to use both depending on the context (for example: to easily recognize the code belonging to the framework)? I suspect this is all a matter of taste and company standards, but I'm not sure.
You should always use brackets . You'll run across code (and even some frameworks!) that do no-brackets on short loops or ifs but when it comes down to it, there is no better way to make your code confusing and hard to read than by leaving off the brackets. There's also no significant performance hit either way, so far as I know. Those two extra keys can mean a world of difference to somebody reading your code.
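A small illustration of the confusion the answer warns about (my own example, the function names are invented): without brackets only the first statement is guarded, whatever the indentation suggests.
// Bug: logAdminAccess() runs for every user, not just admins.
if ($user->isAdmin())
    grantAccess($user);
    logAdminAccess($user);

// With brackets the intent is unambiguous:
if ($user->isAdmin()) {
    grantAccess($user);
    logAdminAccess($user);
}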
{ "source": [ "https://softwareengineering.stackexchange.com/questions/90637", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/26656/" ] }
90,808
I just started working with Django coming from years of Spring MVC and the forms implementation strikes as being slightly crazy. If you're not familiar, Django forms starts with a form model class that defines your fields. Spring similarly starts with a form-backing object. But where Spring provides a taglib for binding form elements to the backing object within your JSP, Django has form widgets tied directly to the model. There are default widgets where you can add style attributes to your fields to apply CSS or define completely custom widgets as new classes. It all goes in your python code. That seems nuts to me. First, you are putting information about your view directly in your model and secondly you are binding your model to a specific view. Am I missing something? EDIT: Some example code as requested. Django: # Class defines the data associated with this form class CommentForm(forms.Form): # name is CharField and the argument tells Django to use a <input type="text"> # and add the CSS class "special" as an attribute. The kind of thing that should # go in a template name = forms.CharField( widget=forms.TextInput(attrs={'class':'special'})) url = forms.URLField() # Again, comment is <input type="text" size="40" /> even though input box size # is a visual design constraint and not tied to the data model comment = forms.CharField( widget=forms.TextInput(attrs={'size':'40'})) Spring MVC: public class User { // Form class in this case is a POJO, passed to the template in the controller private String firstName; private String lastName; get/setWhatever() {} } <!-- JSP code references an instance of type User with custom tags --> <%@ taglib prefix="form" uri="http://www.springframework.org/tags/form" %> <!-- "user" is the name assigned to a User instance --> <form:form commandName="user"> <table> <tr> <td>First Name:</td> <!-- "path" attribute sets the name field and binds to object on backend --> <td><form:input path="firstName" class="special" /></td> </tr> <tr> <td>Last Name:</td> <td><form:input path="lastName" size="40" /></td> </tr> <tr> <td colspan="2"> <input type="submit" value="Save Changes" /> </td> </tr> </table> </form:form>
Yes, the Django forms is a mess from the MVC perspective, suppose you are working in a big MMO super-hero game and you are creating the Hero model: class Hero(models.Model): can_fly = models.BooleanField(default=False) has_laser = models.BooleanField(default=False) has_shark_repellent = models.BooleanField(default=False) Now you are asked to create a form for it, so that the MMO players can input their hero super powers: class HeroForm(forms.ModelForm): class Meta: model = Hero Since the Shark Repellent is a very powerful weapon, your boss asked you to limit it. If a hero has the Shark Repellent then he cannot fly. What most people do is simply add this business rule in the form clean and call it a day: class HeroForm(forms.ModelForm): class Meta: model = Hero def clean(self): cleaned_data = super(HeroForm, self).clean() if cleaned_data['has_shark_repellent'] and cleaned_data['can_fly']: raise ValidationError("You cannot fly and repel sharks!") This pattern looks cool and might work on small projects, but in my experience this is very hard to maintain in large projects with multiple developers. The problem is that the form is part of the view of the MVC. So you will have to remember that business rule every time you: Write another form that deals with the Hero model. Write a script that import heroes from another game. Manually change the model instance during the game mechanics. etc. My point here is that the forms.py is all about the form layout and presentation, you should never add business logic in that file unless you enjoy messing with spaghetti code. The best way to handle the hero problem is to use model clean method plus a custom signal. The model clean works like the form clean but its stored in the model itself, whenever the HeroForm is cleaned it automatically calls the Hero clean method. This is a good practice because if another developer writes a another form for the Hero he will get the repellent/fly validation for free. The problem with the clean is that it's called only when a Model is modified by a form. It's not called when you manually save() it and you can end-up with a invalid hero in your database. To counter this problem, you can add this listener to your project: from django.db.models.signals import pre_save def call_clean(sender, instance, **kwargs): instance.clean() pre_save.connect(call_clean, dispatch_uid='whata') This will call the clean method on each save() call for all your models.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/90808", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/24696/" ] }
90,815
In order to provide a more comprehensive education for computer science students at my old university, I'm trying to develop an open source group they can contribute to. What should I be looking for in project ideas so that they provide the maximum educational value for students who aren't very experienced in real-world projects? What should I be avoiding when deciding what to do?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/90815", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/30693/" ] }
90,954
Sometimes I spend ridiculous amounts of time (hours) agonizing over making code "look pretty". I mean making things look symmetrical. I will actually rapidly scroll through an entire class to see if anything jumps out as not looking "pretty" or "clean". Am I wasting my time? Is there any value in this kind of behavior? Sometimes the functionality or design of the code won't even change, I'll just re-structure it so it looks nicer. Am I just being totally OCD or is there some benefit hidden in this?
Use an auto-formatter. If you really are spending that much time manually editing the code, I would be willing to guess you are either not very challenged or just bored, because there is absolutely no reason for it. Ctrl+K, Ctrl+D in VS will format an entire document. You can use something like StyleCop if you want something a bit more heavyweight. It is good to have pride in your code, but not when it comes at the expense of being smart (looking for the most efficient solution; in this case, using a tool to automate a tedious process) and getting things done (what else could you have worked on during those hours?).
{ "source": [ "https://softwareengineering.stackexchange.com/questions/90954", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/4881/" ] }
91,018
I am involved with the development of a Windows application that has various screens. One of them takes ten seconds to appear with no spinner or other indication that the screen is loading. I consider this a serious performance issue but I seem to be the only one who is concerned. Am I being overzealous? What is an acceptable amount of time to wait for a screen to appear?
This is old research but 10 seconds is bad: http://www.useit.com/papers/responsetime.html from the page: The basic advice regarding response times has been about the same for thirty years [Miller 1968; Card et al. 1991]: •0.1 second is about the limit for having the user feel that the system is reacting instantaneously, meaning that no special feedback is necessary except to display the result. •1.0 second is about the limit for the user's flow of thought to stay uninterrupted, even though the user will notice the delay. Normally, no special feedback is necessary during delays of more than 0.1 but less than 1.0 second, but the user does lose the feeling of operating directly on the data. •10 seconds is about the limit for keeping the user's attention focused on the dialogue. For longer delays, users will want to perform other tasks while waiting for the computer to finish, so they should be given feedback indicating when the computer expects to be done. Feedback during the delay is especially important if the response time is likely to be highly variable, since users will then not know what to expect.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/91018", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/14439/" ] }
91,051
I just found myself writing the following comment in some (archaic Visual Basic 6.0) code I was writing: If WindowState <> 1 Then 'The form's not minimized, so we can resize it safely '... End If I'm not sure why I subconsciously use "we" in my comments. I suspect it's because I imagine someone stepping through the code, as if they were actually "doing" all of the commands on each line, rather than just watching them happen. With this mindset, I could have used "I can resize it", since I'm the one "doing" it currently, or "you can resize it", as if I were speaking to whoever is "doing" it in the future, but since both of those cases will most likely happen, I use "we" as if I'm leading someone else through my code. I can simply rewrite it as "it can be resized" and avoid the issue, but it's sparked my curiosity: is it common to use the first person like this in comments, or is it considered distracting and/or unprofessional?
Comments should be written for human beings to understand. When human beings communicate, we typically use "I", "we", "you", etc. When someone is trying to understand some code, there are two or more actors: the person reading it, and the original author of the code. Saying "we" is fine. Unless by 'professional', you mean 'robot-like'.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/91051", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/30039/" ] }
91,158
I am pretty good with C++, good as in I am comfortable with the language; I have read Accelerated C++ and done almost all the exercises. However, I have a big problem. Do I need to learn C? I have never done C in my life. I just started with C++ when I started with programming, probably because I was always interested in knowing why everyone calls the language so complex. Now, though, I know the answer to that question ;) I am particularly interested in knowing whether I can survive without knowing C in today's world. For example, if I give an interview at a company and tell them that I don't know C - will they take it as OK? The two languages I am good with are Python and C++. I am asking this because I have heard that companies ask about data structures in interviews. So if they ask me to implement one, and I do it in C++, is it acceptable?
If you know C++, I wouldn't learn C just for the sake of it. You shouldn't find it too difficult to learn if and when you need it. I'd far rather meet someone who claims they know C++ but not C than someone who claims they know C/C++.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/91158", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/-1/" ] }
91,196
Possible Duplicate: When should you call yourself a senior developer? I often see job offers for senior programmers, and the rule of thumb seems to be that you need to have worked three years to become one. Duration doesn't mean much, so what exactly is expected of a senior programmer? What should he be capable of doing? As an added question, when does a developer cease to be considered a junior?
There is (much) more to Software Development than cranking out code:
Mentoring Developers
Leading Projects
Designing Software
This leads to: Responsibility.
A Junior Developer is still learning the language and frameworks on a daily basis, and thus should be focusing on this. A mentor (or experienced colleagues at least) is expected to guide him, especially in the design decisions, and to check on him regularly (and not only his work), nudging him in the right direction and sometimes helping out to smooth the path (for example doing the tricky work or handling the configuration/deployment). On the other hand, a Senior Developer is expected to be autonomous. That is to say, the boss will hand him a feature to implement and leave it up to him, entirely, knowing that the Senior Developer will know when to ask for help or review. The Senior Developer should also be able to tell his boss when he'll be done with the task, and actually take responsibility for doing so. The difference therefore is not so much in expertise in the language, but in experience (in general) and reliability.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/91196", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/3170/" ] }
91,350
Regardless of programming language(s) or operating system(s) used or the environment they develop for, what should every programmer know? Some background: I'm interested in becoming the best programmer I can. As part of this process I'm trying to understand what I don't know and would benefit me a lot if I did. While there are loads of lists around along the lines of "n things every [insert programming language] developer should know", I have yet to find anything similar which isn't limited to a specific language. I also expect this information to be of interest and benefit to others.
How to swallow pride and admit mistakes without taking them personally.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/91350", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/1162/" ] }
91,527
I always liked to ask myself "what's the first principle(s) of this?" after I learned the basic stuff of something (e.g. programming). It's an inspiring question, IMO, that can force you to think about the most important principle(s) behind something, especially a skill such as programming. So, what do you think is the first principle(s) of programming? I'll give my answer below a little later.
KISS - Keep It Simple Stupid
DRY - Don't Repeat Yourself
YAGNI - You ain't gonna need it
{ "source": [ "https://softwareengineering.stackexchange.com/questions/91527", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/35660/" ] }
91,760
I'm going to be developing some functionality that will crawl various public web sites and process/aggregate the data on them. Nothing sinister like looking for e-mail addresses - in fact it's something that might actually drive additional traffic to their sites. But I digress. Other than honouring robots.txt, are there any rules or guidelines, written or unwritten, that I ought to be following in order to (a) avoid appearing malicious and potentially being banned, and (b) not cause any problems for the site owners/webmasters? Some examples I can think of which may or may not matter:
• Number of parallel requests
• Time between requests
• Time between entire crawls
• Avoiding potentially destructive links (don't want to be the Spider of Doom - but who knows if this is even practical)
That's really just spit-balling, though; is there any tried-and-tested wisdom out there that's broadly applicable for anybody who intends to write or utilize a spider?
Besides obeying robots.txt, obey nofollow and noindex in <meta> elements and links:

There are many who believe robots.txt is not the proper way to block indexing and because of that viewpoint, have instructed many site owners to rely on the <meta name="robots" content="noindex"> tag to tell web crawlers not to index a page.

If you're trying to make a graph of connections between websites (anything similar to PageRank), rel="nofollow" (and <meta name="robots" content="nofollow">) is supposed to indicate the source site doesn't trust the destination site enough to give it a proper endorsement. So while you can index the destination site, you ought not store the relation between the two sites.

SEO is more of an art than a real science, and it's practiced by a lot of people who know what they're doing, and a lot of people who read the executive summaries of people who know what they're doing. You're going to run into issues where you'll get blocked from sites for doing things that other sites found perfectly acceptable due to some rule someone overheard or read in a blog post on SEOmoz that may or may not be interpreted correctly.

Because of that human element, unless you are Google, Microsoft, or Yahoo!, you are presumed malicious unless proven otherwise. You need to take extra care to act as though you are no threat to a web site owner, and act in accordance with how you would want a potentially malicious (but hopefully benign) crawler to act:
• stop crawling a site once you detect you're being blocked: 403/401s on pages you know work, throttling, time-outs, etc.
• avoid exhaustive crawls in relatively short periods of time: crawl a portion of the site, and come back later on (a few days later) to crawl another portion. Don't make parallel requests.
• avoid crawling potentially sensitive areas: URLs with /admin/ in them, for example.

Even then, it's going to be an up-hill battle unless you resort to black-hat techniques like UA spoofing or purposely masking your crawling patterns: many site owners, for the same reasons above, will block an unknown crawler on sight instead of taking the chance that there's someone not trying to "hack their site". Prepare for a lot of failure.

One thing you could do to combat the negative image an unknown crawler is going to have is to make it clear in your user-agent string who you are:

Aarobot Crawler 0.9 created by John Doe. See http://example.com/aarobot.html for more information.

Where http://example.com/aarobot.html explains what you're trying to accomplish and why you're not a threat. That page should have a few things:
• Information on how to contact you directly
• Information about what the crawler collects and why it's collecting it
• Information on how to opt-out and have any data collected deleted

That last one is key: a good opt-out is like a Money Back Guarantee™ and scores an unreasonable amount of goodwill. It should be humane: one simple step (either an email address or, ideally, a form) and comprehensive (there shouldn't be any "gotchas": opt-out means you stop crawling without exception).
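To make the robots.txt and user-agent points concrete, here is a minimal sketch in Python (standard library only). It is illustrative, not a complete crawler: the bot name and URLs reuse the placeholder example from the answer above, and the fallback delay is an assumed conservative default rather than a recommendation.

import time
import urllib.robotparser
import urllib.request

USER_AGENT = "Aarobot Crawler 0.9 (+http://example.com/aarobot.html)"
DEFAULT_DELAY = 10  # assumed conservative pause between requests to one host

robots = urllib.robotparser.RobotFileParser()
robots.set_url("http://example.com/robots.txt")
robots.read()

# honour an explicit Crawl-delay if the site declares one
delay = robots.crawl_delay(USER_AGENT) or DEFAULT_DELAY

def polite_fetch(url):
    """Fetch a URL only if robots.txt allows it, identifying the bot clearly."""
    if not robots.can_fetch(USER_AGENT, url):
        return None  # respect the site owner's wishes and skip the page
    request = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
    with urllib.request.urlopen(request, timeout=30) as response:
        body = response.read()
    time.sleep(delay)  # no parallel or rapid-fire requests
    return body

Stopping on repeated 403/401 responses and spreading a full crawl over several days, as suggested above, would sit on top of this.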
{ "source": [ "https://softwareengineering.stackexchange.com/questions/91760", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/3249/" ] }
91,799
If 'explicit is better than implicit', why aren't there explicit access modifiers in Python: Public, Protected, Private, etc.? I know that the idea is that the programmer should know what to do through a hint - no need to use 'brute force'. But IMO 'encapsulation' or 'information hiding' isn't just about keeping people out; it's a question of organization and structure: your development layers should have self-defining, clearly delimited scopes and borders, just like physical systems do. Can someone please help me out here with a solid explanation as to why access restrictions are implied rather than explicit in Python, a language that otherwise seems close to perfect?
Edit: So far I have seen 3 proposed answers, and I realized that there are 2 parts to my question:
1. Why aren't there keywords, for example private def myFunc(): dostuff..., instead of the (IMO) ugly and hard-to-type underscores? But that's not the important point.
2. More importantly: why are these access modifiers only 'recommendations' or hints and not enforced? "It will be hard to change later"? It's very simple to change 'protected' to 'public' - and if you have a convoluted inheritance chain that makes it difficult, you have a poor design - your design should be refined rather than relying on a language feature that makes it easy to write poorly structured code. When access modifiers are enforced, your code is automatically compartmentalized - you KNOW that certain segments are out of scope, so you don't have to deal with them except if and when it's necessary. And, if your design is no good and you find yourself constantly moving things into and out of different scopes, the language can help you to clean up your act.
As much as I love Python, I'm finding this 2nd point to be a serious deficiency. And I have yet to see a good answer for this.
"Explicit is better than implicit" is only one of the maxims in Python's design philosophy. "Simple is better than complex" is there too. And, although it's not in the Zen of Python, "We're all consenting adults here" is another. That second rule is perhaps the most important here. When I design a class, I have some idea of how it's going to be used. But I can't possibly predict all possible uses. It may be that some future use of my code requires access to the variables I've thought of as private. Why should I make it hard - or even impossible - to access these, if a future programmer (or even a future me) needs them? The best thing to do is to mark them with a warning - as Joonas notes, a single underscore prefix is the standard - that they are internal, and might change; but forbidding access altogether seems unnecessary.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/91799", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/26621/" ] }
91,813
Some say that DBaaS (database as a service) -- aka cloud database -- is not suitable for Business Intelligence (BI), analytics (OLAP) or archiving. Is this true? More generally, when is DBaaS the most effective technical choice?
"Explicit is better than implicit" is only one of the maxims in Python's design philosophy. "Simple is better than complex" is there too. And, although it's not in the Zen of Python, "We're all consenting adults here" is another. That second rule is perhaps the most important here. When I design a class, I have some idea of how it's going to be used. But I can't possibly predict all possible uses. It may be that some future use of my code requires access to the variables I've thought of as private. Why should I make it hard - or even impossible - to access these, if a future programmer (or even a future me) needs them? The best thing to do is to mark them with a warning - as Joonas notes, a single underscore prefix is the standard - that they are internal, and might change; but forbidding access altogether seems unnecessary.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/91813", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/30611/" ] }
91,854
Do you know that feeling when you just need to show off that new trick with Expressions or generalize three different procedures? This does not have to be on the Architecture Astronaut scale, and in fact may be helpful, but I can't help noticing that someone else would implement the same class or package in a clearer, more straightforward (and sometimes boring) manner. I have noticed I often design programs by oversolving the problem, sometimes deliberately and sometimes out of boredom. In either case, I usually honestly believe my solution is crystal clear and elegant, until I see evidence to the contrary, but by then it's usually too late. There is also a part of me that prefers undocumented assumptions to code duplication, and cleverness to simplicity. What can I do to resist the urge to write “cleverish” code, and when should the bell ring that I am Doing It Wrong? The problem is getting even more pressing as I'm now working with a team of experienced developers, and sometimes my attempts at writing smart code seem foolish even to myself after time dispels the illusion of elegance.
The problem is getting even more pressing as I'm now working with a team of experienced developers, and sometimes my attempts at writing smart code seem foolish even to myself after time dispels the illusion of elegance. Your solution lies here. I'm presuming that "experienced" in this context means "more experienced than you." At the very least, you clearly respect them. This is a valuable learning opportunity -- assuming your ego can take the hit. (Irksome things, egos. A pity we need them so.) Do you have code reviews with these folks? If so, if they're not doing it already, explicitly ask them to call you on your bullshit. Mention that you've noticed a tendency in yourself to overdesign, to use a meticulously designed top-of-the-line pneumatic jackhammer (preferably wielded by some sort of automated road-worker android) when a simple claw hammer would be more than sufficient. You may often find yourself squirming in your seat while your face turns red during the code reviews. Endure it. You're learning. Then, once you've got a few of these under your belt, pay attention to the moments where you suspect you're maybe possibly overdesigning. When those moments come, ask yourself: "If somebody calls me out on this during code review, can I defend my solution as the best one available? Or is there a simpler solution I'm forsaking?" Sometimes, peer review is the best way to get a good look at your own work.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/91854", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/3939/" ] }
91,976
Occasionally I see questions about edge cases and other weirdness on Stack Overflow that are easily answered by the likes of Jon Skeet and Eric Lippert, demonstrating a deep knowledge of the language and its many intricacies, like this one: You might think that in order to use a foreach loop, the collection you are iterating over must implement IEnumerable or IEnumerable<T> . But as it turns out, that is not actually a requirement. What is required is that the type of the collection must have a public method called GetEnumerator , and that must return some type that has a public property getter called Current and a public method MoveNext that returns a bool . If the compiler can determine that all of those requirements are met then the code is generated to use those methods. Only if those requirements are not met do we check to see if the object implements IEnumerable or IEnumerable<T> . That's cool stuff to know. I can understand why Eric knows this; he's on the compiler team, so he has to know. But what about those who demonstrate such deep knowledge who are not insiders? How do mere mortals (who are not on the C# compiler team) find out about stuff like this? Specifically, are there methods these folks use to systematically root out such knowledge, explore it and internalize it (make it their own)?
First off, thanks for the kind words. If you want to get a deep knowledge of C# it is undoubtedly an advantage to have the language specification, ten years of design notes, the source code, the bug database, and Anders, Mads, Scott and Peter just down the hall. I'm certainly fortunate, no question about it. However, even without those advantages it is still possible to get a deep knowledge of the subject. Back when I started at Microsoft I was working on the JScript interpreter that shipped with Internet Explorer 3. My manager at the time told me something that was some of the best advice I've ever gotten. He said that he wanted me to become the recognized expert at Microsoft on the syntax and semantics of the JScript language, and that I should go about this by seeking out questions on those aspects of JScript and answering them. Particularly answering the questions I didn't know the answers to, because those are the ones I would learn from. Obviously StackOverflow and other public Q&A forums are like drinking from a firehose for that sort of thing. Back then, I read comp.lang.javascript and our internal Microsoft "JS User" forums religiously and followed my manager's advice: when I saw a question that was about the language semantics that I didn't know the answer to, I made it my business to find out. If you want to do a "deep dive" like that, you've got to choose carefully. I to this day am remarkably ignorant of how the browser object model works. Since I have been concentrating on becoming the C# language expert these last years, I am remarkably ignorant of how the various classes in the base class libraries work. I am fortunate in that I have a job that prizes specific deep knowledge; if your job or your talents are more in line with being a generalist, going deep might not work for you. Writing a blog is also tremendously helpful; by requiring me to explain complex topics to other people, I am forced to confront my own inadequate understanding of various topics all the time.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/91976", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/1204/" ] }
92,072
How much experience do you need in a language before you can put it on your resume? There is one language I'm proficient in (Java) which I would definitely put on the resume. But say I took a couple of semester courses in college which involved extensive programming in C, or taught myself C# but have written no meaningful projects in it - can I put those languages on the resume without having the employer laugh at it or perceive it as resume inflation?
You can always list languages (as well as other skills) in tiers on a resume:
Proficient: Java, Lisp
Familiar with: Perl, C++
As a guy looking at a resume, I will appreciate both the honesty and the effort; and when bringing you in, the first question you usually get asked is to rate your individual language proficiency more precisely, orally or on paper.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/92072", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/-1/" ] }
92,186
The question should be clear from its title. For example, Apache saves its access and error logs in files instead of an RDBMS, no matter how large or small the scale at which it is utilized. For an RDBMS we just have to write SQL queries and it will do the work, while for files we must decide on a particular format and then write regexes or maybe parsers to manipulate them. And those might even fail in particular circumstances if great care is not taken. Yet everyone seems to prefer the filesystem for maintaining logs. I am not biased against either of these methods, but I would like to know why it is practiced like this. Is it speed, maintainability, or something else?
Too many things can fail with the database and logging these failures is important too. Unless you have a database system allowing autonomous transactions (or no transactions at all), logging would require a separate connection so a rollback or commit in logging doesn't interfere with rollback or commit in the application. Many things worth logging happen during startup, i.e. possibly before the database connection has been established. In what could be a typical setup, a new logfile is created every day, old log files are compressed and kept for 2 weeks, before eventually being deleted. It's not easy to do the same in an RDBMS.
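The last point (a new logfile every day, two weeks of retention) is the kind of thing file-based logging tools give you almost for free. A hedged sketch using Python's standard logging module; the path and logger name are placeholders, and compressing old files would still need an external step such as logrotate.

import logging
from logging.handlers import TimedRotatingFileHandler

handler = TimedRotatingFileHandler(
    "/var/log/myapp/app.log",   # hypothetical log path
    when="midnight",            # start a new file every day
    backupCount=14,             # keep roughly two weeks, delete older files
)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

logger = logging.getLogger("myapp")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("application starting")  # works even before any DB connection exists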
{ "source": [ "https://softwareengineering.stackexchange.com/questions/92186", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/26831/" ] }
92,248
We are currently investigating automated user interface testing (we currently do automated unit and integration testing). We've looked at Selenium and Telerik and have settled on the latter as the tool of choice due to its much more flexible recorder - and we don't really want testers writing too much code. However, I am trying to understand the overall benefit. What are peoples' views and what sort of things work well and what doesn't? Our system is under constant development and we regularly release new versions of our (web based) platform. So far the main benefit we can see is for regression testing, especially across multiple client deployments of our platform. Really looking for other people's views. We "think" it is the right thing to do but in an already busy schedule are looking for some additional insight.
When my team implemented automated UI testing a lot of great things happened. First, the QA team became much more efficient at testing the application, as well as more proficient with the application. The lead QA said that he was able to bring new QA members up to speed quickly by introducing them to the test suites for the UI. Second, the quality of QA tickets that came back to the Dev team was better. Instead of 'Page broke when I clicked Submit button' we got the exact case that failed, so we could see what was input into the form. The QA team also took it a step further by checking all cases that failed and testing other scenarios around that page to give us a better view of what happened. Third, the QA team had more time. With this extra time, they were able to sit in on more design meetings. This in turn allowed them to be writing the new test suite cases at the same time as the Devs were coding those new features. Also, the stress testing provided by the test suite we used was worth its weight in gold. It honestly helped me sleep better at night knowing that our app could take pretty much anything thrown at it. We found quite a few pages that bucked under pressure that we were able to fix before going live. Just perfect. The last thing that we found was that with some tweaks by the QA team, we could also do some SQL injection testing on our app. We found some vulnerabilities that we were able to get fixed up quickly. The setup of the UI test suite took a good amount of time. But once it was there, it became a central part of our development process.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/92248", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/30622/" ] }
92,337
There are lots of other APIs I need to use besides the Selenium test tool to be able to get tests working. If I don't use them for just one week, my mind loses all of them. How is it possible to remember zillions of APIs?
You don't have to memorize a zillion functions. You just have to know how to look stuff up in the API documentation.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/92337", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/21308/" ] }
92,339
What are your rules of thumb for when to use structs vs. classes? I'm thinking of the C# definition of those terms but if your language has similar concepts I'd like to hear your opinion as well. I tend to use classes for almost everything, and use structs only when something is very simplistic and should be a value type, such as a PhoneNumber or something like that. But this seems like a relatively minor use and I hope there are more interesting use cases.
The general rule to follow is that structs should be small, simple (one-level) collections of related properties, that are immutable once created; for anything else, use a class. C# is nice in that structs and classes have no explicit differences in declaration other than the defining keyword; so, if you feel you need to "upgrade" a struct to a class, or conversely "downgrade" a class to a struct, it's mostly a simple matter of changing the keyword (there are a few other gotchas; structs can't derive from any other class or struct type, and they can't explicitly define a default parameterless constructor). I say "mostly", because the more important thing to know about structs is that, because they are value types, treating them like classes (reference types) can end up a pain and a half. Particularly, making a structure's properties mutable can cause unexpected behavior.

For example, say you have a class SimpleClass with two properties, A and B. You instantiate a copy of this class, initialize A and B, and then pass the instance to another method. That method further modifies A and B. Back in the calling function (the one that created the instance), your instance's A and B will have the values given to them by the called method. Now, you make it a struct. The properties are still mutable. You perform the same operations with the same syntax as before, but now, A and B's new values aren't in the instance after calling the method. What happened? Well, your class is now a struct, meaning it's a value type. If you pass a value type to a method, the default (without an out or ref keyword) is to pass "by value"; a shallow copy of the instance is created for use by the method, and then destroyed when the method is done, leaving the initial instance intact.

This becomes even more confusing if you were to have a reference type as a member of your struct (not disallowed, but extremely bad practice in virtually all cases); the class instance would not be cloned (only the struct's reference to it), so changes to the struct's own fields would not affect the original, but changes made through the struct's class-type member WILL affect the instance seen by the calling code. This can very easily put mutable structs in very inconsistent states that can cause errors a long way away from where the real problem is.

For this reason, virtually every authority on C# says to always make your structures immutable; allow the consumer to specify the properties' values only on construction of an object, and never provide any means to change that instance's values. Readonly fields, or get-only properties, are the rule. If the consumer wants to change the value, they can create a new object based on the values of the old one, with the changes they want, or they can call a method which will do the same. This forces them to treat a single instance of your struct as one conceptual "value", indivisible and distinct from (but possibly equatable to) all others. If they perform an operation on a "value" stored by your type, they get a new "value" which is different from their initial value, but still comparable and/or semantically equatable.

For a good example, look at the DateTime type. You cannot assign any of the fields of a DateTime instance directly; you must either create a new one, or call a method on the existing one which will produce a new instance. This is because a date and time are a "value", like the number 5, and a change to the number 5 results in a new value that is not 5. Just because 5+1 = 6 doesn't mean 5 is now 6 because you added 1 to it. DateTimes work the same way; 12:00 does not "become" 12:01 if you add a minute, you instead get a new value 12:01 that is distinct from 12:00. If this is a logical state of affairs for your type (good conceptual examples that aren't built into .NET are Money, Distance, Weight, and other quantities of a UOM where operations must take all parts of the value into account), then use a struct and design it accordingly. In most other cases where the sub-items of an object should be independently mutable, use a class.
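The C# mechanics above don't translate directly to other languages, but the core advice (model a quantity as an immutable value whose operations return new values) does. A hedged illustration of just that pattern, sketched in Python with made-up names rather than C#:

from dataclasses import dataclass

@dataclass(frozen=True)
class Money:
    amount_cents: int
    currency: str

    def add(self, other):
        if self.currency != other.currency:
            raise ValueError("cannot add different currencies")
        return Money(self.amount_cents + other.amount_cents, self.currency)

price = Money(500, "USD")
total = price.add(Money(150, "USD"))   # a new value; `price` is untouched
# price.amount_cents = 0               # would raise FrozenInstanceError

Like the DateTime example, "changing" a Money never mutates it; you always get a distinct new value that is still comparable to the old one.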
{ "source": [ "https://softwareengineering.stackexchange.com/questions/92339", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/3124/" ] }
92,380
When reading various Stack Overflow questions and others' code, the general consensus on how to design classes is that they should be closed. This means that by default in Java and C# everything is private, fields are final, some methods are final, and sometimes classes are even final. The idea behind this is to hide implementation details, which is a very good reason. However with the existence of protected in most OOP languages and polymorphism, this doesn't work. Every time I wish to add or change functionality to a class I'm usually hindered by private and final placed everywhere. Here implementation details matter: you're taking the implementation and extending it, knowing full well what the consequences are. However because I can't get access to private and final fields and methods, I have three options:
• Don't extend the class, just work around the problem, leading to code that's more complex
• Copy and paste the whole class, killing code reusability
• Fork the project
Those aren't good options. Why isn't protected used in projects written in languages that support it? Why do some projects explicitly prohibit inheriting from their classes?
Designing classes to work properly when extended, especially when the programmer doing the extending doesn't fully understand how the class is supposed to work, takes considerable extra effort. You can't just take everything that's private and make it public (or protected) and call that "open." If you allow someone else to change the value of a variable, you have to consider how all the possible values will affect the class. (What if they set the variable to null? An empty array? A negative number?) The same applies when allowing other people to call a method. It takes careful thought. So it's not so much that classes shouldn't be open, but that sometimes it's not worth the effort to make them open. Of course, it's also possible that the library authors were just being lazy. Depends on which library you're talking about. :-)
{ "source": [ "https://softwareengineering.stackexchange.com/questions/92380", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/66/" ] }
92,393
So, I'm starting a brand-new project in Java, and am considering using Spring. Why am I considering Spring? Because lots of people tell me I should use Spring! Seriously, any time I've tried to get people to explain what exactly Spring is or what it does, they can never give me a straight answer. I've checked the intros on the SpringSource site, and they're either really complicated or really tutorial-focused, and none of them give me a good idea of why I should be using it, or how it will make my life easier. Sometimes people throw around the term "dependency injection", which just confuses me even more, because I think I have a different understanding of what that term means. Anyway, here's a little about my background and my app: Been developing in Java for a while, doing back-end web development. Yes, I do a ton of unit testing. To facilitate this, I typically make (at least) two versions of a method: one that uses instance variables, and one that only uses variables that are passed in to the method. The one that uses instance variables calls the other one, supplying the instance variables. When it comes time to unit test, I use Mockito to mock up the objects and then make calls to the method that doesn't use instance variables. This is what I've always understood "dependency injection" to be. My app is pretty simple, from a CS perspective. Small project, 1-2 developers to start with. Mostly CRUD-type operations with a bunch of search thrown in. Basically a bunch of RESTful web services, plus a web front-end and then eventually some mobile clients. I'm thinking of doing the front-end in straight HTML/CSS/JS/JQuery, so no real plans to use JSP. Using Hibernate as an ORM, and Jersey to implement the webservices. I've already started coding, and am really eager to get a demo out there that I can shop around and see if anyone wants to invest. So obviously time is of the essence. I understand Spring has quite the learning curve, plus it looks like it necessitates a whole bunch of XML configuration, which I typically try to avoid like the plague. But if it can make my life easier and (especially) if it can make development and testing faster, I'm willing to bite the bullet and learn Spring. So please. Educate me. Should I use Spring? Why or why not?
What does the Spring framework do? Should I use it? Why or why not?

Spring is a framework that helps you to "wire" different components together. It is most useful in cases where you have a lot of components and you might decide to combine them in different ways, or wish to make it easy to swap out one component for another depending on different settings or environments.

This is what I've always understood "dependency injection" to be.

I would suggest a different definition: "Design your objects so that they rely on an outside force to supply them with what they need, with the expectation that these dependencies are always injected before anybody asks them to start doing their usual jobs." Compare that against: "Each object is responsible for going out and finding everything and everybody it needs as it starts up."

it looks like it necessitates a whole bunch of XML configuration

Well, most of the XML (or annotation-based) stuff is telling Spring stuff like:
• When someone asks for "HammerStore", I want you to create an instance of example.HammerStore and return it. Cache the instance for next time, since there only needs to be one store.
• When someone asks for "SomeHammer", I want you to ask yourself for a "HammerStore", and return the result of the store's makeHammer() method. Do not cache this result.
• When someone asks for "SomeWrench", I want you to create an instance of example.WrenchImpl, use the configuration setting gaugeAmount and put it into the instance's setWrenchSize() property. Do not cache the result.
• When someone asks for "LocalPlumber", I want you to create an instance of example.PlumberImpl. Put the string "Pedro" into its setName() method, put a "SomeHammer" into its setHammer() method, and put a "SomeWrench" into its setWrench() method. Return the result, and cache the result for later since we only need one plumber.

In this way, Spring lets you connect components, label them, control their lifecycles/caching, and alter behavior based on configuration.

To facilitate [testing] I typically make (at least) two versions of a method: one that uses instance variables, and one that only uses variables that are passed in to the method.

That sounds like a lot of overhead for not a lot of benefit for me. Instead, make your instance variables have protected or package visibility, and locate the unit tests inside the same com.mycompany.whatever package. That way you can inspect and change the instance variables whenever you want during testing.
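To make the wiring idea concrete without any framework, here is a toy container sketched in Python. It is emphatically not Spring (Spring does this declaratively in Java via XML or annotations, plus a great deal more); the class names just mirror the hammer/plumber example above.

class Hammer: ...

class HammerStore:
    def make_hammer(self):
        return Hammer()

class Plumber:
    def __init__(self, name, hammer):
        self.name, self.hammer = name, hammer

class Container:
    """Maps a name to a recipe for building it, optionally caching the result."""
    def __init__(self):
        self._recipes = {}      # name -> (factory, cache_flag)
        self._singletons = {}

    def register(self, name, factory, cache=False):
        self._recipes[name] = (factory, cache)

    def get(self, name):
        if name in self._singletons:
            return self._singletons[name]
        factory, cache = self._recipes[name]
        obj = factory(self)
        if cache:
            self._singletons[name] = obj
        return obj

c = Container()
c.register("HammerStore", lambda c: HammerStore(), cache=True)   # one store only
c.register("SomeHammer", lambda c: c.get("HammerStore").make_hammer())
c.register("LocalPlumber",
           lambda c: Plumber("Pedro", c.get("SomeHammer")), cache=True)

pedro = c.get("LocalPlumber")   # the plumber never looked up his own hammer

The point is the last comment: the Plumber declares what it needs and something else supplies it, which is all "dependency injection" really means.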
{ "source": [ "https://softwareengineering.stackexchange.com/questions/92393", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/24173/" ] }
92,508
I work at a place that is CVS-crazy and Bugzilla-nuts. There are so many branches off each release that one cannot count them. Everyone is constantly auto-merging. There is no fluidity at this job. Everything feels lock-step . It takes 25 steps even for a simple thing. It's not like being on a factory production line: it's like setting up a factory myself every day. Example situation: To fix a single bug, first I obtain a clean, new virtual machine. Then I create a branch for that single bug fix, based on another branch described in the Bugzilla report. I install the branch on the machine, set that up. I fix the bug. I check it in, leaving it and the machine for others to test with. Then I have to go into the bug control software and explain what I did and write a test case, with all the steps. Eventually someone else merges it with a release. No matter how tiny the bug is, I have to do all of these things. Sometimes people combine work on multiple bugs, but as I said there are so many branches that this is hardly possible. At any other job, I'd just go in and fix the bug. I barely even remember using SCM, although every job I've had has used it: that's because at every other job, they somehow kept it out of the way . Is there a point at which the process gets in the way and becomes an end unto itself? Is that even engineering?
Is there a point at which the process gets in the way and becomes an end unto itself? Heavy processes are common, unfortunately. Some people - especially management - religiously imagine that processes produce products. So they overdo the processes and forget that it's really a handful of hard-working, smart people who actually create the products. For upper management, it's frightening to even think that their business is in the hands of a few geeks, and so they close their eyes to reality and think of their dear "process" instead, which gives them the illusion of control. That's why agile startups with a handful of good engineers can beat big, established corporations, whose workers spend 95% of their energy on process and reporting. Some examples of once-small startups that beat their competitors and/or created completely new markets:
• Apple (the Apple I was created by 1 engineer; there were 3 men at the company back then).
• Google (created originally by 2 programmers).
• Facebook (1-man effort originally).
• Microsoft (2-man company in 1975).
One could easily say that these are just outliers, extreme exceptions, and to do something serious, you'd better be a big, established corporation. But the list goes on. And on. It's embarrassingly long. Almost every major corporation today started as a garage shop which did something unusual. Something weird. They were doing it wrong. Do you think they were doing it according to the process?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/92508", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/30192/" ] }
92,619
I have a potential customer who has an idea for an iPad application but is unable to find sufficient funding for it. One idea that came up is that I do the work either for free or for a minor fee and then receive a percentage of the income from the App Store. How do I decide what percentage is realistic? How is this affected by the price in the App Store, and how do I protect myself from the scenario where the customer suddenly decides to offer the app for free?
Ideas are cheap. It's implementation that really matters. If he's not commissioning you to create the app for him and paying you what your time is worth, I would give him a very meager cut of the profits. Certainly less than 50%.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/92619", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/11921/" ] }
92,862
A little background first. I'm a project manager at a medium-sized company. I started as a CS major and had a little exposure to programming, but after a few months I knew it wasn't my path, so I switched over to management. That proved to be a good decision, and after graduating I've worked in software management at various companies (for 5 years now). Recently, we had a very painful project. It was the worst of the worst, with many mistakes both on our side and on the customer's side, and we just barely ended it without losses. It led to many frustrating situations, one of which escalated to the point where one of our senior developers left the company after a vocal argument with us (the management). This was a red flag for me: I did something terribly wrong. (For the record, the argument was about several mistaken time estimates.) I searched many places for answers and a friend pointed me to this site. There are many questions here about frustrations with management. I can understand that the general bad experiences lead to a general reluctance against "those guys in the suits". I'm that guy in the suit. It may not look like it, but all I want is a successful project, and with limited resources that takes painful decisions. That's my job. One of the things the aforementioned senior developer complained about was work equipment. Frankly, I had no idea that the computers we had were not suited for working. After this, I asked many programmers and the general consensus was that we need better machines. I have since fixed that, but there was obviously a huge communication gap between me and the programmers. Some of the most brilliant developers are the most shy and silent people. I know that, and it was never a problem during an interview. People are different, and have strengths in different areas. The case of the underpowered PCs is just one of many that led me to think that there is a communication issue. How can I improve communication with programmers without being intimidating and repetitive? What I'm hoping is that people don't complain about good things. If you love your workplace and love (or at least like :)) your manager, please tell me about them. What are they doing right? Similarly, if you hate it, please describe in detail why. I'm looking for answers about improving communication because I think that is my problem, but I might be wrong.
Wow! Thanks for asking. Technically, like you, I guess I'm management, since I spend much more time communicating and leading teams than I do writing code... but here's my take from both ends of the management horizon. Whether I'm a developer or a manager working for another manager, here's some stuff that helps in my communication with my management:

Why? is a very important question - almost any factual answer has a "why" behind it, and that "why" may well be more important than the actual question. There are a few tangents to this:
• The Developer Why - Developers will have a lot of answers that make absolutely no sense to management. I certainly did, and one of the ways I got into management was being really good at explaining the team's "whys" in terms management could understand. If you don't have a "speaker for the geeks" on hand, you can learn to speak geek by restating their answers to the why question in more commonly understood metaphors. Keep at it until you both understand and agree on what's going on.
• The Management Why - your "why" is just as important. Why do you need the time estimates? What are you using them for? How screwed will we be as a company if they are too high or too low? This is stuff that you as a manager probably have keen insights into, but this is all voodoo to the developer. The trick is, the developer may not ask. Management has asked him for something, and he's going to do the best he can to be accurate, timely and thoughtful - but if he doesn't know the "why" he may optimize in ways you would rather he didn't. Offer your why, and ask him to do the same thing - restate the answer in his own terms.
• Similarly - go into the why of the business - often developers care but have no direct knowledge of how the business development works - having someone volunteer this information is both motivating and enlightening.

About time estimates specifically - I've had to do a ton, and I have absolutely profited from telling my development team "I need this because I am going to ask for more money to pay our salaries. I will trust what you tell me and I will use your numbers... that means that if you lowball on me, we are all screwed, because I will not be able to ask for money a second time - we have to live with what we propose". With that context, developers changed from low estimates that attempted to show me how confident and brilliant they were to high estimates that came a lot closer to real expectation setting.

No one is wrong - The "why" question goes along with a corollary - asking "why" instead of saying "that's outrageous! No way!" keeps the conversation flowing. Sometimes there is a severe disconnect between what someone thinks is being asked and what the asker is asking. My best management has been horribly surprised by my answers, and when surprised, they blink in astonishment and then ask "why do you say that?". They do not say (immediately) "you need to change it". I have reduced numbers on proposals to meet a competitive goal, but only after talking intensely about how we could change the scene and create a different context for the question.

Listen for ambient noise, word choice and the space between the words - Here's a bunch of things I have liked and stolen to use myself:
• Hang out in the work area, do something productive of your own (don't try to get into developer work, they know you're not a developer) and just listen. How does the team solve problems? What are their big problems? You will never hear the real skinny on their direct assessment of you or management at large, but you may get a really good sense of where the problem areas are. Make sure you're doing something of your own that is productive. No one likes spying, but unless morale is so low that you can't be near them without everyone evacuating, productivity within proximity should be tolerable.
• Look for word choices - they are often as important as the words themselves. When I've used particularly positive or negative words, my management frequently asks me why I chose those words when it's a situation they aren't familiar with.
• Look for pauses, gaps and body language. If there's a power distance between yourself and the developers (and it sounds like there is) they may not want to contradict or confront you. But the basic instinct to say "hey, you're wrong" usually manifests itself somewhere.

Open up to as many communication mediums as possible - be ready to chat in person, on the phone, by email, by IM - anything and everything to establish the flow of communication. People are so diverse that just one trick won't work. And I see it as the manager's job to be the multi-format communicator, not the developer's.

Make it worthwhile talking to you - If someone tells you about a problem and maybe a possible solution, he should, and probably will, accept that you are the manager and therefore may decide in favor of a different solution, or no solution at all because you don't think it's worth the trouble. But after the third time this happens, especially if it happens without an explanation, about 99% of people will stop telling you anything.

And here's one that's incredibly hard for me, but has worked great when I can do it - be aware of the difference between introverts and extroverts. Chances are that you are an extrovert - that's why your job seemed good and a development position did not. Developers are, for the most part, introverts. "Introvert" does not mean "can't communicate", but it means that their pattern, process and velocity are significantly different and the urge to communicate incessantly is virtually non-existent. Plan in the time and quiet (but collocated) space to let introvert-based thoughts come out. Many of my introvert friends tell me they are just waiting for me to "shut up for like 5 minutes" so they can put together a thought and respond. Here are a few great articles on the same thing - 5 things extroverts should know about introverts, and Rands in Repose on the Nerd Cave - a particularly developer-tastic example of what's great for introverts. Rands is pretty fantastic, by the way. He's a geek himself, so he comes at it from a developer focus, which can be off-putting if that's not your style, but he's funny and has some really good insights on team development.

I think the #1 things I have loved about my favorite managers were:
• They were as deeply committed and excited about the project as I was (if not more).
• I never had a doubt that they had my back - I knew with certainty that when they were in front of the next level of authority, I (or my peers) would never be the scapegoat. It would always be a group failure, if there was failure.
• I was given ownership of something significant and appropriate for my skills at the time, but with enough resources to expand my skills and get the job done.
• They saw me as both an individual and as part of the team - they were actively engaged in knowing my strengths and weaknesses and working to help me play on my strengths and augment my weaknesses.
• They were aware of my personal goals and interested in incorporating them as much as they could.
• They were upfront when making me happy couldn't and wouldn't be a priority. There's real value in hearing "I know you hate this type of work, but I need you to do it - here's how it won't be forever...".
• There was always time in a week (maybe not at the instant) to explain the big picture.
• There was near-constant feedback and status, with no finger pointing but plenty of recognition for individual work.
• There was always the truth. If something was sensitive and couldn't be discussed, they said so point blank. If something was uncertain, they gave a level of confidence.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/92862", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/31332/" ] }
93,124
I'm thinking of using an entity-attribute-value (EAV) model for some of the stuff in one of my projects, but all questions about it on Stack Overflow end up with answers calling EAV an anti-pattern. I'm wondering whether it is really that wrong in all cases. Take a shop product entity: it has common features, such as name, description, image, and price, that take part in logic in many places, and it has (semi)unique features, e.g. a watch and a beach ball would be described by completely different aspects. So I think EAV would fit for storing those (semi)unique features. All this assumes that for showing the product list there is enough info in the product table (meaning no EAV is involved), and the data saved using EAV is only used when showing one product, comparing up to 5 products, etc. I've seen such an approach in Magento commerce, and it is quite popular, so are there cases when EAV is reasonable?
https://web.archive.org/web/20140831134758/http://www.dbforums.com/database-concepts-design/1619660-otlt-eav-design-why-do-people-hate.html EAV gives a flexibility to the developer to define the schema as needed and this is good in some circumstances. On the other hand it performs very poorly in the case of an ill-defined query and can support other bad practices. In other words, EAV gives you enough rope to hang yourself and in this industry, things should be designed to the lowest level of complexity because the guy replacing you on the project will likely be an idiot.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/93124", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/23435/" ] }
93,138
I am thinking of doing a short presentation at work about using Stack Overflow as a resource for your day job. What is your experience doing this? Would you deem it a valid resource to tell your colleagues about, or is it similar to telling them about Google as a resource? Is there a better way of doing it? I was leaning toward the asking-questions side of Stack Overflow rather than answering, to avoid the you-shouldn't-be-doing-this-on-work-time argument. Just as a follow-up: originally I didn't want to make the question too specific to my own case. My presentation will only be a quick four-minute talk, which I will repeat over an hour to different groups. I may ask a question on Stack Overflow before the talk and refer to it during the presentation. Hopefully I will get some activity during the hour. I am also going to talk briefly about some of the other Stack Exchange sites that would fit the audience, as they are not all developers. I think Super User, Server Fault and Programmers should work well. I will not be doing the presentation for another couple of months as it has been rescheduled, but I will update on how I got on.
Key points:
• Registration is easy.
• It's free.
• Quality answers. I would suggest your group create a question during the presentation (do a search first). If you don't get a response before the presentation/meeting is over, keep everyone posted via email and follow up if necessary. They'll be impressed with the quality and speed of the responses. Compare to a Google search. You could also prepare a question in advance.
It really will be up to them whether they'll use it or not. If you find the group in a major debate, try putting it on Stack Overflow as a follow-up to your presentation. Everyone may not see the need instantly. Keep at it. My current company was using a paid site. I never bothered to get an account because of Stack Overflow. Time can be perceived as wasted if you spend too much time on Stack Overflow. I'd rather have people get involved. You learn just as much by answering questions, in my opinion. It just may prompt you about an issue you never considered.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/93138", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/64/" ] }
93,201
When talking about IDE software, or about what a programming language allows you to do or not at the source level, I often use the word IntelliSense, which has a precise meaning in the Microsoft world but is inappropriate when talking to people who aren't necessarily familiar with Visual Studio. In this case, what is the appropriate term to use? I usually use the term "auto-completion", but it doesn't always work: IntelliSense includes auto-completion, but it also provides documentation and hints.
We have always called it "Auto Code Completion" or just "Code Completion". I have heard the term "code hinting" used as well.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/93201", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/6605/" ] }
93,227
I'm currently working on a project with a new programmer. How do I help him speed up his work? He often asks me questions, and I pair programmed with him in backbone.js (a part of the project). Now I want him to handle the project himself so I can concentrate on other things to speed up the process. He doesn't want to Google things or ask on a forum if a problem comes up. He just comes to me. What should he do? What should I do? When I force him, then he does things quickly. How can I motivate him to do more work on his own?
Say " I'm a bit busy right now, you can ask on stackoverflow.com if you're really stuck. " Eventually he will hopefully get the clue. Also, next time he comes to your desk say "Hmm I don't know, let's Google that and see..." or "Let's check the API docs." The combination of these two has worked for me with co-op students in the past - eventually they see how I search and find information, then they learn how to do it as well.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/93227", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/7702/" ] }
93,302
Yesterday, I rolled out a v1.0 release of a Web project I've spent about 6 weeks working on (on and off, that is). I haven't kept any exact records of my time, but based on my experience I would estimate that out of all the time I spent programming, half of it was spent debugging. I estimate that to be a good 15-20 hours spent debugging, which to me is precious time that could have been better spent writing new code or finishing the project earlier. It also doesn't help that I'll be a freshman in college in 5 weeks. The thing is, I feel bad for spending all that time debugging. All that time spent debugging makes me realize that I made some pretty stupid mistakes while developing my project, mistakes that cost me a damn good amount of time to fix. How can I prevent this from happening in the future? I don't want to spend 50% of my time debugging; I'd rather spend 10% debugging and the rest writing new code. What are some techniques I can try to help me reach this goal?
You're asking for the Holy Grail of software engineering, and no one has "the" answer to this question yet. What is essential is that you track the types of errors that you're making and then do an analysis of those errors to determine if there is a common trend. Root cause analysis is the formal name for this type of introspection, and there is plenty of material on the web regarding it. Professionals use a bug tracking system so that they can (1) know what needs to be fixed, but also (2) analyze what had to be fixed after-the-fact. You don't need to be so formal -- just keeping a tally in a notebook may be fine for you. Design Stage Defects If you find that most of your errors come from a misunderstanding of the problem statement, or if you keep finding you've chosen the wrong algorithm or path to follow in solving your problems, you have problems in the design stage. It would behoove you to take more time at the beginning of the project and write out exactly what needs to be done and how it should do it. Review this work carefully and revisit the original problem and determine if you really are tackling it in the right way. An extra hour or three at the start may save you many hours down the road. Coding Errors If your design is solid, but you're constantly fighting the language that you're coding with, get yourself some tools which will analyze your code for you and warn you early and often that you're making mistakes. If you're programming in C, turn on all compiler warnings, use a semantic checker like lint , and use a tool like valgrind to catch common dynamic-memory related issues. If you're programming Perl, turn on strict and warnings and heed what it says. No matter which language you're using, there probably exist many tools out there to help catch common mistakes long before you reach the debugging stage. Integration Stage Defects As you develop your code following good modularity practices, you have to begin gluing the separate pieces together. For example, different sections of your code may have to do with user input, database interaction, data display, algorithms/logic, and each of these are built relatively independent of one another (that is, you tend to concentrate on the section at hand rather than worrying about integration with everything else). Here is where test driven development (TDD) comes in very handy. Each module of your code can have tests which verify that they work according to how they were designed. These tests should either be written first or very early in the process so that you can have a set of "helpers" to keep you honest. When you begin making everything work together, and you find that you're having to change how this or that is implemented or interacts with another sub-system, you can fall back on your tests to make sure that what you've done to make it all work together doesn't break the correctness of the code. And So On... Pick up some books on software engineering and practical coding techniques, and you'll learn many different ways of making development less chaotic and more reliable. You'll also find that just plain old experience -- earn a degree from the school of hard knocks -- will get you into shape as well. What almost everything boils down to is that a little time and work upfront pays off in huge dividends later in the development/release process. The fact that you've noticed these issues so early in your career speaks well for your future, and I wish you the best of luck.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/93302", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/31284/" ] }
93,322
Is it possible to do the following in C# (or in any other language)? I am fetching data from a database. At run time I can compute the number of columns and data types of the columns fetched. Next I want to "generate" a class with these data types as fields. I also want to store all the records that I fetched in a collection. The problem is that I want to do both steps 1 and 2 at runtime. Is this possible? I am using C# currently but I can shift to something else if I need to.
Use CodeDom. Here's something to get started using System; using System.Collections.Generic; using System.Linq; using Microsoft.CSharp; using System.CodeDom.Compiler; using System.CodeDom; namespace Test { class Program { static void Main(string[] args) { string className = "BlogPost"; var props = new Dictionary<string, Type>() { { "Title", typeof(string) }, { "Text", typeof(string) }, { "Tags", typeof(string[]) } }; createType(className, props); } static void createType(string name, IDictionary<string, Type> props) { var csc = new CSharpCodeProvider(new Dictionary<string, string>() { { "CompilerVersion", "v4.0" } }); var parameters = new CompilerParameters(new[] { "mscorlib.dll", "System.Core.dll"}, "Test.Dynamic.dll", false); parameters.GenerateExecutable = false; var compileUnit = new CodeCompileUnit(); var ns = new CodeNamespace("Test.Dynamic"); compileUnit.Namespaces.Add(ns); ns.Imports.Add(new CodeNamespaceImport("System")); var classType = new CodeTypeDeclaration(name); classType.Attributes = MemberAttributes.Public; ns.Types.Add(classType); foreach (var prop in props) { var fieldName = "_" + prop.Key; var field = new CodeMemberField(prop.Value, fieldName); classType.Members.Add(field); var property = new CodeMemberProperty(); property.Attributes = MemberAttributes.Public | MemberAttributes.Final; property.Type = new CodeTypeReference(prop.Value); property.Name = prop.Key; property.GetStatements.Add(new CodeMethodReturnStatement(new CodeFieldReferenceExpression(new CodeThisReferenceExpression(), fieldName))); property.SetStatements.Add(new CodeAssignStatement(new CodeFieldReferenceExpression(new CodeThisReferenceExpression(), fieldName), new CodePropertySetValueReferenceExpression())); classType.Members.Add(property); } var results = csc.CompileAssemblyFromDom(parameters,compileUnit); results.Errors.Cast<CompilerError>().ToList().ForEach(error => Console.WriteLine(error.ErrorText)); } } } It creates an assembly 'Test.Dynamic.dll' with this class in it namespace Test.Dynamic { public class BlogPost { private string _Title; private string _Text; private string[] _Tags; public string Title { get { return this._Title; } set { this._Title = value; } } public string Text { get { return this._Text; } set { this._Text = value; } } public string[] Tags { get { return this._Tags; } set { this._Tags = value; } } } } You could also use dynamic features of C# DynamicEntity class, no need to create anything at runtime public class DynamicEntity : DynamicObject { private IDictionary<string, object> _values; public DynamicEntity(IDictionary<string, object> values) { _values = values; } public override IEnumerable<string> GetDynamicMemberNames() { return _values.Keys; } public override bool TryGetMember(GetMemberBinder binder, out object result) { if (_values.ContainsKey(binder.Name)) { result = _values[binder.Name]; return true; } result = null; return false; } } And use it like this var values = new Dictionary<string, object>(); values.Add("Title", "Hello World!"); values.Add("Text", "My first post"); values.Add("Tags", new[] { "hello", "world" }); var post = new DynamicEntity(values); dynamic dynPost = post; var text = dynPost.Text;
{ "source": [ "https://softwareengineering.stackexchange.com/questions/93322", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/22098/" ] }
93,670
While Sun's Java code convention suggests putting the line break before the operator, many other guidelines disagree with it. I do not see any obvious pros and cons, so are there advantages to using one of these styles over the other?

String longVarName = a + b + c
    + d + e + f;

vs

String longVarName = a + b + c +
    d + e + f;
I can imagine readability being an argument:

result = longidentifier +
         short -
         alittlelonger -
         c;

versus

result = longidentifier
         + short
         - alittlelonger
         - c;

In the second example the operators are nicely lined up and you can easily see with which sign the variable enters into the equation. I think this also makes sense for binary operators, but with bracing etc., you should just do whatever is clearer.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/93670", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/10981/" ] }
93,684
I am a junior software developer and I have been researching some of the practices in the industry to make myself better. I have been looking at unit testing briefly and I cannot see how the extra time spent writing a ton of unit tests is going to make my code better. To put things into perspective: the projects I work on are small, I am usually the only developer on the project, and the projects are all bespoke applications. The thing I don't get the most is, how can a unit test tell me whether my calculate price function (which can depend on things like the day of the week and bank holidays etc, so assume 20-40 lines for that function) is correct? Would it not be quicker for me to write all the code and then sit through a debugging session to test every eventuality of the code? Any examples that are forms-based would be appreciated (the examples I have seen from the MSDN videos are all MVC and a waste of time IMHO).
The thing I don't get the most is, how can a unit test tell me whether my calculate price function (which can depend on things like the day of the week and bank holidays etc, so assume 20-40 lines for that function) is correct? Would it not be quicker for me to write all the code and the sit through a debugging session to test every eventually of the code? Let's make this the example. If you sit down and write that (let's call it) 30-line method, you're going to try to think of all the possibilities, write them into the method, then debug it, again trying to take all possibilities into consideration. If you've gone as far as checking for days of the week and got to bank holidays, and find a bug, then you need to change the method, and it would be easy to change it in such a way that it now works correctly for bank holidays, but not for weekends - but since you already checked for weekends and it worked, you might forget to re-test. That's just one scenario for bugs to creep into your product with your approach. Another problem is that it can be easy to add code for conditions that never occur in fact, out of excessive caution. Any code you don't need adds complexity to your project, and complexity makes debugging and other maintenance tasks harder. How does TDD protect you against these issues? You start by writing the simplest case, and the simplest code that will pass it. Say your default price is $7. I'll write this Java-ish, 'cause it's handy, but the approach works in any language: public void testDefaultPrice() throws Exception { assertEquals(7, subject.getPrice()); } public int getPrice() { return 7; } Simple as can be, right? Incomplete, but correct as far as we've gone. Now we'll say the weekend price needs to be $9. But before we can get there, we need to know what days constitute weekend. public void testWeekend() throws Exception { assertTrue(Schedule.isWeekend(Weekday.Sunday)); } public boolean isWeekend(Weekday.Day day) { return true; } So far so good - and still very incomplete. public void testWeekend() throws Exception { assertTrue(Schedule.isWeekend(Weekday.Sunday)); assertTrue(Schedule.isWeekend(Weekday.Saturday)); assertFalse(Schedule.isWeekend(Weekday.Monday)); } public boolean isWeekend(Weekday.Day day) { return day == Weekday.Sunday || day == Weekday.Saturday; } Add as many assertions as you need to be confident you have driven the method to correctness. Now back to our original class: public void testDefaultPrice() throws Exception { assertEquals(7, subject.getPrice()); } public void testWeekendPrice() throws Exception { subject.setWeekday(Weekday.Sunday); assertEquals(9, subject.getPrice()); } public int getPrice() { if (Schedule.isWeekend(day)) return 9; return 7; } And so it goes. Please notice, also, how we are test-driving the design of our code here, and how much better it is because of it. With your approach, most programmers would have built the weekend-testing code into the body of getPrice() , but that's the wrong place for it. Plenty of code might want to know whether a day is on the weekend; this way you have it in a single, well-tested place. This promotes reuse and enhances maintainability.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/93684", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/25029/" ] }
93,727
Many of my Java books are 5-10 years old. Does it still help to read them, or should I use something published within the last 2 years?
If the book is about the language itself, lose it and get a new one. If the book is about programming as a subject (the art, the discipline, the techniques, whatever), then it is definitely worth reading. The best programming books I have were written over 10 years ago.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/93727", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/29032/" ] }
93,806
I go to a university where students are allowed to make their semester schedule based on the information about the subjects they are going to take, that is, the hours that the courses are available, the professors, and the remaining room for other people. Making these schedules by hand was a very difficult/boring task. I wrote a pretty nifty Python program that automates this process. You pick the codes for the subject you're going to take and filter out the professors you don't want. Then the program outputs all the possibilities there are if there aren't time conflicts. This program helped a lot of students. The time to make a schedule reduced from 2 days to less than 30 seconds! Now here begin the problems. My family and all the people that used the program tell me to patent the program before someone steals the idea (that could happen in my country). But I question that myself. Is it necessary to patent a web scraper mixed with a backtracking engine? It was difficult to make the program because I didn't know a lot of things, but now that I have finished, I feel that it would be very stupid/immature to patent such a thing. But on the other hand, I don't want someone else to get the credit for it. What do you think?
In your case, I have a strong vote "against". Computer-aided schedule making is a problem as old as computers, and one of the favorite thesis subjects given out to students to solve. Chances are more than good that there is prior art on your patent. The target audience, as you say, is students. Piracy is rampant in this customer base, so no matter what -legal- protection you apply, you'd better implement some awesome DRM (...on a Python script?!) Software patents are recognized almost nowhere outside the USA. There is nothing against a foreign company picking up your patent and selling it locally. And a patent application requires a pretty detailed description of the mechanism in question, and is totally public, meaning you practically hand them the instructions. Considering the costs of a patent application (and the good chance of having it rejected), the chance of a return on investment is slim. Software patents are universally considered evil by IT people. You will lose a lot of professional respect in the developer community for patenting software. You'd be hard-pressed to come up with a business model to have people pay reasonable money for a piece of software they use for 30s twice a year. Edit: Let me add a solution to most of your problems: software as a service. Make a web app that performs your task; make it accessible through micropayments. The piracy problem vanishes, it can't be trivially copied so someone would need to "reinvent" it to bypass your (lack of) patent protection, a small "per use" fee synergizes with the "30 seconds twice a year" usage pattern, and you're skipping a lot of distribution headaches.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/93806", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/31632/" ] }
93,826
Possible Duplicate: In plain English, what is recursion? What is the best way to explain "recursion" to an 8-year-old kid? I tried with the Fibonacci series but I failed.
Well, recursion is actually pretty simple for kids to grasp. Don't try it with mathematics or whatever the other people here are suggesting. They are too young to understand it. It's too abstract and boring for them. Instead: show them a picture of a painter who is painting a picture of a painter who is painting a picture... Something like this: There are probably even better examples to be found on the web. And trust me: they'll understand it in no time. Regardless of the question, I think any child should own a book with paintings by M. C. Escher. It'll be good for their development and creativity. Edit: Lately I have realized that you can explain recursion to children using food, too. Take broccoli or cauliflower, for example: these are fractal vegetables. Tear them apart and you'll find that the smaller parts turn out to look like the big whole you once had, just smaller. This has the advantage that you can teach your child recursion while eating. Don't laugh! Children will remember it better, because it's related to their meal (and thus important to their consciousness) and they can comprehend it. A German term for "comprehend" is "begreifen", which literally means "to touch something in order to understand it". Try it yourself. It's far easier to remember something you have once touched.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/93826", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/16620/" ] }
93,872
I work on a small team, around 10 devs. We have no coding standards at all. There are certain things that have become the norm but some ways of doing things are completely disparate. My big one is indentation. Some use tabs, some use spaces, some use a different number of spaces, which creates a huge problem. I often end up with conflicts when I merge because someone used their IDE to auto format and they use a different character to indent than I do. I don't care which we use I just want us all to use the same one. Or else I'll open a file and some lines have curly brackets on the same line as the condition while others have them on the next line. Again, I don't mind which one so long as they are all the same. I've brought up the issue of standards to my direct manager, one on one and in group meetings, and he is not overly concerned about it (there are several others who share the same view as myself). I brought up my specific concern about indentation characters and he thought a better solution would be to, "create some kind of script that could convert all that when we push/pull from the repo." I suspect that he doesn't want to change and this solution seems overly complicated and prone to maintenance issues down the road (also, this addresses only one manifestation of a larger issue). Have any of you run into a similar situation at work? If so, how did you handle it? What would be some good points to help sell my boss on standards? Would starting a grass roots movement to create coding standards, among those of us who are interested, be a good idea? Am I being too particular, should I just let it go? Thank you all for your time. Note: Thanks everyone for the great feedback so far! To be clear, I don't want to dictate One Style To Rule Them All. I'm willing to concede my preferred way of doing something in favor of what suits everyone the best. I want consistency and I want this to be a democracy. I want it to be a group decision that everyone agrees on. True, not everyone will get their way, but I'm hoping that everyone will be mature enough to compromise for the betterment of the group. Note 2: Some people are getting caught up in the two examples I gave above. I'm more after the heart of the matter. It manifests itself with many examples: naming conventions, huge functions that should be broken up, should something go in a util or service, should something be a constant or injected, should we all use different versions of a dependency or the same, should an interface be used for this case, how should unit tests be set up, what should be unit tested, (Java specific) should we use annotations or external config. I could go on.
A co-worker and I had a similar problem on our team when we first joined (I joined the team first, he joined about a year later). There were no real code standards. We're a MS shop, and even the MS coding standards weren't used. We decided to lead by example. We sat down together and drafted a document that had all of our standards: IDE standards, naming standards, everything we could think of. Then he and I agreed to follow the standard explicitly. I sent an email to the team (from the both of us), and notified them that we had found a lack and we were going to address the lack with our document. We invited critique, ideas, comments, opinions. We received very little in the manner of feedback and only a little push back. He and I immediately started using the standards, and as we brought junior developers onto our projects we introduced them to the standard and they began using it. We have a few leads who were reluctant at first but have slowly begun using the standard for a great many things. We found that many of the junior developers had recognized the same problem but were waiting for someone else to step up and make the decision. Once there was a plan to follow, many were eager to adopt it. The tough nuts to crack are the ones who refuse to code by any other means, and they'll usually code themselves out of the picture in the long run. If you want standards, blaze the path. Come up with a suggested list of standards you think would benefit your team. Submit it to peers and leads for feedback. I'm sure other people on your team have similar ideas, but they probably lack the gumption to do anything about it. As Gandhi said, "You must be the change you wish to see in the world."
{ "source": [ "https://softwareengineering.stackexchange.com/questions/93872", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/31508/" ] }
93,983
I must be missing something. The cost of employing a programmer in my area is $50 to $100 an hour. A top end machine is only $3,000, so the cost of buying a truly great computer every three years comes to $0.50/hour. ($3000/(150 wks * 40 hours)) Do you need a top-end machine? No, the $3000 here is to represent the most that could possibly be spent not the amount that I would expect. That's roughly the cost of a top-end iMac or MacBook (17 inch). So suppose you can save $2000 every three years by buying cheaper computers, and your average developer is making $60. (These are the most charitable numbers that I can offer the bean-counters. If you only save $1000, or $750, it only strengthens my case.) If those cheaper computers only cost you 10 minutes of productivity a day. (Not at all a stretch, I'm sure that my machine costs me more than that.) then over 3 years the 125 lost hours would add up to a loss of $7500. A loss of 1 minute a day ($750) would give a net gain of $1250, which would hardly offset the cost of poor morale. Is this a case of "penny-wise and pound-foolish" or have I oversimplified the question? Why isn't there universal agreement (even in the 'enterprise') that software developers should have great hardware? Edit: I should clarify that I'm not talking about a desire for screaming fast performance that would make my friends envious, and/or a SSD. I'm talking about machines with too little RAM to handle their regular workload, which leads to freezing, rebooting, and (no exaggeration) approximately 20 minutes to boot and open the typical applications on a normal Monday. (I don't shut down except for weekends.) I'm actually slated to get a new machine soon, and it will improve things somewhat. (I'll be going from 2GB to 3GB RAM, here in 2011.) But since the new machine is mediocre by current standards, it is reasonable to expect that it will also be unacceptable before its retirement date. Wait! before you answer or comment: $3000 doesn't matter. If the machine you want costs less than that, that's all the more reason that it should have been purchased. I'm not asking for more frequent upgrades. Just better hardware on the same schedule. So there is no hidden cost of installation, etc. Please don't discuss the difference between bleeding edge hardware and very good hardware. I'm lobbying for very good hardware, as in a machine that is, at worst, one of the best machines made three years ago. $50 - $100 / hour is an estimate of employment cost - not salary. If you work as a contractor it would be the billing rate the contracting agency uses which includes their expenses and profit, the employers Social Sec. contribution, employers health care contribution etc. Please don't comment on this number unless you know it to be unrealistic. Make sure you are providing new content. Read all answers before providing another one.
Many companies are certifiably insane around this. Seriously. If you asked 10,000 tech mangers, "Let's say you paid Danica Patrick $100,000,000. Do you think she could win the Indianapolis 500 by riding a bicycle?", I'm sure not one of them would say, "Yes." And yet a good percentage of these same managers seem to think that highly-paid software developers ought to be just as productive with crappy tools and working conditions as they are with good ones - because, of course, those lazy, feckless programmers are getting paid lots of money and ought to be able to pedal that bicycle faster. Now, what exactly good tools and working conditions consist of depends on the job to be done. People who code the Linux kernel need different kinds of hardware than web site designers. But if the company can afford it, it's crazy not to get people what they need to be as productive as possible. One company I worked for had a 9 GB source code base, primarily in C, and the thing we most needed were fast builds. Unfortunately, we were mostly working with hardware that had been mediocre five years before, so people were understandably reluctant to build much other than what they were working on at the moment, and that took its toll via low productivity, quality problems, and broken builds. The company had money to upgrade the hardware, but was strangely stingy about it. They went out of business last summer after blowing through over $100 million because their two biggest clients dropped them after repeatedly missed deadlines. We were asked one time to suggest ways to improve productivity; I presented the same kind of cost-benefit analysis the OP did. It was rejected because management said, "This must be wrong - we can't possibly be that stupid", but the numbers didn't lie. Another company I worked for had fine computers for the programmers, but insisted everybody work at little tiny desks in a big crowded bullpen with no partitions. That was a problem because a lot of us were working with delicate prototype hardware. There was little room to put it on our desks, and people would walk by, brush it, and knock it on the floor. They also blew through $47 million in VC money and had nothing to show for it. I'm not saying bad tools and working conditions alone killed those companies. But I am saying paying somebody a lot of money and then expecting them to be productive with bad tools and working conditions is a "canary in the coal mine" for a basically irrational approach to business that's likely to end in tears. In my experience, the single biggest productivity killer for programmers is getting distracted. For people like me who work mainly with compiled languages, a huge temptation for that is slow builds. When I hit the "build and run" button, if I know I'll be testing in five seconds, I can zone out. If I know it will be five minutes, I can set myself a timer and do something else, and when the timer goes off I can start testing. But somewhere in the middle is the evil ditch of boredom-leading-to-time-wasting-activities, like reading blogs and P.SE. At the rates I charge as a consultant, it's worth it for me to throw money at hardware with prodigious specs to keep me out of that ditch. And I daresay it would be worth it for a lot of companies, too. It's just human nature, and I find it much more useful to accept and adapt to normal weaknesses common to all primates than to expect superhuman self-control.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/93983", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2329/" ] }
94,007
We've all done it: we've labelled some code (often stuff we've inherited) as "legacy". But it's still used in the production systems - so is it really legacy? And what makes it legacy? Should we shy away from this unwarranted labelling of perfectly functioning code, where the labelling is a pure convenience which allows us to push through new stuff and keep upper management nice and happy? Summary of answers: Looking through the answers I see four general themes. Here is the breakdown as I see it: Any code that has been delivered: 6 Dead systems: 2.5 No unit tests: 2 Developers are not around: 1.5
I'm rather partial to Wikipedia's summary myself: A legacy system is an old method, technology, computer system, or application program that continues to be used, typically because it still functions for the users' needs, even though newer technology or more efficient methods of performing a task are now available. A lot of what other people are describing in their answers are reasons why code becomes "legacy". But the essential question itself is this: But it's still used in the production systems - so is it really legacy? And what makes it legacy? The fact that it is still used in production is precisely what makes it legacy . If the code does not work properly, or is no longer used in production, then that code is "broken" or "retired", respectively. Legacy means that it is still in use and works fine, but incorporates designs or techniques that are no longer in common use. Any code or system that you either (a) would like to upgrade/update, but can't, or (b) are still in the middle of upgrading, is a legacy system. This doesn't mean refactoring or general code cleanup, it means significant changes to the design, possibly using a new framework or even a new platform. There are any number of reasons why systems or code might become legacy: Lack of regular maintenance or software rot . Clearly if the application is not maintained regularly, it will not keep pace with major changes in the software world. This might be due to simple neglect or it might be a deliberate choices based on business priorities or budgetary constraints. Lack of testing. Another answer references a popular author's hyperbolic claim of any code not covered by tests being legacy code. This really isn't an accurate definition but it is a possible root cause; without good tests (automated or manual), developers become timid and afraid to make major changes because they worry about breaking something, thus leading the "software rot" above. Rev-locking, an often-overlooked factor which is particularly insidious in projects using large open-source libraries or frameworks (although I've seen it happen with commercial tools as well). Often there will be major customization done to the framework/library, making an upgrade prohibitively difficult or expensive. Thus the system becomes legacy because it runs on an older (and possibly no-longer-supported) platform. The source code is no longer available, meaning that the system can only ever be added to, never changed. Since these systems have to be rewritten in order to upgrade - as opposed to incrementally/iteratively revised - many companies won't bother. Anything that slows or stops updates to a code base can lead to that code base becoming legacy. Now the separate, unstated-but-implied question is, what's wrong with legacy code? It's often used as a pejorative term, hence the question: Should we shy away from this unwarranted labelling of perfectly functioning code? And the answer is no, we shouldn't; the labeling is warranted and the term itself clearly implies functioning code. The point is not that it's function, but how it's functioning. In some cases there's nothing wrong with legacy code. It's not a bad word. Legacy code/systems are not Evil. They've just collected some dust - sometimes a little, sometimes a lot. Legacy becomes obsolete when the system can no longer serve (all of) the client's needs. That label is one that we need to be careful of. 
Otherwise, it's simply a cost/benefit equation: if the cost of upgrading is lower than the value of its benefits (including lower future maintenance costs), then upgrade; otherwise, leave it alone. No need to spit out the word "legacy" in the same tone you normally reserve for "tax audit". It's a perfectly OK situation to be in.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/94007", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/10505/" ] }
94,126
I have a difficult client. Every bill is argued and debated over, and each email is parsed with a lawyer's eye (because he's a lawyer), looking for a way to avoid paying for something. No amount of generosity on my part is ever reciprocated. The client currently has 60% of his bills unpaid (these are invoices he signed off on), and it is a substantial amount of money. How it got to this stage is, admittedly, a product of my own naïveté. Since the client hosts his own code, I can't shut off the hosting and demand payment. Is it legal to install a remote "kill switch" to shut down the customer's code unless the bills are paid?
At 60% unpaid bills, the very least you need to do is to stop all further maintenance and support of your code for this customer until they have paid in full. Also realise that you aren't doing this (stopping maintenance and support) to punish the customer - it's simply common sense self-preservation for you and your company. If all your clients would string you along like this you would very quickly end up with a serious cash-flow problem and go bankrupt. You cannot afford to do business like this. For anything else, follow the advice given by other posters: consult a lawyer!
{ "source": [ "https://softwareengineering.stackexchange.com/questions/94126", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/31722/" ] }
94,164
Often I hear PMs (Project Managers) talk about features and functions, and I find it hard to differentiate between them. Sometimes I think of a feature as equivalent to a user story, something like "As a user, Bob should be able to see a list of his payments", and they call it a feature. Sometimes it gets as big as a subsystem, something like "the ability to send SMS via the web application". A function, on the other hand, sometimes gets as small as a task, "implementing digit grouping for number inputs", while there are cases when it gets as big as a whole CRUD operation. My question is, how can we differentiate a feature from a function?
Features are what sales people sell. Functions are what programmers develop.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/94164", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/31418/" ] }
94,196
I want to know if Objective-C is an interpreted or a compiled language.
It is neither. Objective-C is a programming language. A programming language is an abstract concept. A programming language is a set of mathematical rules and definitions. Programming languages aren't compiled or interpreted, they just are . Compilation and interpretation aren't properties of a programming language, they are properties of, well, a compiler or an interpreter (duh). Every language can be implemented by a compiler and an interpreter, and most languages have both compiled and interpreted implementations. In fact, the majority of modern language implementations utilize both an interpreter and a compiler in the same execution engine for maximum performance. For Objective-C specifically, I know of three implementations: gobjc, clang and oscompiler, but a quick Google search turned up two more. Of those five implementations, three are compilers and two are interpreters.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/94196", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/17948/" ] }
94,321
I'm currently working for a small company that has a few technically complicated products. I'm the one and only developer for one of them. About a year ago, I got the legacy version of the product and started "supporting" it. The customer only talks about new features, business value and other things of that kind. The problem is, though the code is in C#, it's quite procedural. There are no abstractions, and classes are only used where Visual Studio requires them - Forms, for instance. The implementations of these classes are really awful and the code is really hard to maintain. All this year I have been spending my own time on refactoring. In the latest version, there are pretty abstractions and such. I had to reimplement a number of components from scratch and I really feel that adding new features or changing behavior in these components is MUCH easier than in the others. The problem is, I spend my own time. I really like the results, but I don't like working 12 hours per day. Have you ever been in a similar situation? What should I try? I've already tried discussing it, but still no success. I'm just scared of the moment when we decide to implement a new feature that requires a lot of changes to the legacy code. That could just be shocking to the customer: why do you need 8 hours to change these icons? The customer just doesn't care that there are 500 places in the code that I need to change. And I should also find all these 500 places first. Any ideas?
Step 1: Stop working unpaid overtime. You have already trained your customer and manager for a year to believe that the current rate of development is what should be expected. This is part of the reason why they do not understand why a "simple" thing could take a full day to do. You don't need to hold them hostage and attempt to hurt the project. But you need to explain that their expectations are too high and you either need another developer or more time before the deadline. Make a point to specifically mention to your manager that you have been working unpaid overtime and are planning on not doing that as much. Even if you cut it down to 9 hour days, the difference will be noticed. If your manager asks you why you are not getting your work done, you can point to the fact that you made a point to warn him this would be the case. Step 2: Make notes Just because you don't have time to do the work, doesn't mean you can't make it easier when the work (hopefully) gets done. Keep track of ideas you have for fixing the code and bring them up at meetings so others are aware of your concerns. Eventually you will hit a slow patch or people will start to understand your concerns have value. When this time does come, you will already have some basic ideas about what to do instead of coming at it dry because you haven't looked at a section of code in a while.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/94321", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/31856/" ] }
94,429
So, I watched as my colleague complained a bit about a project he has inherited from someone who is, shall we say, not very experienced as a programmer (intern left to his own devices on a project). At one point there was duplicate code, about 7-8 lines of code duplicated (so 14-16 lines total) in a method of say 70-80 lines of code. We were discussing the code, and he spotted a way to refactor this to remove the duplication by simply altering the structure a bit. I said great, and then we should also move the code out to a separate method so the large method gets a little more readable. He said 'no, I would never create a method for just 7-8 lines of code'. Performance issues aside, what are your input on this? Do you lean against using more methods (which in c# pads code with about 3 lines) or larger methods, when that particular code will probably not be used anywhere else? So it is purely a readability issue, not a code-reuse one. Cheers :)
The LoC in a method is a completely pointless measure. The important things are separation of concerns and code duplication. A method should only do one thing, and that one thing should be expressed in its name. Other things should be left to other methods. The problems arising from code duplication cannot be overestimated (in other words, they are always bigger than you think). And that alone will justify even methods with 1 line of code. On the other hand, a method that populates a big form with details from a big data object can easily have 50 or more lines of code, but without a single conditional statement. Why would you want to break that down into smaller pieces if there is no functional requirement (or code duplication)?
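A small, made-up C# sketch of that point (not code from the question): even a one-line rule reads better, and lives in exactly one place, once it is pulled out into a method whose name says what it does.

using System;

class OrderProcessor
{
    public void Process(decimal amount, string customerEmail)
    {
        // The caller now reads like a description of what happens.
        decimal total = ApplyStandardDiscount(amount);
        SendConfirmation(customerEmail, total);
    }

    // A tiny method still earns its keep: the discount rule exists in one place only.
    private decimal ApplyStandardDiscount(decimal amount)
    {
        const decimal discountRate = 0.05m; // hypothetical business rule
        return amount - (amount * discountRate);
    }

    private void SendConfirmation(string customerEmail, decimal total)
    {
        Console.WriteLine("Sending confirmation of {0:C} to {1}", total, customerEmail);
    }
}

class Program
{
    static void Main()
    {
        new OrderProcessor().Process(100m, "bob@example.com");
    }
}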
{ "source": [ "https://softwareengineering.stackexchange.com/questions/94429", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/21523/" ] }
94,551
I decided to ask this question here instead of on stack overflow because it is rather subjective. In C#, typically I see generic types with very poor names. Specifically, "T" is commonly used but is not a meaningful name by itself. For example: class Fruit<T> { T fruit; } While this is the typical approach, would anyone recommend against this? And if so, what would a reasonable naming convention be for generic types in the context of C# for generic functions and classes? In my previous example, let's assume that generic type T must always be a type of fruit, such as Apple or Orange . The type T needs to make it obvious that it's a type of fruit, so maybe a better name would be FruitType , so we end up with: class Fruit<FruitType> { FruitType fruit; } This is just to give you guys an idea of what I'm looking for. What's an acceptable "rule of thumb" for this issue?
It is indeed subjective... ish . As some folks find i is perfectly valid for a for loop variable, some think T is perfectly valid for a type place-holder in a generic class. I personally espouse this approach, it's a common convention and people generally know what you mean. Where the type is meaningful I'd use a meaningful name, but generally start it with T. I recently developed a generic dictionary class (don't ask) and the declaration was public class Dictionary<TKey, TValue> However, for something like a Tuple, where the types are essentially meaningless, I consider the following perfectly acceptable. public class Tuple<T1, T2, T3>
{ "source": [ "https://softwareengineering.stackexchange.com/questions/94551", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/31950/" ] }
94,560
I'm running ColdFusion 8 and SQL Server 2008. I've been building several forms that insert data into the database from external users, and we have a custom-built security module written by the guy whose job I've taken. 1) How can we test our HTML forms to ensure that we're protected from SQL injection attacks? 2) How do I secure CFqueries in CFCs? 3) What are some best practices in terms of SQL & ColdFusion security? -- A lot, I know!
{ "source": [ "https://softwareengineering.stackexchange.com/questions/94560", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/29398/" ] }
94,563
I understand an "idiom" to be a common operation or pattern that in a particular language is not simplified by core language syntax, such as integer increment: i = i + 1; In C++, this idiom is simplified by an operator: ++i; However, when someone uses the term "idiomatic", I am not sure how to understand it. What makes a piece of code "idiomatic"?
An idiomatic way of writing some code is when you write it in a very specific way because of language-specific idioms that other languages don't have. For example, in C++, exploiting the RAII idiom leads to idiomatic ways of writing C++ code that manages resources. Another example would be using list comprehensions in Python to generate lists. It's idiomatic because that is how list generation is written in idiomatic Python, even though you could have done the same thing using a generator function, in Python or in any other language. Often, someone trying a new language without using the idioms specific to that language will not write idiomatic code.
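A rough illustration in C# (my own sketch - the answer itself only mentions C++ and Python): both snippets below build the same list, but the LINQ version is what an experienced C# programmer would expect to read, which is what makes it idiomatic.

using System;
using System.Collections.Generic;
using System.Linq;

class IdiomExample
{
    static void Main()
    {
        int[] numbers = { 1, 2, 3, 4, 5, 6 };

        // Non-idiomatic (though perfectly legal): a hand-rolled filter-and-transform loop.
        var squaresOfEvensLoop = new List<int>();
        foreach (int n in numbers)
        {
            if (n % 2 == 0)
            {
                squaresOfEvensLoop.Add(n * n);
            }
        }

        // Idiomatic C#: the same result expressed with LINQ.
        var squaresOfEvens = numbers.Where(n => n % 2 == 0)
                                    .Select(n => n * n)
                                    .ToList();

        Console.WriteLine(string.Join(", ", squaresOfEvens)); // prints: 4, 16, 36
    }
}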
{ "source": [ "https://softwareengineering.stackexchange.com/questions/94563", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/31950/" ] }
94,716
Recently in an interview, one of the questions was 'Why do we use MVC?' I just answered that it is much closer to how many real-world systems actually are! I explained the benefits it has when it comes to Maintainability, Scalability etc. But they were not convinced and finally told me that MVC is used mainly because it 'enables easy Unit Testing'. While I know theirs is a valid point, I still doubt that it is the major reason, because (i) even if I decide not to write unit test cases, MVC is a probable choice, and (ii) many GUI systems that do have unit test cases do not follow MVC. So the question is: 'Is Unit Testing the primary objective of the MVC Pattern?' EDIT: I assume that they might be referring to the ease of Test Driven Development/writing NUnit test cases. This is because we can write test cases for the Model (provided the View exactly reflects the Model's state changes) - please correct me if I am wrong.
The primary objective would be "separation of concerns", as the model, the view and the controller all have distinct responsibilities. The author of the original Xerox PARC paper states that: The essential purpose of MVC is to bridge the gap between the human user's mental model and the digital model that exists in the computer. If unit-testing were the primary objective, one would be able to easily unit-test views. A look at the landscape of unit-testing projects/frameworks would reveal that it is quite contrary to the claim made. One would typically be using integration and functional tests to test the view.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/94716", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/27180/" ] }
94,732
Imagine a situation where several people are sent to an interview and only one has to pass. His knowledge is enough for the project. However, the customer asked for several candidates so he can choose the best one. And the other people from your company are already engaged in other projects and should not get the job. The question is: how do you show that the other candidates are worse without making them look stupid? UPD: I've got the point, but I need to clarify the primary idea. I wanted to know how a candidate [who should not pass the interview] ought to behave in the interview. I've left the modified question [the 1st one] because several answers correspond to it.
If you haven't done so yet, tell the customer about the situation up front. If he still insists on interviewing the other developers, let him do it, in a fair manner (i.e. all developers answer honestly and to the best of their ability), and let him compile his order of preference. Then let him know the cost (in time and cash) of transferring each developer to this project (including a replacement in the other project), e.g.: Bob (engaged in the Foo project - estimated replacement cost $50K and 4 weeks), Jane (engaged in the Bar project - estimated replacement cost $30K and 6 weeks), Jack (can start immediately), Nat (engaged in the Groo project - estimated replacement cost $80K and 10 weeks), Mary (can start immediately). As long as he is willing to pay the associated price, he can choose whichever developer he prefers.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/94732", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/32014/" ] }
94,754
I work on a huge project (more like a tangled-up combination of dozens of mini projects that can't be easily separated due to poor dependency management, but that's a different discussion) in Java using Eclipse. We've already turned off a number of warnings in the compiler settings and the project still has over 10,000 warnings. I'm a big proponent of trying to address all warnings: fix all of them if possible, and for those that are looked into and deemed safe, suppress them. (The same goes for my religious obsession with marking all implemented/overridden methods as @Override.) My biggest argument is that warnings generally help you find potential bugs at compile time. Maybe 99 out of 100 times the warnings are insignificant, but I think the head scratching it saves the one time it prevents a major bug makes it all worth it. (My other reason is my apparent OCD with code cleanliness.) However, a lot of my teammates don't seem to care. I occasionally fix warnings when I stumble across them (but you know it's tricky when you touch code written by a co-worker). Now, with literally more warnings than classes, the advantages of warnings are very much minimized, because when warnings are so commonplace, nobody will bother looking into all of them. How can I convince my teammates (or the powers that be) that warnings need to be addressed (or suppressed when fully investigated)? Or should I convince myself that I'm crazy? Thanks. (P.S. I forgot to mention that what finally prompted me to post this question is that I sadly noticed I'm fixing warnings slower than they are produced.)
You can do two things. Point out that warnings are there for a reason. Compiler writers don't put them in because they are mean-spirited. People in our industry are generally helpful. Many warnings are helpful. Gather up a history of spectacular failures arising from ignored warnings. A web search for "Pay attention to compiler warnings" returns some anecdotes. Your obsession with @Override is not an "obsession." That is a Good Thing. Ever misspelled a method name?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/94754", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/-1/" ] }
94,832
I work at a small software company where the owners are also the managers. My concern is that any and all progression in technology is met with utter disdain by management. Some of the comments are as follows: LINQ, nHibernate, and ORM are bad programming practice, we will never use them. The majority of large applications are still written in VB6. The web is just a waste of time, it's not meant for applications. Every time a new version of development software is released, I have to listen to the management complain about it for hours. Technologies like WPF, WCF, MVC and Entity are completely ignored. All that said, it's not a horrible place to work; the pay is average and it's close to home. My concern is that, even though we are technically using the latest version of .NET, we are hardly using the latest technologies; we might as well be using .NET 1. If I decide to move, will this "experience" limit me career-wise? I have been here for a few years already. EDIT: I am really grateful for the superb response. I honestly think it might be in my own best interest to make a move.
The longer you stay, the worse it will get (in terms of your being up to date on current technology). Go now.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/94832", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/32043/" ] }
94,843
Our web application has a complex access control system which incorporates role-based and object-level privileges. In the business logic layer, this is implemented by a component that obtains (and caches) all the necessary data with a batch query and computes the user's type and level of access to any object in the system. (A future optimization would be conditional batching based on the data we need for a particular request.) However, the view privilege logic in this component is duplicated elsewhere in database queries. (We need to hide data in listing screens that the user does not have privilege to view.) How can we reduce or eliminate this duplication of logic between the application access control component and our database queries? Two approaches come to mind. I'm sure there are others. Check view privilege in the application for each row that comes back from the server via queries from listing screens. Move more of the access control logic into a stored function that can be called from the queries as well as the application code. Answers should defend the merits of the proposed method over other methods. For example, if my second suggested approach is desirable, why? If you have suggested a third approach, why does it win over both my approaches?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/94843", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/20202/" ] }
94,887
I used to create a lot of abstract classes / methods. Then I started using interfaces. Now I am not sure if interfaces aren't making abstract classes obsolete. You need a fully abstract class? Create an interface instead. You need an abstract class with some implementation in it? Create an interface, create a class. Inherit the class, implement the interface. An additional benefit is that some classes may not need the parent class, but will just implement the interface. So, are abstract classes / methods obsolete?
No. Interfaces cannot provide a default implementation; abstract classes and methods can. This is especially useful for avoiding code duplication in many cases. This is also a really nice way to reduce sequential coupling. Without abstract methods/classes, you cannot implement the template method pattern. I suggest you look at this Wikipedia article: http://en.wikipedia.org/wiki/Template_method_pattern
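To make the template method point concrete, here is a minimal C# sketch (my own illustrative example, not taken from the answer): the abstract base class fixes the skeleton of the algorithm and supplies a default step, while subclasses fill in only the abstract parts - the shared default behaviour is exactly what a plain interface cannot carry.

using System;

abstract class ReportGenerator
{
    // Template method: defines the fixed skeleton of the algorithm.
    public void Generate()
    {
        string data = LoadData();
        string body = Format(data);
        Publish(body);
    }

    // Steps that every subclass must supply.
    protected abstract string LoadData();
    protected abstract string Format(string data);

    // Default implementation shared by all subclasses - the part a plain interface cannot provide.
    protected virtual void Publish(string body)
    {
        Console.WriteLine(body);
    }
}

class CsvReportGenerator : ReportGenerator
{
    protected override string LoadData()
    {
        return "alpha;beta;gamma";
    }

    protected override string Format(string data)
    {
        return data.Replace(';', ',');
    }
}

class Program
{
    static void Main()
    {
        new CsvReportGenerator().Generate(); // prints "alpha,beta,gamma"
    }
}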
{ "source": [ "https://softwareengineering.stackexchange.com/questions/94887", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/21271/" ] }
95,048
I have a general question about databases. We usually use the term collation with databases. I would like to know how it's different from a character set. I guess collation is a subset of a character set. If that's true, what is the purpose of multiple collations under one character set?
A character set is a list of symbols. If you compare ASCII to latin1, with latin1 you will be able to write all American words, because latin1 contains all ASCII characters, which are sufficient to write any English word. On the contrary, with ASCII you will not be able to write all the words of languages specific to Western Europe, because, for instance, characters like 'À', 'ë', 'õ', 'Ñ' are missing. Collation is about comparison between characters. It defines a set of rules for comparing the characters of a character set. In MySQL, collations are often related to one language (e.g. 'latin1_swedish_ci', 'latin1_german1_ci', etc.). When you order a select query, a word starting with 'ö' will be placed between words starting with 'o' and 'p' in some languages (with some collations). But with another collation, this character may be placed at the very end, which makes the resulting ordering different.
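To make the "same characters, different ordering" point concrete, here is a small hedged Java sketch (my own illustration, not part of the original answer) using java.text.Collator; the exact result depends on the collation tables shipped with the JDK, but Swedish rules typically sort 'ö' after 'z' while German rules keep it next to 'o':

import java.text.Collator;
import java.util.Locale;

public class CollationDemo {
    public static void main(String[] args) {
        Collator swedish = Collator.getInstance(new Locale("sv", "SE"));
        Collator german  = Collator.getInstance(Locale.GERMANY);
        // Same two strings, same character set, different comparison result.
        System.out.println(swedish.compare("öl", "oz"));  // expected > 0: 'ö' sorts after 'z'
        System.out.println(german.compare("öl", "oz"));   // expected < 0: 'ö' sorts with 'o'
    }
}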
{ "source": [ "https://softwareengineering.stackexchange.com/questions/95048", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/22648/" ] }
95,065
Our project head is a genius software architect, a gentle and considerate person in general, a geek by nature and delicate by voice. But, at times, we (my teammates and I) differ in opinions -- especially of software architecture issues, system design issues, UI issues, etc., with our leader. When and how (if ever) should we express the difference in opinions?
Suppose you think your boss is wrong. You have three options do what he says and end up frustrated thinking that you do something stupid - not very good long term tell him he's an idiot - he'll either ignore it or you get communication problems - gets you nothing or hurts you. tell him that you have specific concerns about the ideas he proposes and explain those concerns - any good boss will explain his position and then you can get to a decision that is good for the business. It's quite likely you'll see that his idea is better than yours and you've been ignoring something very important. Always think of the outcome. In most cases you don't want to be right for the sake of being right, you just have to do good job. The third option helps achieve that.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/95065", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/31560/" ] }
95,212
I know that Microsoft has said ASP.NET MVC is not a replacement for WebForms. And some developers say WebForms is faster to develop on than MVC. But I believe speed of coding comes down to comfort level with the technology so I don't want any answers in that vein. Given that ASP.NET MVC gives a developer more control over their application, why isn't WebForms considered obsolete? Alternatively, when should I favor WebForms over MVC for new development?
Webforms vs. MVC seems to be a hot topic right now. Everyone I know touts MVC to be the next great thing. From my slight dabblings in it, it seems ok, but no I don't think it will be the end of webforms. My reasoning, and the reasoning as to why webforms would be chosen over MVC, has more to do with a business perspective rather than what one is better than the other. Time/money are the greatest reasons why webforms would be chosen over MVC. If most of your team knows webforms, and you don't have the time to get them up to speed on MVC, the code that will be produced may not be quality. Learning the basics of MVC then jumping in and doing that complex page that you need to do are very different things. The learning curve is high so you need to factor that into your budget. If you have a large website written all in webforms, you might be more inclined to make any new pages in webforms so that you don't have two very different types of pages in your site. I'm not saying it's an all or nothing approach here, but it does make your code harder to maintain if there is a split of both, especially if not everyone on the team is familiar with MVC. My company recently did three test pages with MVC. We sat down and designed them out. One issue we ran into is that most of our screens have the View and Edit functionality on the same page. We ended up needing more than one form on the page. No biggy, except then we wouldn't use our masterpage. We had to revamp that so that both the webforms pages and MVC pages could use the same masterpage for common look and feel. Now we have an extra layer of nesting. We needed to create a whole new folder structure for these pages so that it followed the proper MVC separation. I felt there were too many files for 3 pages, but that is my personal opinion. In my opinion, you would choose webforms over MVC if you don't have the time/money to invest in updating your site to use MVC. If you do a half arsed approach to this, it won't be any better than the webforms you have now. Worse, you could even be setting this technology up for failure in your company if it's messed up, as upper management might see it as something inferior to what they know.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/95212", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/11107/" ] }
95,267
I am wondering, under tight deadlines, who has time to implement design patterns? It is a lot more work and programming overhead to get it right the first time and within time frame. I know that it has long-term advantages, but were you able to implement any design pattern correctly when the client was breathing down your neck and the pressure was growing? I think once your first version is released and you have plenty of time for the next release, then you can think of improving code quality and manageability with design patterns.
It is a lot more work and programming overhead to get it right the first time and within time frame. That's false. Without the benefit of someone having already thought through the design pattern, and explaining it and documenting it, it would be twice as hard. A design pattern simplifies all the thinking and wondering and deciding. You don't have to invent something new. You can use something that's already well-understood by yourself and others.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/95267", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/6001/" ] }
95,461
I want to convince my partners that we should have a spec and that bugs should get fixed before writing new code. Should I refer to the Joel test ? Do you think that the Joel test is up to date? I think that not having a spec is bad project management. Do you agree with the Joel test? Could you add something? It doesn't mention for instance Open Source.
I think the Joel test is up to date - it's as up to date as much of the other software writing that's "timeless". Doing product development (which includes software development) without a spec is just madness. How do you know where you want to go? There's only one point I'll make about writing a spec (I don't actually think Joel's specs are very good... better than nothing, but not as good as could be). That point is: When writing a spec, say only what the product must do, not how it is to be done. This means you don't dictate implementation details in a spec. That's a design activity and you leave that to the experience and creativity of the designers. [There is only one exception to this rule: Sometimes a particular implementation detail or method is mandated or required, in which case put it in. For example, if the software must be written in PHP and this is not negotiable, then it goes in the spec. There should be very few instances of this.] I might add: not having bug tracking is an act of equal madness. It's simply the most unprofessional and foolish way to operate and will lead to great pain and suffering.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/95461", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/12893/" ] }
95,556
Intel processors (and maybe some others) use the little endian format for storage. I always wonder why someone would want to store the bytes in reverse order. Does this format have any advantages over the big endian format?
There are arguments either way, but one point is that in a little-endian system, the address of a given value in memory, taken at a 32-, 16-, or 8-bit width, is the same. In other words, if you have in memory a two-byte value laid out as address 0x00f0 holding 16 and address 0x00f1 holding 0, taking that '16' as a 16-bit value (C 'short' on most 32-bit systems) or as an 8-bit value (generally C 'char') changes only the fetch instruction you use -- not the address you fetch from. On a big-endian system, with the above laid out as address 0x00f0 holding 0 and address 0x00f1 holding 16, you would need to increment the pointer and then perform the narrower fetch operation on the new value. So, in short, 'on little-endian systems, casts are a no-op.'
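The same "same address, different width" idea can be shown without pointers; here is a hedged Java sketch of my own (Java goes through ByteBuffer for explicit byte order, so this is only an analogy to the C casts described above):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class EndianDemo {
    public static void main(String[] args) {
        byte[] memory = {16, 0};  // the two-byte value 16, laid out little-endian
        ByteBuffer le = ByteBuffer.wrap(memory).order(ByteOrder.LITTLE_ENDIAN);
        System.out.println(le.getShort(0));  // 16   -- 16-bit read starting at offset 0
        System.out.println(le.get(0));       // 16   -- 8-bit read at the very same offset
        ByteBuffer be = ByteBuffer.wrap(memory).order(ByteOrder.BIG_ENDIAN);
        System.out.println(be.getShort(0));  // 4096 -- big-endian interprets the bytes swapped
    }
}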
{ "source": [ "https://softwareengineering.stackexchange.com/questions/95556", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/29157/" ] }
95,589
I'm working at a startup software company -- 3 developers, less than 15 employees including the CEO. We deal exclusively with Windows Mobile, the .NET CF, and passing the information gathered from our handheld application to and from our website. My director and I just had a meeting over an urgent project that hasn't begun yet, but it should probably get on a roll soon if we are to meet delivery deadlines for a prospective but very powerful and influential client. He proceeded to explain the project to me as follows: The new project will consist primarily of a .NET CF app that allows the client to conduct multiple soil samples over the course of a single session. The area being sampled by the user should be shown on a map of appropriate size and scale. The map should be dynamically split into grid squares of a size set by the user (in acres). On each grid square, the user will have the ability to take notes and mark individual points via real-time GPS from the on-board receiver. In addition, functionality should exist to help point the user to any grid square given their current position. All information collected should be easily retrieved and read in a very user-friendly manner on the customer's handheld device -- a device which we provide -- at any time. All information collected should be easily sent -- through ActiveSync, wirelessly, or what-have-you -- to our website where it should be viewable/editable in an interface similar to that of the handheld device. With that description in mind I thought things over (albeit rather quickly). With our current staff size, limited budget, and limited resources, I projected that such an endeavor would take roughly 9 to 18 months. Forgive me if I'm off in left field, I'm a recent CS grad and pretty new to the "real" world. However, the project is currently only realized in my director's head, with no design documentation or specs. My question here is, how far off am I, really? Once again, my director -- who has no background in software or IT whatsoever, but is a subject matter expert insofar as soil sampling goes -- put the project at about a 3-month duration. Keep in mind that we're currently using an unsupported SDK for the rest of our GPS and GIS needs, and ESRI products are almost too far out of range for us. Current functionality in our other apps goes give us a leg up, with the ability already in place for us to draw areas, polylines, and plot points on a map. I'm just kind of confused/afraid here, wondering if I'm completely wrong or if I'm right but just without confidence. Any and all advice is appreciated. Thanks!
You may want to approach this backwards: Find out the realistic deadline (e.g., is it OK to deliver this in 5-6 months?). All the #s in this example are somewhat relative to this #. Set milestones for tasks that need to be achieved based on that deadline. E.g. 1 month for req gathering and building up skills/experience on SDKs/hardware/etc.... 2 months for active development. 2 months for testing. 1 month for unforeseen crap resulting from complexity of all this. This is a good granularity for a first pass, but then you need to refine the plan with sub-tasks down to a 2-3 day granularity, and preferably time-meshed, e.g. you should have some initial coding tasks allocated to the first month. Then, cull the requirements or feature set if you see that you are missing the milestones. Please note that you need these "coding tasks at the beginning" I listed before, as they will allow you to determine the correctness of your pace and estimates for coding well before 3 months pass. Basically, I'm running this (somewhat contrary to the usual project management approach) from the "MUST DELIVER BY" standpoint that you seem to have stressed in your post. E.g. not delivering by the hard deadline is NOT an option.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/95589", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/12302/" ] }
95,637
While most interview questions focus on a candidate's current knowledge or check his/her skill at solving algorithmic problems, I would like to hire a developer who is passionate about programming. What if, instead of asking questions like "What do you know about technology X?", I check knowledge that is not directly related to solving software engineering problems but shows how curious the candidate is about IT? For example, if I am looking for a Java developer I can ask who the most influential people in the Java world are, or show a basic Scala snippet and ask the candidate to interpret the code. I even considered showing a photo of Alan Turing and letting the interviewee guess who is in the photo. Does this practice make any sense?
All you have to do is ask him to tell you about one of the projects he's worked on that he most enjoyed. You'll find out more about his enthusiasm in the following 60 seconds than you ever could showing him photographs of deceased notables.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/95637", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/10981/" ] }
95,718
Design patterns are good, but complex. Should we use them in small projects? Implementing design patterns needs more sophisticated developers, which in turn raises project costs. On the other hand, they make code neat and clean. Are they necessary for small projects? Update: Should we insist on using design patterns when the team is not efficient at working with them?
Design patterns are good, but complex. This is a false assumption. Design Patterns are meant to strip complexity off existing code and to aid communication between developers. When Design Patterns introduce a higher degree of complexity, then they are misused. Implementing design patterns needs more sophisticated developers, which in turn raises project costs. This is also a false assumption. Any developer should strive for a minimum amount of complexity in his code. A good developer is well worth his money as the maintenance and extensibility costs decrease. Are they necessary for small projects? They are necessary when they help make the code more expressive and less complex. This is independent of project size. A good developer (tm) will not overengineer, and will use them when appropriate.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/95718", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/31418/" ] }
95,846
I have been a developer for the last 8 years. We used XSLT primarily to transform XML into HTML. We also used it for XML-to-XML transformation. But we have a replacement for everything now. HTML can be comfortably created through programming languages such as ASP.NET. XML can be read and manipulated in any standard high-level language. As programming in XSLT is a little complex, anyone would prefer working with the latest programming languages. Now my question: Will XSLT be a significant choice in the future, setting aside the need to maintain already-developed XSLT? Should I recommend that new programmers study XSLT?
There are some important cases where XSLT can be a good choice: ETL (Extract, Transform, Load) software can in some cases use XSLT. For example, it can be a good choice when both the extracted data and the data to load are in an XML format, and where the transform may be changed without the need to recompile the application. Some applications which store data in XML use XSLT to present this data in a human-readable format¹. For example, Windows Live Messenger stores the trace of messages as XML, but when you open the history in WLM itself, it shows you a pretty table which in fact is HTML built through XSLT. Some developer-oriented or data-oriented websites may want to give access to XML if the intent is for the pages of the website to be used programmatically². It is somewhat nicer than using HTML parsers, especially since the HTML code can change at any moment. XSLT, when used in websites, allows strict separation between HTML and code-behind, which makes it possible to hire one developer for code-behind and another developer for the HTML/CSS stuff. See point 1 in my answer to another question. Will XSLT be a significant choice in the future? Well, it is not a significant choice today, and I doubt the usage of XSLT will increase over time. I don't know the reason for that, but many developers don't like XML and hate XSLT. Should you recommend that new programmers study XSLT? Sure! Not only can XSLT be used in some circumstances where other approaches would be more difficult, but XSLT also has a very specific approach that other languages don't have. ¹ By this I mean XML is not really human-readable: if you ask a person who does not work in IT to read XML, he will be horrified. ² I know there are web services. But sometimes it's just easier and more straightforward, on every page, to construct a dynamic object, then serialize it to XML, then either transform it to HTML through XSLT or let the bot access the XML directly.
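For readers who have never driven an XSLT transform from code, here is a minimal hedged Java sketch (the file names are placeholders I made up) using the standard javax.xml.transform API to perform the kind of XML-to-HTML step described above:

import java.io.File;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class XsltDemo {
    public static void main(String[] args) throws Exception {
        // Load the stylesheet once; it can be changed without recompiling this code.
        Transformer transformer = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new File("history.xsl")));
        // Apply it to the XML data and write the resulting HTML.
        transformer.transform(new StreamSource(new File("history.xml")),
                              new StreamResult(new File("history.html")));
    }
}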
{ "source": [ "https://softwareengineering.stackexchange.com/questions/95846", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/32349/" ] }
95,872
At work, I was given a reasonably-spec'ed machine (dual quad 2GHz, 4GB RAM, 160GB 7200RPM drive, Win7), but it was lacking in a few places (HDD / RAM). The IT staff was OK with me replacing hardware with my own, so now I'm sporting an SSD and an extra 8GB of RAM (so I can run multiple Visual Studios & virtual machines). I realize it was the company's business decision to pick the machines they did, but at the same time faster hardware made me happier and more productive at work. I guess my question is, am I the only one (the self-employed notwithstanding :) who's willing to spend personal money on their work hardware for a better quality of life, and should I approach it in a different way? Is it possible to retroactively say "I think you should reimburse me for this $500 in hardware because [blah]"?
You spend 40% of your waking time at work. Might as well make it pleasant. I expect my employer to provide the tools I need , but anything I want to make it more pleasant I deem my own responsibility. It's not like I'm donating it to the company. I'm consuming it for my own personal enjoyment and will take it with me when I go. I know developers with their own chairs, keyboards, mice, software tools like editors, even one guy with a 40" monitor. I've brought components from my junk drawer at home that my employer would pay for, just because I wasn't using it anyway and it was faster and more pleasant for me to avoid the procurement process. I bet other people who balk at the idea of bringing their own computer hardware have improved their work environment in other ways at their own expense: pictures, music, kleenex, snacks, plants, books, fans, lamps, eyeglasses, pens, smartphones, etc.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/95872", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/17309/" ] }
95,876
I'm a programmer with a two-hour round trip commute to work each day. I'd like to fill some of that time with audiobooks about software development. Any audiobooks that would help me become a better programmer would be appreciated. I'm thinking that books about design patterns and non-fiction about computing history might be good here, but I'm open to anything. Keeping in mind that I will be listening to this in a car, what are the best audiobooks that I can listen to? EDIT: Many people have also suggested podcasts. This is appreciated, but since podcasts arrive in a constantly arriving stream of data rather than as a finite amount of data, ways to juggle all of these different content streams would also be appreciated. To be more specific to my situation, my commuting vehicle has an MP3 CD player, USB input for MP3 files, and AUX input. I own Android and webOS devices that can be plugged into the AUX input.
Podcasts are good; my usual programming ones are: .NET Rocks, Hanselminutes, the StackExchange podcast, BayCHI (a great usability and user interface design podcast), Herding Code, IT Conversations (a big range of podcasts; I just subscribe to the main feed of all), Pluralcast, Polymorphic Podcast, Talking Shop Down Under, The Thirsty Developer, and Software Engineering Radio.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/95876", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/7751/" ] }
95,914
Sometimes you want to return multiple values from a function. How is this normally done in Java? One option is to use an array, like this Python snippet that returns a list or tuple: value, success = read_unreliably() if success: print value Another option would be to return a hash/dict, like this JavaScript example: var result = readUnreliably() if (result.success) { alert(value); } One more would be to create a custom object just for this purpose, like this Java example: ReadUnreliablyResult result = readUnreliably() if (result.getSuccess()) { System.out.println(result.getValue()); } Of course you can also just use some global variables to store what you need instead of passing things around, but let's just say that's not an option.
How is this normally done in Java? Painfully. One option is to use an array, like this Python snippet that returns a list or tuple... Another option would be to return a hash/dict, like this JavaScript example... These techniques don't work well in Java; the values would have to be downcast from Object. One more would be to create a custom object just for this purpose, like this Java example... This is most common, but it is tedious to create all those little classes. In C++ it is common to use std::pair but this is not commonly done in Java. With Lombok it is quite easy to create little custom objects: @RequiredArgsConstructor public class X { private final boolean status; private final byte[] data; }
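To round out the answer, here is a hedged sketch (readUnreliably and the field names are invented for illustration) of the plain-Java equivalent of the Lombok class above, plus the call site that mirrors the Python and JavaScript snippets in the question:

// A tiny immutable result holder -- roughly what Lombok generates for you.
final class ReadResult {
    private final boolean success;
    private final String value;

    ReadResult(boolean success, String value) {
        this.success = success;
        this.value = value;
    }
    boolean isSuccess() { return success; }
    String getValue()   { return value; }
}

// Call site, assuming some method ReadResult readUnreliably() exists:
// ReadResult result = readUnreliably();
// if (result.isSuccess()) {
//     System.out.println(result.getValue());
// }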
{ "source": [ "https://softwareengineering.stackexchange.com/questions/95914", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/30049/" ] }
95,920
while ( true ) { // what is each time through this loop called? }
I would call it an iteration. I don't know if everyone would.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/95920", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/-1/" ] }
95,946
For a custom software that will likely take a year or more to develop, how would I go about determining what to charge as a consultant? I'm having a hard time coming up with a number, and searches online are providing vastly different numbers (between $55/hr and $300/hr). I don't want to shoot too low because it's going to take me so much time (and I'm deferring my education for this project). I also don't want to shoot too high and get unpleasant looks and demand for justification. FWIW I live in Canada, and have approx. 10 years of development experience. I've read the "take your salary and divide it by 1000" rule of thumb, but the thing is I don't have a salary. Currently I'm just doing fairly small programming tasks for a friend who is starting a marketing company, pricing each task fairly arbitrarily. I don't know what I would make over the course of a year doing it, but it would be incredibly low. My responsibilities for the project would be the architecture, programming, database, server, and UX to some degree. It's going to be a public facing web service so I will also need to put a lot of effort into security and scalability. Any advice or experience?
The best answer I've come across for this question is: "What do you want to earn?" This then has to be moderated by: "What can the customer pay?" You can weave a way somewhere in between. You may also find that if the job is going to take a year, then you could perhaps charge a fixed fee - pick a number - $100K, and say "thats the price". It means you take on the risk if you over-run, but you also walk away with spare $ if you can do it faster. Otherwise, the way of working this out goes something like this: I want to earn $100K per year. There are 52 weeks in a year, with 5 working days = 260 working days. But I want 4 weeks off a year for holidays (deduct 20 days) I better allow 2 weeks off a year for illness (deduct another 10) I need to allow for public holidays (varies by country but most places, about 12 days/yr) So, total actual working days / year = 218. I want to work 7.5 hours / day, so there are 218 * 7.5 = 1635 working hours / year. My $100K / year therefore works out to $100,000 / 1635 = $61.16 / hour. BUT... to this you should then ADD: Allowance for retirement fund, workers compensation, insurance, odds and ends costs, etc. As a rough rule these come to about 15% to 25% of salary depending on where you live. So, shoot for the middle ground and add 20%: about $74/hour. If you don't like these numbers, figure out what you want to use and re-run the calculation. EDIT: just a note: a lot of businesses actually work on a budget for their staff of 1500 working hours / year. You might also want to take into account an inefficiency / distractions / goofing off factor. NOBODY consistently actually WORKS for 7.5 hours / day. EDIT 2: "what do you want to earn" is what you want to bank - after expenses. The allowance for retirement fund, insurance, odds and ends etc is your costs. If you have other costs, eg capital equipment, paying a book-keeper, etc, then you need to add those on as well. And - long term contract rates are generally lower than short term. Short term needs to include an allowance for job-hunting time / time spent not earning. ROUGH rule of thumb is that for professional, qualified, experienced software and engineering work over a long term (12 months or more), a rate of about $75 to $100 is pretty normal and expected. (This is AUD, but with exchange rates I'd expect USD to be similar, not identical, but in that region). A real hot shot - perhaps $120 to $150, but you better be hot. If the employer provides equipment (eg PCs, compilers, etc) then knock off about $10 / hr. Short term rates (ie 6 to maybe 12 months): add $10 to $20 / hr. EVEN ROUGHER: about $65 to $85 / hour is pretty much considered "mates rates" - ie what you charge your friends. At those rates your accountant is likely to be horrified. PEDANTS CORNER: Rough rule of thumb means just that: rough!
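The rule-of-thumb arithmetic in the answer is easy to wrap in a small reusable helper; this is just a sketch of my own that reproduces the answer's numbers, not an official formula:

public class RateCalculator {
    // Hourly rate from a target annual income, days off, hours per day and an overhead factor.
    static double hourlyRate(double targetAnnual, int vacationDays, int sickDays,
                             int publicHolidays, double hoursPerDay, double overheadFactor) {
        int workingDays = 52 * 5 - vacationDays - sickDays - publicHolidays;  // 218 in the answer
        double billableHours = workingDays * hoursPerDay;                     // 1635 in the answer
        return targetAnnual / billableHours * overheadFactor;
    }

    public static void main(String[] args) {
        // $100K target, 20 vacation + 10 sick + 12 public holiday days, 7.5 h/day, +20% overhead.
        System.out.printf("%.2f%n", hourlyRate(100_000, 20, 10, 12, 7.5, 1.20));  // ~73.39
    }
}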
{ "source": [ "https://softwareengineering.stackexchange.com/questions/95946", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/54/" ] }
95,969
Note: This is a question about networking, not gaming. I'm using StarCraft merely as an example. The game StarCraft (not StarCraft 2) supports online play. One person hosts, and other people join. If the host leaves during the game, the game can continue indefinitely without the host. How does that work? Consider the following scenario: I host a 3 versus 3. Note that, for people to be able to join, my router has to be configured to port forward 6112, TCP and UDP (see Blizzard Support ). Five people join. Everyone (myself included) is behind a router. I start the game. Three minutes into the game, about twenty zealots pour into my base. Nobody helps me. My base is eliminated, so I leave (the game doesn't give you a choice). The game continues without the host (the home team has two players left). How do the remaining five clients (who didn't need 6112 forwarded) remain connected? If I understand correctly: if two programs want to talk to each other, one of them has to be a "server" and listen for connections on a port, while the other has to be a "client" and initiate a connection request on that port. They can't simply start sending packets to each other (and I don't even know how they would, if both are behind routers). Someone hosting a game is a server, while those joining it are clients. It's easy to see how the clients can start communicating with the server. What I don't get is: how do the clients start communicating with each other without going through the server? Does the Internet Protocol allow a server to initiate connections between clients? It's entirely possible that, in the case of StarCraft, game traffic goes through Battle.net servers. StarCraft does maintain a connection to Battle.net during games (for messages from friends, etc.). However, I doubt game traffic goes through it, because if it did, why would hosts need to port forward 6112? My question is: can a server, with multiple clients connected to it, initiate connections between them?
That sounds like UDP hole punching. Let A and B be the two hosts, each in its own private network; N1 and N2 are the two NAT devices; S is a public server with a well-known, globally reachable IP address. 1) A and B each begin a UDP conversation with S; the NAT devices N1 and N2 create UDP translation states and assign temporary external port numbers. 2) S relays these port numbers back to A and B. 3) A and B contact each other's NAT devices directly on the translated ports; the NAT devices use the previously created translation states and send the packets to A and B. In this example, you are S. Your opponents are A and B. When you are kicked from the game, your opponents can continue playing because they had negotiated a connection to each other when they first connected to you.
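A very rough Java sketch of one client's side of the idea (entirely my own illustration: the rendezvous host name, port and message format are invented, and real NAT traversal needs retries, keep-alives and proper framing):

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.InetSocketAddress;

public class HolePunchClient {
    public static void main(String[] args) throws Exception {
        DatagramSocket socket = new DatagramSocket();  // the NAT picks the external port

        // 1) Talk to the public rendezvous server S so our NAT creates a mapping.
        byte[] hello = "HELLO".getBytes("US-ASCII");
        socket.send(new DatagramPacket(hello, hello.length,
                InetAddress.getByName("rendezvous.example.com"), 9999));

        // 2) S replies with the peer's public "ip:port" as it saw it from outside.
        byte[] buf = new byte[64];
        DatagramPacket reply = new DatagramPacket(buf, buf.length);
        socket.receive(reply);
        String[] peer = new String(reply.getData(), 0, reply.getLength(), "US-ASCII").split(":");
        InetSocketAddress peerAddr = new InetSocketAddress(peer[0], Integer.parseInt(peer[1]));

        // 3) Both peers now send to each other's translated address; the outgoing
        //    packets punch holes in both NATs, after which traffic flows directly.
        byte[] punch = "PUNCH".getBytes("US-ASCII");
        socket.send(new DatagramPacket(punch, punch.length, peerAddr));
        socket.receive(new DatagramPacket(new byte[64], 64));
    }
}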
{ "source": [ "https://softwareengineering.stackexchange.com/questions/95969", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/3650/" ] }
95,976
My team is using ClearCase for version control. The project which I am working on was started 7-8 years back. During the entire lifetime of the project we have had several releases, bug fixes, service packs, etc. The issues are tracked using the bug tracking system, and most of the people who work on the bug fixes follow a routine of enclosing the comment in a START/END block with the date, author, bug ID, etc. I feel this is quite irrelevant, makes the code cluttered and hard to maintain, and that these are things which should be part of check-in comments/labels, etc., where we can keep additional life-cycle information about the work product. What's the best practice to be followed? Some of the reviewers of the code insist on putting the comments about the bug and fixes in the code to make their lives easier. In my understanding, they should review the files by mapping them to a view, getting the change log of the branch, and reviewing that. It would be helpful if I could get some best practices on submitting the updated code for review.
The problem with adding the bugfix as a comment to the code is, you don't get the full story. If I see a perfectly fine piece of code tagged "this is a fix to the bug blah ", my first reaction would be to say "so what?". The code is there, it works. The only thing I need to know to maintain the code is a comment that tells me what it does. A better practice would be to add bugfix references in SCM commit logs. That way, you see what the bug is, where it was introduced, and how it was fixed. In addition, when the time for a release comes, you can simply extract the SCM logs, and add a bullet point stating that there was a bug, and it was fixed. If another branch or version introduces the same bug, it is easy to locate the fix, and reapply if it is indeed the same thing. Having said all this, I also agree with Charles' answer. If the reason for a piece of code is not obvious, by all means, tell the maintainer that the code is there for a reason, and should be treated with care.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/95976", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/16273/" ] }
96,030
I'm sure you all have heard managers saying that "we need an analyzer" or "we need a designer". While I'm a .NET developer, I can hardly differentiate an analyzer from a designer (not a web designer or UI designer). Who is an analyzer? Who is a designer? Do the roles overlap?
Analysis: Define the problem. Answer this: "What do we need?" Design: Define the solution. Answer this: "How will we build it?"
{ "source": [ "https://softwareengineering.stackexchange.com/questions/96030", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/31418/" ] }
96,192
Becoming a good analyzer and designer can greatly benefit a developer. But there are definitely obstacles for this. Not everybody is interested in OOAD, and not every person who is interested, knows the path. Should a good OOAD know multiple OO languages? Or should he/she have failed projects? How can one become a good OOAD?
People who aren't interested are creating their own obstacles. I can't worry about that. For those who don't know the path, I suggest: I find that every OO language I learn makes me a better OO developer. Much can be brought from each language into others, as long as you find the community. You do learn more from failure than from success, but try to do that on your own time. Professionally, trust in those with more experience, at least some of the time -- but don't be afraid to ask "Why?". Learn all five of the SOLID principles and understand why they exist. None of them are rules, but they are good guidelines when you're lost. Test Driven Development made more of an improvement to my OO design skills than anything else I've ever learned. You will not be your best until you've gone from underengineering to overengineering and then found the correct balance (closer to the latter). Actually, scratch that, you'll never be as good as you will be two years later. Read a lot of books and blogs but take nothing as gospel. This industry still hasn't found, and may never find, a perfect path. By all means learn design patterns, but don't look for places to use them, simply use them as a facilitator to communication. Hope some of that helps.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/96192", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/31418/" ] }
96,211
I'm doing some data transmission from a dsPIC to a PC and I'm doing an 8-bit CRC to every block of 512 bytes to make sure there are no errors. With my CRC code enabled I get about 33KB/s, without it I get 67KB/s. What are some alternative error detection algorithms to check out that would be faster?
While there may be faster options than CRC, such as Fletcher , if you use them then you are likely to end up sacrificing some degree of error detection capability. Depending on what your performance and error detection requirements are, an alternative may be to use CRC code optimised to your application instead. For a comparison of CRC with other options, see the excellent answer by Martin Thompson . One option to help with this is pycrc which is a tool (written in python 1 ) which can generate C source code for dozens of combinations of crc model and algorithm . This allows you to optimise speed and size for your own application by selecting and benchmarking different combinations. 1: Requires Python 2.6 or later. It supports the crc-8 model , but also supports crc-5 , crc-16 and crc-32 amongst others. As for algorithms , it supports bit-by-bit , bit-by-bit-fast and table-driven . For example (downloading the archive): $ wget --quiet http://sourceforge.net/projects/pycrc/files/pycrc/pycrc-0.8/pycrc-0.8.tar.gz/download $ tar -xf pycrc-0.8.tar.gz $ cd pycrc-0.8 $ ./pycrc.py --model=crc-8 --algorithm=bit-by-bit --generate c -o crc8-byb.c $ ./pycrc.py --model=crc-8 --algorithm=bit-by-bit-fast --generate c -o crc8-bybf.c $ ./pycrc.py --model=crc-8 --algorithm=table-driven --generate c -o crc8-table.c $ ./pycrc.py --model=crc-16 --algorithm=table-driven --generate c -o crc16-table.c $ wc *.c 72 256 1790 crc8-byb.c 54 190 1392 crc8-bybf.c 66 433 2966 crc8-table.c 101 515 4094 crc16-table.c 293 1394 10242 total You can even do funky things like specify using dual nibble lookups (with a 16 byte look-up table) rather than single byte look-up, with 256 byte look-up table. For example (cloning the git repository): $ git clone http://github.com/tpircher/pycrc.git $ cd pycrc $ git branch * master $ git describe v0.8-3-g7a041cd $ ./pycrc.py --model=crc-8 --algorithm=table-driven --table-idx-width=4 --generate c -o crc8-table4.c $ wc crc8-table4.c 53 211 1562 crc8-table4.c Given your memory and speed constraints, this option may well be the best compromise between speed and code size. The only way to be sure would be to benchmark it though. The pycrc git repository is on github , as is its issue tracker , but it can also be downloaded from sourceforge .
{ "source": [ "https://softwareengineering.stackexchange.com/questions/96211", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/6122/" ] }
96,303
Related : As a beginning programmer, should I favor building my own libraries over using 3rd-party libraries? As an intermediate to advanced level PHP web-developer and a young one at that (15yrs.), should I build an entirely new CMS for my website or rely on pre-made software such as WordPress or Drupal? I don't exactly feel that WordPress is powerful/scalable enough to be able to handle what I'm trying to do and after tinkering with Drupal quite a bit over the last few days it just doesn't seem up to par for my liking and also isn't well documented so I'm having a bit of trouble getting it to do certain simple tasks. I would love to use something such as ExpressionEngine, but I don't have the money to dish out for a commercial license which is currently at around $300, so that's a no-go. I originally started coding my site as a temporary system so my users could purchase upgrades and other things such as in-game currency until I could get a new website up, but after I started on it I quickly realized that I needed to make it scalable, so from then on I coded with the thought of making it a full-functioning website in mind. It only took a couple of days but all of the basics are there (register, login, account modifications, etc.) and I believe it would be a great start for a private CMS. The last thing I want to add is if I were to build my own CMS, should I make use of a PHP framework such as CodeIgniter, which I have quite a bit of experience with? Maybe others developers could clue me in as to what I may be getting myself into.
It totally depends on your situation. Great sites have been made with the CMSs out there. I think we'd better first understand two concepts: CO and TCO. CO (Cost of Ownership): when you buy something, the amount you pay for it is the CO. In the case of WordPress, it's nothing, because WordPress (like many other CMS packages) is free of charge. TCO (Total Cost of Ownership): imagine that you buy a PC for 10,000 dollars. Then what? You have to spend time installing software on it (time is a valuable resource, so you're kind of still spending money on that PC), then you might purchase a webcam, another DVD writer and some extra cables. Then you need to learn how to use the installed software. In other words, the total cost of owning a PC is not $10,000. It's much more than that. In the case of CMS software, the CO ranges from nothing to very low, but much experience shows that the TCO is not low at all. Big companies spend thousands of dollars to get a good site in Joomla or WordPress. Another factor is the level of customization. Sometimes you want to use software as is, without any change. In that case, WordPress, Joomla, Drupal or any other CMS could be a very good candidate, and you shouldn't write your own CMS. But there are times when you need a high level of customization. In that case, you become really frustrated trying to customize a ready-to-use CMS to suit your requirements. I actually wanted to use a ready-made CMS, but after spending valuable time learning different CMSs and finding the weaknesses of each one, I ended up creating my own CMS. Thought Results is my personal site and is built with this CMS. I'm gonna publish it soon, so others can also use it. Still another factor is extensibility. Believe me, it wears you out to take a CMS from a static state to an extensible state. Templates, modules, plugins, providers, database and storage, routing mechanisms, and almost any part of a good CMS should be extensible. Lastly, my personal suggestion is to start building a CMS, so that at least you learn some of the most fundamental concepts behind it. But also try to use existing ones. Good luck.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/96303", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/32480/" ] }
96,331
I am really confused about this. I believe that the more experience I get, the better I become at finding mistakes and fixing them quickly. Now my boss got a website from a programmer who writes very, very bad code, and he sends me the list of problems to fix. Suppose it's a stylesheet problem: the old guy does not know how to fix it, but due to my experience I know straight away what the problem is and can fix it in two minutes -- and there are many similar problems like that. But after fixing all of them I realise that in 15 minutes I fixed all the problems which the other guy was not able to solve. I get $25 per hour, so I feel very bad charging $6 for that list of things which took many years of experience to learn. Is it OK to charge $6, or should there be some other way to charge for this?
Such a good question because it is a problem we all face as freelancers. When I made the transition to being a freelancer, the hardest thing for me to develop was a time tracking discipline. For the first year or so, I just focused on project-oriented work, and really only bothered with timers when I was "in the zone" of coding. In time I learned what a huge disservice to myself it was for me not to track as much of my day as possible. Even as I write this comment, I have a timer running entitled, "Blogging on Stack Exchange." But more on that in a second. First let's address your question. As it relates to time tracking, one of the things I found as a freelancer is that there were certain clients who tended to have lots of little issues. As an amateur, and because I felt I was being a "good guy," most of the time I wouldn't even bother billing the client. Taking two minutes to fix a problem, which sometimes is all it takes, seems hardly worth the effort to start a timer. What I found however, is that over the course of a month, it was not just one 2 minute problem, it was 10 or 20 two minute problems. Taken by themselves, it was no big deal. Taken in aggregate I was leaving money on the table. But more than that, the client had no visibility into the quantity of work I was doing for them. As a result they tended to either a) undervalue my work, b) take advantage of me, or c) just take me for granted. This is not a good relationship to have with anyone, especially a client. Next, and as someone else pointed out. Nothing really takes two minutes. There is email, the phone call, the logging into the bug tracking system, and all of the other artifacts of a good process. The process, the customer service in speaking with the client on the phone, is all part of the value you provide, and thus should be something you are compensated for. And clients should know how much time is spent by you on the phone and answering email. There was one time I presented an invoice to a client that showed how much time was spent on the phone with them. They later told me that they had no idea, and that they worked to curb their tendency to default to calling me on the phone when they had a question. A fact I appreciated given how disruptive a phone call can sometimes be. I also agree that you should bill in reasonable increments. I bill in 15 minutes increments, which is just a fancy way of saying, "I have a 15 minute minimum on any issue you want me to tackle." There are many reasons for this, but for me, the biggest reason is the hidden cost of context switching. For me to go from one task to another is not instantaneous. If only it were. Moving from one task to another often can involve me stopped to check email, go to the bathroom, look at G+/Facebook/Twitter, etc. One could say that I lack discipline, but for me this is integral to the process of me switching gears. Therefore, if I have 4 tasks on my plate that each take 15 minutes each, it doesn't take me an hour to complete them, it takes me about 1.5 hours. And that additional 30 minutes, is the hidden cost of context switching. And my clients pay for that through my minimum billable increments. Many people have also mentioned and talked about the additional value you provide as a more experienced programmer. That fact that it takes you half as much time to perform the same task as a colleague is reflective not only of your superior experience, but also of a better process you have built for yourself in managing your clients. 
This all speaks directly to the value you provide and you should compensate yourself fairly for it. This requires you to understand what your competitors are charging relative to the quality of their work. Personally, I maintain close relationships and friendships with the other freelancers in my field, which gives me insight into this problem and allows me to adjust my rates accordingly. If you find that by and large you produce the same quality work in less time, then by all means charge more. If your clients can't afford it, then look for new clients and move up in the world. Leave the penny pinching clients, and the clients who don't value work provided to them by their freelancers to smaller fish. Refer those clients to other freelancers you trust and make them someone else's problem while you work on building up a clientele that pays you more fairly. The last thing I wanted to share was something no one else really touched upon that I could see. Sometimes comping the client for the 2 minutes of work is the right thing to do from a client management perspective. Sometimes, giving them that time is what helps you build trust with the client, and firmly establishes you as the go-to person for them. It might also help you secure larger and more profitable projects in the future. Knowing when to charge and more importantly when not to charge is the hard part. But when I make the decision not to charge a client, I do go out of my way to tactfully tell them that this is "on the house." I tell them that I appreciate all the business they send my way, and that I don't mind taking care of this one issue for them. Its the least I can do, I tell them. They are usually very appreciative, and I feel it helps strengthen our relationship. Now permit me to return to the timer currently running on my desktop entitled "Blogging at StackExchange." This is not directly related to your question, but helps underscore the importance of maintaining a discipline with keeping accurate track of your time. From a business perspective, the most important metric you can track is profitability. Knowing how much time is spent doing billable vs. non-billable work is very important. It helps you establish and understand how much overhead exists in running and maintaining your business. It also helps you to identify ways in your business and process you can improve. If you realize at the end of the quarter that you spent a lot more time than you thought "blogging at Stack Exchange" and it came at the expense of actual billable work, then you might want to consider spending less time doing it. With regards to profitability though, what I find is that there is A LOT more time that goes into a project than the time that is spent coding. Not only is there all the email, and other tasks mentioned before, but their is the time spent securing the deal, billing the client, negotiating contracts, and the like. Much of this time is not billable, but knowing how much time you spend doing this might help you identify ways to streamline your business, and increase profitability at the same time. Let's say for example you charge $100 per hour, but that you spend roughly 50% of your time doing administrative non-billable work. Perhaps there is a person out there you could hire at a rate of $50/hour to take that administrative work off your hands. Then you could spend more time coding, AND increase your bottom line at the same time. Its a win-win. 
You are giving someone else valuable work, you provide a better service to your clients almost certainly, AND you make more money. And there you go, 0.79 hours spent "Blogging at Stack Exchange." I will chalk that up to my marketing budget. :)
{ "source": [ "https://softwareengineering.stackexchange.com/questions/96331", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/30851/" ] }
96,356
A surprising number of quality, scalability, and load problems have been occurring on an application I currently support that I did not originally write. Thankfully I have new projects that I have been doing from the ground up to retain some semblance of my sanity. The original team consisted of some 20 developers (most of them with outdated skill sets), there were no business requirement documents or quality assurance testers, and the project was poorly managed from the very beginning in a waterfall fashion. The early days of production were an embarrassing nightmare that involved patching brittle procedural-like code with even more brittle fixes. Features were added later that were sledgehammered into a data model that was never meant to support them, and it is not uncommon to see the same code duplicated 10 times over, resources not being safely closed, and ORM queries that fetch tens of thousands of entities just to throw out all but a handful. It is just me now, and every time a new problem crops up I rewrite a module to better standards and make it MUCH more stable, but management needs a proper explanation as to why all of this is occurring. They seem shocked and perplexed at the notion that this application is of poor quality and drowning in technical debt. Fortunately they understand the concept of technical debt and support me in my quest to eradicate it, and they are very supportive and appreciative of me, but I feel as if I just keep blaming the original team (who all left to ruin another project in a different division). The bottom line is that I don't want to be "That Guy" who always complains about the developers on the project before him. I have seen this attitude before from people in my career who I personally felt were being ignorant and not considering the circumstances and design influences that encouraged things to be the way that they were. Usually I see this attitude of blaming the previous team for poor design and implementation from idealistic junior developers who have not had the life experiences that more senior members have had and benefitted from. Do you feel that there is a better, perhaps softer, way of reporting these kinds of problems to management without stepping on the reputation of the person/team before you?
Technical debt is like financial debt. You take it on (hopefully) strategically in the development of a program with the intention that it will be paid off in the future. Sometimes people make poor technical debt decisions (such as running up a credit card), but sometimes a certain amount of technical debt is just normal. Deciding not to devote the time to make something the "right" way today with the assumption that it will need to be changed in the future is completely normal and should be anticipated. Of course there is a fine line, but thinking that you will make it the right way the first time can cause its own set of problems (analysis paralysis). Bottom line, any non-trivial project that lasts more than a couple of years will need to devote some new development time paying down technical debt. The thing is, this is true even if you write your application the right way . If you don't you are piling debt on debt, and management can certainly understand that if you present it that way. Explain this to management and instead of "blaming" the previous team all the time you can present this as "business as usual".
{ "source": [ "https://softwareengineering.stackexchange.com/questions/96356", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/25476/" ] }
96,380
We're currently in a situation where we have a choice between using an out-of-the-box object-relational mapper or rolling our own We have a legacy application (ASP.NET + SQL Server) where the data-layer & business-layer are unfortunately mashed together. The system isn't particularly complicated in terms of it's data access. It reads data from a large group (35-40) of inter-related tables, manipulates it in memory & saves it back to some other tables in a summary format. We now have opportunity for some refactoring and are looking at candidate technologies to use to seperate & properly structure our Data Access. Whatever technology we decide on we would like to: have POCO objects in our Domain Model which are Persistence Ignorant have an abstraction layer to allow us to Unit Test our domain model objects against a mocked up underlying datasource There's obviously lots of stuff out there on this already in terms of Patterns & Frameworks etc. Personally I'm pushing for using EF in conjunction with the ADO.NET Unit Testable Repository Generator / POCO Entity Generator . It satisfies all of our requirements, can be easily bundled inside a Repo/UnitOfWork Pattern and our DB Structure is reasonably mature (having already undergone a refactor) such that we won't be making daily changes to the model. However others in the group are suggesting architecting/rolling our own D.A.L. completely from scratch. (Custom DataMappers, DataContexts, Repository, Interfaces everywhere, Dependency Injection overkill to create concrete objects, Custom LINQ-to-Underlying Query Translation, Custom Caching Implementations, Custom FetchPlan Implementations...) the the list goes on and to be frank is strikes me as madness. Some of the arguments been thrown about are "Well at least we'll be in control of our own code" or "Oh I've used L2S/EF in a previous project and it was nothing but headaches". (Although I've used both in Production before and found any issues to be few and far between, and very manageable) So do any of your uber-experienced devs/architects out there have any words of wisdom that might help me steer this product away from what seems to me like it's going to be a complete disaster. I can't help but think that any benefit gained by dodging EF issues, will be lost just as quickly by attempting to re-invent the wheel.
Both arguments against existing ORMs are invalid. "Well, at least we'll be in control of our own code": why not write your own language? Your own framework? Your own operating system? And to be sure you're in control of everything, it's also a good idea to create your own hardware. "Oh, I've used L2S/EF in a previous project and it was nothing but headaches": EF is mature enough and has been used successfully by plenty of people, which means that a person who claims EF is nothing but a headache should probably start learning how to use it properly. So, do you have to write your own ORM? I wouldn't suggest that. There are already professional-level ORMs, which means that you should write your own only if (1) you have a precise context where no existing ORM can fit for some reason, and (2) you are sure that the cost of creating and using your own ORM instead of learning an existing one is much lower. This includes the cost of future support, including by another team of developers who would have to read hundreds of pages of your technical documentation in order to understand how to work with your ORM. Of course, nothing forbids you from writing your own ORM just out of curiosity, to learn things. But you should not do it on a commercial project. See also points 2 and 3 of my answer to the question Reinventing the wheel and NOT regretting it. You can see that the different reasons for reinventing the wheel don't apply here.
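To make the trade-off concrete, here is a rough sketch (not a recommended final design) of how little hand-rolled code a thin abstraction over EF can amount to. The entity and type names (Order, IRepository, IUnitOfWork, AppDbContext, EfRepository, EfUnitOfWork) are placeholders invented for illustration, and the sketch assumes the EF DbContext/DbSet API (EF 4.1 or later) is available.

    using System;
    using System.Linq;
    using System.Data.Entity;   // EF's DbContext API

    // A persistence-ignorant POCO: no EF attributes or base classes required.
    public class Order
    {
        public int Id { get; set; }
        public decimal Total { get; set; }
    }

    // The abstractions the domain model and the unit tests depend on.
    public interface IRepository<T> where T : class
    {
        IQueryable<T> Query();
        void Add(T entity);
    }

    public interface IUnitOfWork : IDisposable
    {
        void Commit();
    }

    // EF-backed implementations live behind the interfaces,
    // so tests can substitute in-memory fakes or mocks.
    public class AppDbContext : DbContext
    {
        public DbSet<Order> Orders { get; set; }
    }

    public class EfRepository<T> : IRepository<T> where T : class
    {
        private readonly AppDbContext _context;
        public EfRepository(AppDbContext context) { _context = context; }

        public IQueryable<T> Query() { return _context.Set<T>(); }
        public void Add(T entity) { _context.Set<T>().Add(entity); }
    }

    public class EfUnitOfWork : IUnitOfWork
    {
        private readonly AppDbContext _context;
        public EfUnitOfWork(AppDbContext context) { _context = context; }

        public void Commit() { _context.SaveChanges(); }
        public void Dispose() { _context.Dispose(); }
    }

Roughly everything listed in the from-scratch proposal would end up sitting behind these same two small interfaces anyway, which is why reproducing the ORM underneath them rarely pays off.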
{ "source": [ "https://softwareengineering.stackexchange.com/questions/96380", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/32505/" ] }
96,386
I sometimes see the term "Bootstrapper". I last saw it while creating some learning exercises for Prism, where I stumbled upon the UnityBootstrapper class. My question is: when would you call a class a "Bootstrapper"? Why? What does it say about the class?
In your specific example you're talking about a Dependency Injection Container Bootstrapper. This is where you configure all of your instances and generally prepare the container for use. It also ends up being where most of the coupling in your application resides (it has to reside somewhere), but this is a side effect, not the purpose. In more general terms, a bootstrapper is just a class or method which prepares/configures a group of classes/objects or an entire API for your specific needs and use.
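As a rough, hand-rolled sketch of that idea (this is not Prism's actual UnityBootstrapper base class, and the ILogger/IOrderService types below are invented placeholders), a bootstrapper for a Unity-based app might look like the following; the container calls shown assume the classic Microsoft.Practices.Unity API (UnityContainer, RegisterType, RegisterInstance, Resolve).

    using Microsoft.Practices.Unity;

    public interface ILogger { void Log(string message); }

    public class FileLogger : ILogger
    {
        private readonly string _path;
        public FileLogger(string path) { _path = path; }
        public void Log(string message)
        {
            System.IO.File.AppendAllText(_path, message + System.Environment.NewLine);
        }
    }

    public interface IOrderService { void PlaceOrder(int orderId); }

    public class OrderService : IOrderService
    {
        private readonly ILogger _logger;
        public OrderService(ILogger logger) { _logger = logger; }   // injected by the container
        public void PlaceOrder(int orderId) { _logger.Log("Placing order " + orderId); }
    }

    public static class Bootstrapper
    {
        public static IUnityContainer Run()
        {
            var container = new UnityContainer();

            // Map abstractions to concrete types. This is where most of the
            // application's coupling ends up being concentrated.
            container.RegisterType<IOrderService, OrderService>(
                new ContainerControlledLifetimeManager());   // one shared instance

            // Pre-built instances can be registered directly.
            container.RegisterInstance<ILogger>(new FileLogger("app.log"));

            // Resolving the root object builds the whole dependency graph.
            var service = container.Resolve<IOrderService>();
            service.PlaceOrder(42);

            return container;
        }
    }

In Prism itself you would normally derive from the supplied UnityBootstrapper and override its configuration methods instead, but the job being done is the same: build and configure the container, then resolve the application's root object.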
{ "source": [ "https://softwareengineering.stackexchange.com/questions/96386", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/32323/" ] }
96,411
I am learning Python and am intrigued by the following point in PEP 20, The Zen of Python: "There should be one-- and preferably only one --obvious way to do it. Although that way may not be obvious at first unless you're Dutch." Could anyone offer concrete examples of this maxim? I am particularly interested in the contrast with other languages such as Ruby. Part of the Ruby design philosophy (originating with Perl, I think?) is that having multiple ways of doing something is A Good Thing. Can anyone offer some examples showing the pros and cons of each approach? Note: I'm not after an answer as to which is better (which is probably too subjective to ever be answered), but rather an unbiased comparison of the two styles.
Compared to languages like Perl, Python has a limited number of control constructs: only if and no unless; only for, which iterates over sequences, and no foreach or C-style for; only while, which checks a condition every loop, and no do-while; only if-elif and no switch. There's only one comment construct, the #, and for every line you can tell whether it is commented out without looking at previous lines. Also, there's essentially one way to indent your source; most cases of creative indentation are syntactically excluded. This makes parsing Python source easier for humans. There are attempts to be minimal-but-complete in the built-in types and the standard library. For mutable lists you use the only built-in list type; it's O(1) for most operations, and you never have to choose the right implementation. For immutable lists, equally, you just use the tuple type. For maps, you use the only built-in dict, which is damn efficient in most cases; no need to ponder which implementation to use. Python 3 extends this to integers: no matter how big your integer is, you use the same type and never care about coercion. Python tries to avoid syntactic sugar, but sometimes it adds syntactic sugar just to make the obvious way obvious. You can write if foo is not None instead of if not (foo is None) because 'is not' is special-cased. Still, foo is not None reads easily, can't be misinterpreted, and you don't have to think; you just write the obvious thing. Of course, most of the more complex things in Python can be done in several ways. You can add methods to classes by declaration or by simple slot assignment, you can pass arguments to functions in a number of creative ways, etc. That's just because the internals of the language are mostly exposed. The key is that there's always one way which is intended to be the best, the cover-all case. If other ways exist, they were not added as equal alternatives (like if and unless) but merely expose the inner workings. Slowly but steadily such alternatives are obsoleted (not eliminated!) by enhancing the known best mechanism. Decorators became the one blessed syntax for AOP-style wrapping of function calls. Before 2.6 you had to use the __metaclass__ magic member to hook into class creation; now the same decorator syntax can be applied to classes for many of those cases, too. Prior to 3.0 you had two sorts of strings, byte-oriented and Unicode, which you could inadvertently mix. Now you have only the Unicode str and the binary-transparent bytes, which you can't mix by mistake.
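A few lines of plain Python illustrate the points above; nothing here is version- or library-specific, and the names and values are arbitrary examples.

    # One loop construct covers lists, tuples, dicts, files and generators alike.
    names = ["ada", "grace", "barbara"]
    for name in names:
        print(name)

    # No do-while, no unless, no switch: every branch is an if/elif/else.
    def describe(n):
        if n < 0:
            return "negative"
        elif n == 0:
            return "zero"
        else:
            return "positive"

    # The special-cased comparison reads like English and can't be misparsed.
    value = None
    if value is not None:
        print(value)

    # One built-in type per job: dict for maps, list for mutable sequences,
    # tuple for immutable ones -- no choosing between implementations.
    counts = {"spam": 3, "eggs": 2}
    stack = [1, 2, 3]
    point = (2.5, 4.0)

Each of these could be written other ways, but in every case one spelling is clearly the intended, idiomatic one, which is exactly the contrast with Perl's or Ruby's deliberate redundancy.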
{ "source": [ "https://softwareengineering.stackexchange.com/questions/96411", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/32514/" ] }
96,504
I have been asked to take a person in our IT department who has no programming experience but is smart and capable, and help him move into programming as, let's say, an entry-level developer supporting existing .NET applications. I definitely believe this person can do it, but I am looking for the quickest way to get him up to speed. I have a bunch of ideas but wanted to see what other people thought. I know this is also going to be highly dependent on how he learns, but I am talking in general. So the question: what do you think are the best ways to get a non-developer up to speed quickly on development (.NET in this case)?
I usually prescribe the same sequence for anyone who wants to learn programming. It's very theoretical, but it lays a good foundation. It should take three or four months of full-time study, but programming isn't something you learn overnight. If you can't get through this sequence, you're not going to be able to program, so you might as well give up now. 1. Code, by Charles Petzold. 2. The C Programming Language, by K&R. 3. The Structure and Interpretation of Computer Programs, by Abelson and Sussman. My rule is: work your way through those three books by sheer force, if necessary. Ask any questions you have, but only after struggling to figure it out on your own. If you can make it through those three books, congratulations, you're a programmer; now I can throw you PERL IN 15 MINUTES or whatever rubbish is at the bookstore in the Big Bookcase of Java and you will manage fine. If you can't make it through those three books, give up, go home, you're never going to get it. You don't really need to learn C and Scheme for their own sake, to be honest; they're just a foundation for future learning. The latter two books are both very simple on the surface (C and Scheme are super-easy languages), but they go very deep into the real art of programming without wasting time on confusing syntax, so they are excellent for starting to re-wire your brain into a good programmer's. Attempts to take a shortcut and go directly to learning the exact thing you want to learn right now (like starting with C# and ASP.NET) are doomed.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/96504", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/7792/" ] }
96,524
I find that when someone asks what the best way to learn how to program is, people usually provide them with references to a bunch of texts written by various authors. However, I don't believe many people learn to program from books at all. I find that they are usually faced with a challenge and then use programming as a tool to overcome it. For example, I 'got into' programming because I wanted to start a server for a game I was playing, so I googled and read through the support for that particular server, and now I am an employed software engineer, using only the skills I developed (and then further developed) by coding C# scripts for a not-very-popular server package. So my question is: do people generally find it easier to learn from these books? I know I have looked at a few of them and found them far too 'dry' to encourage me to finish them.
Here's how I learn, generally speaking: 1. Buy a book. 2. Don't read it cover to cover, but know where everything can be found. 3. Find a pet project to work on. 4. Learn from experience, but use the book as a reference. 5. Where the book fails, there is always Google. Note: the third point sometimes comes first. Edit: To answer the question "Why?": Google is great for finding out how to do something, but it's not great for finding out what you don't know. Why would you ever google "C# delegates" if you didn't first know that C# has a concept called delegates and that it might be useful for solving a problem you're working on? Also, the signal-to-noise ratio can be poor sometimes. If you have a rough idea in your head of how something is done, then you can easily confirm whether the article you are reading is correct. But if you've no clue... you can end up in a bigger mess.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/96524", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/20681/" ] }
96,630
I am working on a fairly large and buggy application - and due to the way it's written (I'll spare you the details, but it violates rules in most areas you can think of), it is next to impossible to develop it without major refactoring. A significant part of the app was authored by interns, n00bs, etc.; but there has also been a programmer with the rank of Master Developer, and with all humility, the code he left behind is dubious as well - in a different way perhaps, but still. Granted, his code tends to get the job done - most of the time - but it's typically cryptic, reinvents the wheel (e.g. a big custom method accomplishing a rather ordinary SQL db backup), etc. Basically, needless confusion plus lots of overengineering. And it got me thinking that being a highly skilled coder (I deliberately don't use the word "developer", assuming that it indicates a wider set of skills), if not accompanied by other qualities, can actually be sort of poisonous. Assuming that it's true, some of the reasons I could think of are: if you're coding with ease, it feels (or actually is, in the short run) just quicker to snap out your own solutions on the spot, without turning to libraries, pre-existing functionality, etc.; and if one is experienced enough to easily maintain a mental image of a complex program, one is less inclined to split it into modules, layers, etc. So my point is that if a fluent coder happens to be a bad developer, their fluency not only doesn't compensate for the latter, but actually does even more harm. What do you think of that? Is it true (and to what extent, if so)?
"if you're coding with ease, it feels (or actually is, in the short run) just quicker to snap out your own solutions on the spot, without turning to libraries, pre-existing functionality, etc." Yes. I've been that guy. And I've learned that it's a terrible thing. It's all very well for you; you don't have to learn something new. But what about the rest of your team? They become very reliant on you. They cannot google for "Clive's Quicky ORM" to get help on the object-relational mapper you've written. And then comes the day they need to hire someone new and they can't look for people with experience in Clive's Quicky ORM. And finally comes the day when you leave and somebody notices a bug in your ORM. And it will be there, because you don't have a whole community of people testing and fixing your product. Yes, learning Hibernate might have taken more time than writing something lightweight. But the benefit of doing so is far too great to ignore, IMHO.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/96630", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/29029/" ] }
96,729
Recently, I was approached by a local ad agency with a job opportunity. They are bringing all web/interactive development in-house and adding to their development team. I'm growing sick of my cushy, yet boring corporate job, and am intrigued by the position. Having only worked for software shops where the primary business was making software, I worry that they may not put emphasis on quality software practices, since development is not the focus of their business. Could anyone with experience at both compare/contrast working at a software company with working at a company that just happens to have an in-house software development team or department?
It will depend on the company, but usually, if software isn't their main focus, the software will be of lesser quality. The process, if they have any, will be less stringent; the testing, non-existent. And the work will be less technically challenging overall. They'll want it to work, and work now, and that'll be good enough. But some places are hip about software development, even if they're mom & pop shops doing something else entirely. It depends on the business leadership being open to good ideas, the tech leadership knowing enough to do it right, and having people who can explain a good idea. Which might be you. Interview the company. Ask them if they know of / adhere to the Joel Test; most of its points are good ones. See if they understand technical debt and The Mythical Man-Month. Who is your project manager, what process does he use, and how geeky is he?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/96729", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/10311/" ] }
96,768
I work at a Microsoft shop doing mainly web development. We had a client who asked us to review (improve) the data model for his web app, but said that he wants to develop his app in PHP (he knows "a guy" who can do it). When I asked him why he wants to go with PHP, he gave me the standard set of arguments from the 90's: Microsoft is evil, and PHP is free; writing an ASP.NET app is more expensive (software-wise); and why would Facebook use PHP if it was a bad idea? [classic] He had a few more comments about the costs associated with going .NET. The truth is that "Microsoft is expensive" does not hold water any longer; with their "Express" suite, you can develop an ASP.NET app without paying anything for software. When it comes to hosting, you can save a few bucks with PHP over .NET, but that's a small fraction of the projected development costs (we quoted 10-15k). Going back to my question: what arguments could I give to a client in favor of ASP.NET over PHP? [please provide sources for quantitative claims]
Just tell him the truth: you are not a PHP shop. (That's reason enough why YOU can't do it in PHP.) This is the price you are quoting for .NET; if he can beat that elsewhere, so be it. It's a horrible sales tactic to knock down your competition based on the platform used (even if it has a lot of weight in the client's mind). Sell yourself, sell your strengths, and admit where you lack expertise. You will win the job on your merits. "We can build great websites - look at our portfolio, look at our track record, check our references. We can do what you're asking and we can do it at a fair and competitive price. But we can't do it in PHP."
{ "source": [ "https://softwareengineering.stackexchange.com/questions/96768", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/5925/" ] }
96,888
As an independent vendor being contracted to write a product for a company, is it reasonable to ship only the source code and user documentation while leaving out design documents, architecture diagrams, unit tests, etc. (basically anything else not strictly needed to run the product or extend it)? The goal is to make the end product extensible for the client so they can further its development internally, but not for free. They would have to dig deeply into the source to make sense of some design decisions, and they'd be responsible for writing their own comprehensive tests to guard against regressions being introduced. The idea here is not to make the code unintelligible. I would just like to create a future opportunity for being contracted to write extensions, by virtue of the "inside knowledge" and expertise I have from being the original author. Would this be considered unethical? Edit: The contract is currently being negotiated, so the issue of what constitutes the final deliverables hasn't been firmly decided. Additionally, I should have mentioned that I will maintain ownership of the product and will only grant the client a usage license. Does this detail make a difference in whether this is regarded as bad form?
Contractually, they'd be smart to include some sort of clause about the scope of the documentation they want, and they should compensate you for presenting those artifacts in an understandable and professional format (redo the napkin sketch). It's not unethical to only do what is asked and paid for, but going out of your way to withhold information is just wrong. If you give me a poorly documented and convoluted app that I can't work with, I'm more inclined to think you're a bad programmer than some sort of genius my company can't do without. Build a reputation for doing things right.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/96888", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/32706/" ] }
96,915
Git is an excellent version control system. If we exclude the fact that it doesn't have excellent GUI support, it's really good and fast. But source control systems like ClearCase have extensive support for enterprise customers, and companies invest huge amounts in source control servers and licenses. Lately, large companies like Google have been adopting Git over other version control systems, but such companies have strong open source groups that consistently provide development and support for the tool (they might even have a custom version of Git of their own). At the same time, most large companies don't really bother adopting open source projects and adapting them to their own needs. Is Git really a reliable tool for an enterprise environment, especially on the Windows platform? Support for Git is in question since it's an open source product. Are there any companies that provide solutions and support? How do the server costs compare to other version control systems like ClearCase?
GitHub is NOT a version control system -- it "hosts" the version control system called Git. Pun aside, this is a very important difference -- know it well. Regarding enterprise use, I can tell you that Git is at least as capable (and convenient, and arguably better) as something like SVN. And you can choose a suitable version control strategy (workflow) based on the size and scope of the project and your team; non-distributed systems can't afford you this flexibility. For Windows, check out Msysgit or Visual Studio Extensions for Git -- Git works very well on Windows. Also, Windows users should look at the training series from TekPub -- it is all Windows. UPDATE [Feb 2013]: getting started with git in visual studio. Your question is not uncommon, and you can google it and find plenty of material explaining why, how, and whether to use Git in the enterprise. Go read this, this, and this. Look at GitLab, a self-hosted application that provides an interface similar to GitHub. Look at the gitolite project. Look at Git hosting options: GitHub Enterprise, Gitorious, Git Enterprise, Unfuddle. Still don't like Git? Look at another DVCS called Mercurial.
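As a small illustration of that workflow flexibility (the repository URL and branch names below are made up, and this is only one of many possible workflows), a team migrating from a centralized tool can start with a plain shared-repository flow using nothing but standard Git commands:

    # One-time: get a local copy of the shared repository.
    git clone https://git.example.com/acme/product.git
    cd product

    # Do each piece of work on a short-lived topic branch.
    git checkout -b feature/invoice-export
    # ...edit files...
    git add .
    git commit -m "Add invoice export"

    # Stay current with the shared mainline, then publish the branch for review.
    git pull --rebase origin master
    git push -u origin feature/invoice-export

    # After review, merge into master (locally or via the hosting tool).
    git checkout master
    git merge --no-ff feature/invoice-export
    git push origin master

Nothing in Git forces this particular shape; the same commands scale from a single shared repository to a fully distributed maintainer model, which is the flexibility mentioned above.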
{ "source": [ "https://softwareengineering.stackexchange.com/questions/96915", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/16273/" ] }