source_id (int64, 1 to 4.64M) | question (string, lengths 0 to 28.4k) | response (string, lengths 0 to 28.8k) | metadata (dict) |
---|---|---|---|
47,860 | Honestly, I hate the word "Pythonic" -- it's used as a simple synonym of "good" in many circles, and I think that's pretentious. Those who use it are silently saying that good code cannot be written in a language other than Python. Not saying Python is a bad language, but it's certainly not the " end all be all language to solve ALL of everyone's problems forever! " (Because that language does not exist). What it seems like people who use this word really mean is "idiomatic" rather than "Pythonic" -- and of course the word "idiomatic" already exists. Therefore I wonder: Why does the word "Pythonic" exist? | Those who use it are silently saying that good code cannot be written in a language other than Python. No, those who use it are saying "this looks like good Python code". Nothing more, nothing less. It applies in the context of Python code. It's used to contrast code that uses Python idioms to code that doesn't use Python idioms. Yes, if you were to write Python code as though it's, say, Java, it would probably be described as "not Pythonic". This is not to say that Java code written like Java code is ugly, or that anything not written in the Python style is ugly; it's to say that Python code not written using Python idioms is not Pythonic. "Pythonic" is synonymous with "idiomatic", but more specifically, it's synonymous with "idiomatic Python". "Pythonic" does not say anything about code written in other languages. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/47860",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/886/"
]
} |
47,979 | Five years ago, I lost my ability to concentrate long-term, and therefore ability to code with professional efficiency. I know why it happened, I understood how it happened, and on top of being able to re-create my calm and thus relaxed focus, I overcame the original (rooted in childhood) reason why my mind tilted on the overall situation back then; My understanding isn't rooted in words that a psychologist told me, I actually grokked them first-hand. I'm pretty much confident to be able to churn out productivity, possibly even more so than pre-burnout. I also never lost my interest in code nor did I stray from trying to get my abilities back; I kept my knowledge up to date (I could always relatively painlessly learn things coding-related, just not apply them) and thus can say that I'm a better developer than before, even if my average LOC-count over those years is abysmally low. On the other hand, now I have a biography that includes more time on the dole than in a job. What would convince you, as an employer, to give my application a chance? I don't believe I should just keep the whole topic out of it. EDIT: I think I should add that I didn't start searching my childhood for causes, it was the solution that pointed me to the cause. Alas, not doing my best to fix the company (which was disintegrating as I left, and completely disbanded a year later) is deeply rooted in the fact that in my childhood I, at one point, gave up on fixing my parent's relationship, misattributing it to my own failure at empathy instead of them being an utterly hopeless case, and I don't really see a way to explain that without referring to childhood. I'm perfectly able to say that without breaking out into tears, though. That said, yes, I'm aware that I'm writing to you guys as friends, not employers, right now, that's the reason why I show my distrust in you by using a one-time account .oO( ... ) Yes, I'm planning to release some OSS code before I apply anywhere. EDIT 2: I'm German and going to apply at a German company, so "creative truth" is definitely not an option. As to the chosen answer: I chose Renesis over Pierre because, while the latter did an awesome job at motivating, getting across that I shouldn't give up and giving me points to beef up my social skills section with, Renesis actually answered my question by summing up the involved key factors. | The best approach is to find a way to describe your situation in a learned-from-my-mistakes, learned-what-not-to-do way. Don't rationalize. No employer is going to be happy to hire an employee who spent five years on the dole if they completely rationalize it away. You may feel like this was necessary — and at some point it did become necessary for you. But, the bottom line is you would have been better off if you hadn't needed it, and realizing what could have saved you is the perspective an employer is going to want to see that you have. Don't be overly emotional about it. Specifically, don't point to childhood, as that will conjure up rationalization red-alarms in the mind of the employer. (What you've told us here, is what you'd say to a friend. Employers are fundamentally different, they have to be.) Employers want employees who are stable, and unfortunately, illustrating the emotional side of it to your employer may make them jump to conclusions that you have a lack of stability. As far as your skills being sharp, just show it. 
Don't talk about all the research you did or how quickly you learn (that could very easily mean nothing) — simply, ace your tech interview. And make sure to ask them smart questions, to show you are inquisitive and analytical. Don't be discouraged — there are many potential candidates out there interviewing with short histories. The important thing to remember is potential employers have very little time to get to know you — so in some respects, they have to jump to conclusions. Your job is to help them jump to the best ones. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/47979",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17332/"
]
} |
48,100 | I recently finished my Master of Science in Software Engineering, and I am about to start my professional career in a few weeks. My role will be as a Junior Developer for a company which develops software using Java & related technologies (among them Spring and Hibernate). To be honest, I am really excited about what is coming, especially because I want to develop my career as a Java developer. I am also very interested in gaining experience in the field. Additionally, this is going to be my first work experience as a professional developer so I really want to do my best from the very beginning. I would like to know which skills and abilities, both soft and technical, would be most appreciated in a new professional (Junior Developer) that could be part of your team and in which skills I should focus on to achieve a successful career as a Software Engineer. | A lot of these are true no matter where you are in your career, but might be especially important for someone who is just starting out. Listen more than you talk. Learn from what other people are saying. Be humble. Don't be afraid to share your ideas, but don't assume that you're right and everyone else is wrong. If you see something you think is wrong, ask about it, don't make pronouncements about it. Keep learning. The foundation from your education is (should be) great, but you have only begun to learn the profession. Learn by doing. I don't think you can really learn unless you are actually practicing what you are learning. The customer rarely knows what he wants until he sees it. Get used to your requirements changing. Adopt a style of development (if allowed) that gets things in front of the customer quickly to get feedback. Find people who are good at working with customers and ask them to mentor you in how to develop relationships with them. Write well-tested, robust code. Getting it done is not the goal; getting it done right is. If you're any good at it, speed will come with time. Work hard. Don't wait to be asked to do something; look for or ask for things to do. Own up to your mistakes or your team's mistakes. Don't throw your team members under the bus in front of the customer, but be honest when you have code problems. You may think that your teammates want you to be a brilliant coder. That would be awesome, but your teammates really want you to be competent and not a jerk. If you're going to be a jerk, you'd better be brilliant. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/48100",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17371/"
]
} |
48,237 | My friends and I have been struggling to classify exactly what is an integration test. Now, on my way home, I just realised, that every time I try to give a real world example of an integration test, it turns out to be an acceptance test, ie. something that a business person would say out loud that specs out what the system should be delivering. I checked the Ruby on Rails documentation for their classification of these testing types, and it's now completely thrown me. Can you give me a short academic description of an integration test with a real world example? | At the moment I like this statement : "It’s not important what you call it, but what it does" made by Gojko Adzic in this article . You really need to specify with the people talking about the tests what you intend to test. There are a lot of people having different views, depending on what their role is. For testers a general accepted test methodology in the Netherlands is TMap . TMap makes the following distinction. unit test unit integration test system test system integration test acceptance test (all kinds/levels) functional acceptance test user acceptance test production acceptance test They have more specific kind of tests that can be performed within the above mentioned tests. Look at this word doc for an overview. Wikipedia also has a nice overview . The book the pragmatic programmer says: a unit test is a test that exercises a module integration tests show that the major parts of a system work well together Looking at these different sources and putting in some of my own experiences and opinions I would start by making distinctions by three categories who does the testing in general what is tested what is the goal of the test Unit test : test logic in classes by programmers to show code level correctness. They should be fast and not dependend on other parts of the system that you don't intend to test Functional acceptance test : test use case scenario's on a limited (specially created) data set done by the test department to show that every specified scenario works as specified. User acceptance test : test use case scenario's on production like data done by representatives of the users to make them formally accept the application Integration test : Test communication paths between different parts of the module done by the test department or by developers to show that all modules work correctly together. My list above is just a start and a suggestion but I really do think: "It’s not important what you call it, but what it does" Hope this helps. 26-10-2016 Edit: Just recently a very nice introduction was placed on YouTube Unit tests vs. Integration tests - MPJ's Musings - FunFunFunction #55 | {
"source": [
"https://softwareengineering.stackexchange.com/questions/48237",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/15177/"
]
} |
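The distinction drawn above is easier to see in code. Below is a tiny invented C++ sketch (not from the answer, using plain assert rather than any test framework): the unit test exercises one function in isolation, while the integration test exercises the communication path between two modules.
// Hypothetical example: two small "modules" and one test of each kind.
#include <cassert>
#include <string>
// Module 1: pure price calculation logic.
int total_price_cents(int unit_price_cents, int quantity) {
    return unit_price_cents * quantity;
}
// Module 2: formatting, which depends on the calculation module.
std::string invoice_line(int unit_price_cents, int quantity) {
    return "total: " + std::to_string(total_price_cents(unit_price_cents, quantity)) + " cents";
}
// Unit test: logic of a single module, no other parts of the system involved.
void unit_test_total_price() {
    assert(total_price_cents(250, 4) == 1000);
}
// Integration test: checks that the two modules work correctly together.
void integration_test_invoice_line() {
    assert(invoice_line(250, 4) == "total: 1000 cents");
}
int main() {
    unit_test_total_price();
    integration_test_invoice_line();
    return 0;
}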
48,401 | The typical reason I hear to why people bash C++ is that they don't actually know C++, they just know "C with classes", which apparently is different. I am just beginning to learn C++, however, I want to actually learn C++ and not simply "C with classes". How can I ensure I learn C++ properly? Some examples would be wonderful. | For one thing, use the STL. Above all, know your containers ( vector , deque , list , map , set , &c.) and their performance characteristics. Have a solid understanding of where and how to apply even the basics ( accumulate , transform , remove_if ) of the algorithmic primitives defined in the <algorithm> header. Understand that C++ is a multi-paradigm language, and don’t try to force everything into the OO model. If something you’re doing isn’t plain, legible, and type-safe, chances are you're doing it the C way. Learn the basic standards of type safety, const correctness, reference semantics, and RAII, all things that subtly but profoundly set C++ apart from C. Keep up to date on current developments in the language (type inference with auto , lambdas, rvalue references) and apply them to improve the clarity and quality of your code. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/48401",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7682/"
]
} |
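To make the STL advice concrete, here is a short self-contained sketch (my own example, not part of the answer) that exercises accumulate, the erase/remove_if idiom, transform, and lambdas on a std::vector:
// Prefer standard containers and <algorithm> primitives over hand-rolled C-style loops.
#include <algorithm>
#include <iostream>
#include <numeric>
#include <vector>
int main() {
    std::vector<int> values = {3, -1, 4, -1, 5, 9, -2, 6};
    // accumulate: sum the elements without writing an explicit loop.
    const int sum = std::accumulate(values.begin(), values.end(), 0);
    // remove_if + erase: drop the negative entries (the erase-remove idiom).
    values.erase(std::remove_if(values.begin(), values.end(),
                                [](int v) { return v < 0; }),
                 values.end());
    // transform: square each remaining element in place, using a lambda.
    std::transform(values.begin(), values.end(), values.begin(),
                   [](int v) { return v * v; });
    std::cout << "sum of original values: " << sum << '\n';
    for (const auto v : values) {
        std::cout << v << ' ';
    }
    std::cout << '\n';
    return 0;
}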
48,413 | Java allows marking variables (fields / locals / parameters) as final , to prevent re-assigning into them. I find it very useful with fields, as it helps me quickly see whether some attributes - or an entire class - are meant to be immutable. On the other hand, I find it a lot less useful with locals and parameters, and usually I avoid marking them as final even if they will never be re-assigned into (with the obvious exception when they need to be used in an inner class). Lately, however, I've come upon code which used final whenever it could, which I guess technically provides more information. No longer confident about my programming style, I wonder what the other advantages and disadvantages of applying final everywhere are, what the most common industry style is, and why. | I use final the same way as you. To me it looks superfluous on local variables and method parameters, and it doesn't convey useful extra information. One important thing is that I strive to keep my methods short and clean , each doing a single task. Thus my local variables and parameters have a very limited scope, and are used only for a single purpose. This minimizes the chances of reassigning them inadvertently. Moreover, as you surely know, final doesn't guarantee that you can't change the value/state of a (nonprimitive) variable. Only that you can't reassign the reference to that object once initialized. In other words, it works seamlessly only with variables of primitive or immutable types. Consider final String s = "forever";
final int i = 1;
final Map<String, Integer> m = new HashMap<String, Integer>();
s = "never"; // compilation error!
i++; // compilation error!
m.put(s, i); // fine This means that in many cases it still doesn't make it easier to understand what happens inside the code, and misunderstanding this may in fact cause subtle bugs which are hard to detect. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/48413",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8331/"
]
} |
48,419 | There is the classic OOP problem of method chaining vs "single-access-point" methods: main.getA().getB().getC().transmogrify(x, y) vs main.getA().transmogrifyMyC(x, y) The first seems to have the advantage that each class is only responsible for a smaller set of operations, and makes everything a lot more modular - adding a method to C doesn't require any effort in A, B or C to expose it. The downside, of course, is weaker encapsulation , which the second code solves. Now A has control of every method that passes through it, and can delegate it to its fields if it wants to. I realize there's no single solution and it of course depends on context, but I would really like to hear some input about other important differences between the two styles, and under what circumstances should I prefer either of them - because right now, when I try to design some code, I feel like I'm just not using the arguments to decide one way or the other. | I think the Law of Demeter provides an important guideline in this (with its advantages and disadvantages, which, as usual, should be measured on a per case basis). The advantage of following the Law of Demeter is that the resulting software tends to be more maintainable and adaptable. Since objects are less dependent on the internal structure of other objects, object containers can be changed without reworking their callers. A disadvantage of the Law of Demeter is that it sometimes requires writing a large number of small "wrapper" methods to propagate method calls to the components. Furthermore, a class's interface can become bulky as it hosts methods for contained classes, resulting in a class without a cohesive interface. But this might also be a sign of bad OO design. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/48419",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8331/"
]
} |
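As a concrete illustration, here is a small hypothetical C++ sketch (class and method names taken from the question) of the two styles: chaining through getters versus a delegating wrapper on A, which is what the Law of Demeter favours.
#include <iostream>
class C {
public:
    void transmogrify(int x, int y) { std::cout << x + y << '\n'; }
};
class B {
public:
    C& getC() { return c; }
private:
    C c;
};
class A {
public:
    B& getB() { return b; }
    // Wrapper method: A keeps control and hides its internal structure,
    // at the cost of one more method to write and maintain.
    void transmogrifyMyC(int x, int y) { b.getC().transmogrify(x, y); }
private:
    B b;
};
int main() {
    A a;
    a.getB().getC().transmogrify(1, 2);  // chained: caller must know A -> B -> C
    a.transmogrifyMyC(1, 2);             // delegated: caller only talks to A
    return 0;
}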
48,421 | Any preferences for Asp.Net programmers on how to document their code? I read XML with Sandcastle is a good way to go. What do you use? | I think the Law of Demeter provides an important guideline in this (with its advantages and disadvantages, which, as usual, should be measured on a per case basis). The advantage of following the Law of Demeter is that the resulting software tends to be more maintainable and adaptable. Since objects are less dependent on the internal structure of other objects, object containers can be changed without reworking their callers. A disadvantage of the Law of Demeter is that it sometimes requires writing a large number of small "wrapper" methods to propagate method calls to the components. Furthermore, a class's interface can become bulky as it hosts methods for contained classes, resulting in a class without a cohesive interface. But this might also be a sign of bad OO design. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/48421",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/14954/"
]
} |
48,562 | The senior dev in our shop insists that whenever code is modified, the programmer responsible should add an inline comment stating what he did. These comments usually look like // YYYY-MM-DD <User ID> Added this IF block per bug 1234. We use TFS for revision control, and it seems to me that comments of this sort are much more appropriate as check-in notes rather than inline noise. TFS even allows you to associate a check-in with one or more bugs. Some of our older, often-modified class files look like they have a comment-to-LOC ratio approaching 1:1. To my eyes, these comments make the code harder to read and add zero value. Is this a standard (or at least common) practice in other shops? | I usually consider such comments a bad practice and I think this kind of information belongs to the SCM commit logs. It just makes the code harder to read in most cases. However , I still often do something like this for specific types of edits. Case 1 - Tasks If you use an IDE like Eclipse, Netbeans, Visual Studio (or have some way of doing text searches on your codebase with anything else), maybe your team uses some specific "comment tags" or "task tags". In which case this can be useful. I would from time to time, when reviewing code, add something like the following: // TOREVIEW: [2010-12-09 haylem] marking this for review because blablabla or: // FIXME: [2010-12-09 haylem] marking this for review because blablabla I use different custom task tags that I can see in Eclipse in the task view for this, because having something in the commit logs is a good thing but not enough when you have an executive asking you in a review meeting why bugfix XY was completely forgotten and slipped through. So on urgent matters or really questionable pieces of code, this serves as an additional reminder (but usually I'll keep the comment short and check the commit logs because THAT's what the reminder is here for, so I don't clutter the code too much). Case 2 - 3rd-Party Libs' Patches If my product needs to package a 3rd party piece of code as source (or library, but re-built from source) because it needed to be patched for some reason, we document the patch in a separate document where we list those "caveats" for future reference, and the source code will usually contain a comment similar to: // [PATCH_START:product_name]
// ... real code here ...
// [PATCH_END:product_name] Case 3 - Non-Obvious Fixes This one is a bit more controversial and closer to what your senior dev is asking for. In the product I work on at the moment, we sometimes (definitely not a common thing) have a comment like: // BUGFIX: [2010-12-09 haylem] fix for BUG_ID-XYZ We only do this if the bugfix is non-obvious and the code reads abnormally. This can be the case for browser quirks for instance, or obscure CSS fixes that you need to implement only because there's a document bug in a product. So in general we'd link it to our internal issue repository, which will then contain the detailed reasoning behind the bugfix and pointers to the documentation of the external product's bug (say, a security advisory for a well known Internet Explorer 6 defect, or something like that). But as mentioned, it's quite rare. And thanks to the task tags, we can regularly run through these and check if these weird fixes still make sense or can be phased out (for instance, if we dropped support for the buggy product causing the bug in the first place). This just in: A real life example In some cases, it's better than nothing :) I just came across a huge statistical computation class in my codebase, where the header comment was in the form of a changelog with the usual yadda yadda: reviewer, date, bug ID. At first I thought of scrapping but I noticed the bug IDs did not only not match the convention of our current issue tracker but neither did they match the one of the tracker used before I joined the company. So I tried to read through the code and get an understanding of what the class was doing (not being a statistician) and also tried to dig up these defect reports. As it happens they were fairly important and would have maed the life of the next guy to edit the file without knowing about them quite horrible, as it dealt with minor precision issues and special cases based on very specific requirements emitted by the originating customer back then. Bottom line, if these had not been in there, I wouldn't have known. If they hadn't been in there AND I had had a better understanding of the class, I would have noticed that some computations were off and broken them by "fixing" them. Sometimes it's hard to keep track of very old requirements like these. In the end what I did was still remove the header, but after sneaking in a block comment before each incriminating function describing why these "weird" computations as they are specific requests. So in that case I still considered these a bad practice, but boy was I happy the original dev did at least put them in! Would have been better to comment the code clearly instead, but I guess that was better than nothing. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/48562",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/11211/"
]
} |
48,635 | Since this site is read by a global audience of programmers, I want to know if people generally agree that the vast majority of software innovation - languages, OS, tools, methodologies, books, etc. - still originates from the USA, Canada, and the EU. I can think of a few exceptions, e.g. Nginx webserver from Russia and the Ruby language from Japan, but overwhelmingly, the software I use and encounter daily is from North America and the EU. Why? Is history and historical momentum (computing having started in USA and Europe) still driving the industry? And/or, is some nebulous (or real) cultural difference discouraging software innovation abroad? Or are those of us in the West simply ignorant of real software innovation going on in Asia, South America, Eastern Europe, etc.? When, if ever, might the centers of innovation move out of the West? | As a Japanese person myself, I'll admit that there are a lot of cultural factors that make countries like Japan less competitive in the software industry. One problem is that most Japanese companies devote significantly more resources to marketing than a typical US company would. Anything that doesn't produce immediate value gets shot down by managers, especially nowadays with the "kaizen philosophy" of the 70s and 80s being replaced with a new buzzword, "keihi sakugen", or cost-cutting. Intangible projects like middleware and libraries are particularly scarce and vulnerable to being slashed by myopic managers. A lot of the impressive research, for instance in the fields of computer vision and robotics, tends not to get anywhere because they create extremely elaborate proof-of-concept projects that take up all their time and serve no purpose other than to impress laypeople watching TV. Take Honda's violin-playing robot , for instance, which undoubtedly proves a smaller point than IBM's Jeopardy algorithm , despite taking much longer to build. ( Edit 3: As if to prove my point, Japan is sending a Twittering, talking, emoting humanoid robot into space to talk to the Space Station crew . The EU or US would be just as happy with a text-to-speech RSS/Twitter feed reader with maybe :) and :( screen icons to indicate emotion and >:| to indicate a robot apocalypse.) They also don't seem to embrace the concept of code reuse; unless it's a packaged platform, most Japanese programmers I've seen tend to reinvent the wheel quite often. Given proprietary software and a reusable alternative, they'll usually take the proprietary option. They also aren't very keen on standards or open protocols. Take Sony in the 1990s for instance, before Howard Stringer took over. Japanese companies are also stingy about intellectual property, which you'll notice if you've ever tried to find Japanese music on YouTube -- rather than opting for ad income, most Japanese publishers just disable the offending video. Heck, when I was 14, I reinvented bucket sort thinking I'd stumbled upon something new, and my parents got completely upset with me when I insisted that patenting sorting algorithms isn't a good idea. This attitude is completely ingrained in Japanese culture. Many, if not most, will go so far as to censor the names of other products or other people, even when there's nothing negative being said, and even though there's no law that necessitates this. The language barrier is also an issue. 
Most Japanese people speak a tiny bit of broken Engrish, but most of the programming community's content is in rather difficult English -- so naturally they have less information to keep up to date with or to make good entrepreneurial decisions with. The English education in Japan is notoriously ineffective, with constant calls for reform generally leading to even worse curricula. Edit 1: Forgot to mention, the Japanese value seniority, so most people of authority are in their 50s, 60s, even 70s -- and most of them hardly know how to use a mouse. One positive thing I have to say though is that in a sense most Japanese products are very user-centric, so Japanese UIs, aside from being horribly non-standard, are quite intuitive and usable. Nintendo's work is a good example of this, though even most freeware tends to be quite good in this regard. Edit 2: In general, the Japanese have no faith in software. They'd rather have more hardware than more software. Given a choice between buying an iPhone or buying a generic phone and an iPod, they'll usually choose the latter, even if it takes more pocket space and costs a lot more. In a typical Japanese home you might find a fax machine, a printer, a scanner, a few game consoles, a Blu-Ray player atop their PS3, one or two HDTVs, one phone per person, and a lonely laptop collecting dust. As a result, most of my Japanese friends in their 20s and 30s are as computer illiterate as the North Americans or Koreans of my parents' generation. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/48635",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2026/"
]
} |
48,698 | A lot of blogs and advice on the web seem to suggest that in order to become a great developer, doing just your day job is not enough. For example, you should contribute to open source projects in your spare time, write smartphone apps, etc. In fact a lot of this advice seems to suggest that if you don't love programming enough to do it all day long then you're probably in the wrong career. That doesn't ring true with me. I enjoy my work, but when I come home from the office I'm not in the mood to jump straight back onto the computer and start coding away until bedtime. I only have a certain number of hours free time each day, and I'd rather spend them on other hobbies, seeing friends or going outside than in front of the computer. I do get a kick out of programming, and do hack around outside of work occasionally. I'm committed to my personal development and spend time reading tech blogs and books as a way to keep learning and becoming better. But that doesn't extend so far as to my wanting to use all my spare time for coding. Does this mean I'm not a 'true' software developer at heart? Is it possible to become a good software developer without doing extra outside your job? I'd be very interested to hear what you think. Update: thanks everyone for your comments & answers. A lot of good thoughts and advice! | IMO this attitude comes from people that have horrible, soul sucking jobs, combined with piss-poor time management skills. If you're basically typing web forms all day, go out and get a more challenging job, or start your own. Here's the thing. A concert musician (cellist/pianist/whatever), will practice at most 6 hours per day. Most only practice a few hours per day. at the highest levels People say program more because you learn more, but that's a smokescreen. 8 hours per day is plenty . Progress is NOT linear. It's logarithmic: The only reason that a musician might practice longer than 3 hours, is that they need to squeeze out the extra 1% that those hours gives them. If you think that applies to you, re-solving a problem CS solved 2 decades ago, then you have a prima-donna complex to boot. I've worked in pressure cooker companies before, and trust me, the actual amount of work that those guys get done isn't any better than a company like 37signals that places constraints on the amount of work: http://37signals.com/svn/posts/996-why-i-love-working-with-family-people What ends up happening is that sure, you may be in front of a computer for 10-12 hours, and in the office for 2 more, but that doesn't include the 90 minute lunch you took, the 2 hours you spent browsing discussion forums, and the hour break you had to play one of the many games laid out in the office (foosball, pool, yada...). Look back at that graph. Now back to me. Your mind actually has the opportunity to expand much more if you engage it in some other activity: Learn to play an instrument . Learn a foreign language . Better yet get out and get some exercise, and connect with real live people . On the logarithmic nature of productivity: In the renowned 1993 study of young
violinists, performance researcher Anders Ericsson found that the best ones all practiced the same way: in the morning, in three increments of no more than 90 minutes each, with a break between each one. Ericsson found the same pattern among other musicians, athletes, chess players and
writers. For Real Productivity, Less is Truly More This is actually a well-known principle in the business world, I'm surprised more programmers haven't heard of it. Update: More on the Ericsson study. The whole notion of it taking 10,000 hours / 10 years to become proficient actually comes from the studies done by Ericsson, not from Malcom Gladwell. As we all know, you can have 1 year of experience repeated 10 times... so just having your ass in the seat for 10 years doesn't qualify. What does qualify is what Ericsson calls deliberate practice . He has found this principle to hold true in athletics, music, writing, chess, and mathematics. He further defines deliberate practice as being so effortful, that even at the highest levels you can only put forth about 4 hours per day . Otherwise you will suffer from overtraining or burnout. Again, he recognizes that there are diminishing returns for deliberate practice, up to about 4 hours. On the subject of not having a good/challenging job: Hogwash. Either get a better job, or here's an idea: Make your current job into something it's not , at least right now. One of the best programmers I knew walked into a job as a maintenance programmer on a legacy system that consisted of dozens of programs and hundreds of thousands of lines of code. Most of which had been hacked on over the years so much that you would have to say there wasn't any coherent design to it anymore. This was pretty much a go-nowhere, dead-end job. Management wanted you to keep your head down, and just fix the damn bugs. The good developers were working on the greenfield project. People either came here to sit out their remaining days until they retired, or gain a few years of experience before going on to new application development. Whereas most programmers would complain about the lack of career development, or the opportunity to learn new things, or not having exciting projects to work on, or more generally just bitching about no one enabling them , this guy simply sat down, and went about doing the work that needed to be done. And over the course of 2 years, he had transformed that system from a buggy hell of spaghetti code to something that was a thing of beauty and functioned like a swiss watch. So complete was the transformation, that the VP of the division started paying more & more attention to the existing project, and started questioning the value of the greenfield project. Although he didn't have a title, the operations people went to him as the de-facto leader of the group. When I left, the VP was talking about creating a new role for him as a systems architect... I'm not sure what happened to him after that, but he taught me a couple of very important lessons: Your job is what you make it, and there's interesting problems to be solved everywhere . If you hate writing CRUD screens, solve the problem by automatically generating them. Don't sit around waiting for opportunities to come to you. Chances are they never will. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/48698",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17505/"
]
} |
48,806 | With the whole CLI, CTS, CLS, etc., not only did they release a powerful platform/infrastructure, but they released all the specs that describe it etc. It supports potentially infinite myriad languages, platforms, etc. This seems like an insane amount of work, even for a behemoth like Microsoft - especially since it turns out they did a damn good job. How long were they working on this before releasing it (.NET 1.0)? | You might find this Wikipedia article to be interesting and informative. Microsoft started development on the .NET Framework in the late 1990s originally under the name of Next Generation Windows Services (NGWS). By late 2000 the first beta versions of .NET 1.0 were released. An old press release for the .NET family alludes to its previous title of Next Generation Windows Services (NGWS). If sarcasm is more your cup of tea, this announcement from The Register is interesting as well. And according to this Wikipedia article on Microsoft codenames, it appears that .NET/NGWS went by the names Lightning and Project 42 . Project Lightning was the original codename for the Common Language Runtime in 1997.[73] The team was based in building 42, hence Project 42. "Next Generation Windows Services" appeared in the earliest press releases about the upcoming platform. Wikipedia links to an interview of Jay Roxe and an article from The Age as evidence for this information. Jay tells us that development had begun in earnest at least by 1997, as that's when he joined the team: OK, well let me give you the history. I joined what is now the .NET Framework team, or the Common Language Runtime team, back in November of 1997. [This was] back when it was called Project Lightning, then it became COM+, then it became Project 42, then we had this nice little re-org that made it Project 21 ? we lost half the team. And so, I wrote things like String and StringBulder, and I wrote the initial implementation, although I did not own it forever, all of the base types like Int [16, 32, and 64], and double, and all of those. I did some of the work on Object and was Dev Lead for the System.IO classes, the globalization, and a bunch of the collections work as well. A blog post by Jason Zander on an unrelated topic gives us the interesting tidbit of information that the "Lightning" codename was chosen by the founder of the CLR team, Mike Toutonghi: The original name of the CLR team (chosen by team founder and former Microsoft Distinguished Engineer Mike Toutonghi) was "Lighting". Larry Sullivan's dev team created an ntsd extension dll to help facilitate the bootstrapping of v1.0. We called it strike.dll (get it? "Lightning Strike"? yeah, I know, ba'dump bum). And James Kovacs's C#/.NET History Lesson fills in a few more of the gaps. This Stack Overflow question is also worth a read, for those interested in history. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/48806",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/14969/"
]
} |
48,811 | For every programming project, managers with past programming experience try to shine when they recommend some design patterns for your project. I like design patterns when they make sense or if you need a scalable solution. I've used Proxies, Observers and Command patterns in a positive way for example, and do so every day. But I'm really hesitant to use say a Factory pattern if there's only one way to create an object, as a factory might make it all easier in the future, but complicates the code and is pure overhead. So, my question is in respect to my future career and my answer to manager types throwing random pattern-names around: Which design patterns did you use, that threw you back overall? Which are the worst design patterns , the ones that you should consider except in the one single situation where they make sense (read: which design patterns are very narrowly defined)? (It's like I was looking for the negative reviews of an overall good product of Amazon to see what bugged people most in using design patterns.) And I'm not talking about Anti-Patterns here, but about Patterns that are usually thought of as "good" patterns. Edit: As some answered, the problem is most often that patterns are not "bad" but "used wrong". If you know patterns, that are often misused or even difficult to use, they would also fit as an answer. | I don't believe in bad patterns, I do believe that patterns can be badly applied ! IMHO the singleton is the most abused and most wrongly applied pattern. People seem to get a singleton disease and start seeing possibilities for singletons everywhere without considering alternatives. IMHO visitor pattern has the most narrow use and almost never will the added complexity be justified. A nice pdf can be gotten here . Really only when you have a data structure that you know is going to be traversed while doing different operations on the data structure without knowing all the ways in advance, give the visitor pattern a fighting chance. It is pretty though :) For this answer I only considered the GOF patterns. I don't know all possible patterns well enough to take them into consideration also. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/48811",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/13768/"
]
} |
48,932 | I described to a colleague why a constructor calling a method can be an antipattern. example (in my rusty C++) class C {
public:
C(int foo);
void setFoo(int foo);
private:
int foo;
};
C::C(int foo) {
setFoo(foo);
}
void C::setFoo(int foo) {
this->foo = foo;
} I would like to motivate this fact better through your additional contributions. If you have examples, book references, blog pages, or names of principles, they would be very welcome. Edit: I'm talking in general, but we are coding in Python. | You have not specified a language. In C++ a constructor must beware when calling a virtual function: the function actually invoked is the current class's own implementation, not an override in a derived class. If it is a pure virtual method without an implementation, this will be an access violation. A constructor may call non-virtual functions. If your language is Java, where methods are generally virtual by default, it makes sense that you have to be extra careful. C# seems to handle the situation the way you would expect: you can call virtual methods in constructors and it calls the most derived version. So in C# this is not an anti-pattern. A common reason for calling methods from constructors is that you have multiple constructors that want to call a common "init" method. Note that destructors will have the same issue with virtual methods; thus you cannot have a virtual "cleanup" method that sits outside of your destructor and expect it to get called by the base-class destructor. Java and C# don't have destructors, they have finalizers. I don't know the behaviour with Java. C# appears to handle clean-up correctly in this regard. (Note that although Java and C# have garbage collection, that only manages memory allocation. There is other clean-up that your destructor needs to do that is not releasing memory). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/48932",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1338/"
]
} |
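A short C++ sketch (my own example, not from the answer) of the pitfall described: a virtual call made from a base-class constructor dispatches to the base implementation, because the derived part of the object has not been constructed yet. If init were pure virtual, the call would fail outright, which is the case the answer warns about.
#include <iostream>
class Base {
public:
    Base() {
        init();  // calls Base::init, NOT Derived::init
    }
    virtual ~Base() = default;
    virtual void init() { std::cout << "Base::init\n"; }
};
class Derived : public Base {
public:
    void init() override { std::cout << "Derived::init\n"; }
};
int main() {
    Derived d;   // prints "Base::init"
    d.init();    // after construction, prints "Derived::init"
    return 0;
}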
49,067 | While working on a project for my company, I needed to build functionality that allows users to import/export data to/from our competitor's site. While doing this, I discovered a very serious security exploit that could, in short, perform any script on the competitor's website. My natural feeling is to report the issue to them in the spirit of good-will. Exploiting the issue to gain advantage crossed my mind, but I don't want to go down that path. So my question is, would you report a serious vulnerability to your direct competition, in order to help them? Or would you keep your mouth shut? Is there a better way of going about this, perhaps to gain at least some advantage from the fact that I'm helping them by reporting the issue? Update (Clarification) : Thanks for all your feedback so far, I appreciate it. Would your answers change if I were to add that the competition in question is a behemoth in the market (hundreds of employees in several continents), and my company only started a few weeks ago (three employees)? It goes without saying, they most definitely will not remember us, and if anything, only realize that their site needs work (which is why we entered this market in the first place). This might be one of those moral vs. business toss-ups, but I appreciate all the advice. | Though I'd love to live in a world where it would be perfectly safe to just drop them a note to let them know, I'd suggest involving your legal department first. Realistically, it's entirely possible that however well intentioned your bug report is, someone in the competitor's organization will interpret it as "our competitor just paid one of their employees to hack our site". That perception could create legal or PR issues for both you and your company. Involving your legal department in the notification should help shield everyone from the appearance of impropriety. Of course, that creates the possibility that the legal department concludes that notifying the competitor creates an unacceptable legal risk and tells you just to sit on the information. But that's much better than the alternative that it all blows up in your face. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/49067",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17610/"
]
} |
49,232 | Are you aware of, or have you devised, any practical, simple-to-learn "in-head" algorithms that let humans generate (somewhat "true") random numbers? By "in-head" I mean.. preferably without any external tools or devices. Also, a high output (many random numbers per minute) is desirable. Asked this on SO but it didn't get much interest. Maybe this is better suited for programmers. | Here is an algorithm from George Marsaglia : Choose a 2-digit number, say 23, your "seed". Form a new 2-digit number: the 10's
digit plus 6 times the units digit. The example sequence is 23 --> 20 --> 02 --> 12 --> 13 --> 19 --> 55 --> 35 --> ... and its period is the order of the multiplier, 6, in the group of residues relatively prime to the modulus, 10 (59 in this case). The "random digits" are the units digits of the 2-digit numbers, i.e., 3,0,2,2,3,9,5,... the sequence mod 10. The arithmetic is simple enough to carry out in your head. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/49232",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8854/"
]
} |
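For clarity, here is a short C++ simulation of the mental procedure (my own sketch; the algorithm itself is exactly as quoted): emit the units digit, then replace the number with the tens digit plus 6 times the units digit.
#include <iostream>
int main() {
    int n = 23;  // the seed from the example
    for (int i = 0; i < 10; ++i) {
        std::cout << n % 10 << ' ';      // emit the units digit: 3 0 2 2 3 9 5 ...
        n = (n / 10) + 6 * (n % 10);     // form the next 2-digit number
    }
    std::cout << '\n';
    return 0;
}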
49,268 | There are tons of resources on the web referring to and listing code smells. However, I've never seen information on architectural smells . Is this defined somewhere, and is there a list available? Has any formal research been done into architecture defects, and their impact on project speed, defects, and the like? Edit: I wasn't really looking for a list in the answers, but documentation (on the web or in a book) about architecture smells. | Multitier Architecture: when you have Layers on Layers on Layers on Layers... you see my point here, in your application. I call it Over-Layered Architecture: over-abstraction to the point that you get lost in the code. Futuristic Architecture: this happens when the solution is too futuristic. In reality no one can predict new requirements, so most of the futuristic implementation is just a waste of time and resources. Technology-Enthusiastic Architecture: the architect liked the new technology and put it into production without knowing whether it had been proven before. Overkill Architecture: a simple problem solved with an exponential amount of architecture skill and technology. Cloud Architecture: I call it cloud architecture since the architecture has no connection to reality; it is just some nice Visio diagrams. The total lack of any architecture, the opposite extreme, is just as bad. Here is a link to the Top Ten Software Architecture Mistakes . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/49268",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4422/"
]
} |
49,326 | If I create a login for an app that has middle to low security risk (in other words, its not a banking app or anything), is it acceptable for me to verify a password entered by the user by just saying something like: if(enteredPassword == verifiedPassword)
SendToRestrictedArea();
else
DisplayPasswordUnknownMessage(); It seems too easy to be effective, but I certainly would not mind if that was all that was required. Is a simple check on username/password combo enough? Update: The particular project happens to be a web service, the verification is entirely server side, and it is not open-source. Does the domain change how you would deal with this? | That would suggest that you're keeping passwords in plain text, which is a no-no, even in low-security scenarios. You should rather have: if(hash(enteredPassword) == storedHash) You can use a simple hash such as MD5. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/49326",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3792/"
]
} |
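A hedged C++ sketch of the compare-hashes-not-passwords idea: it assumes OpenSSL is available (link with -lcrypto) and uses SHA-256 instead of the MD5 the answer mentions, purely to illustrate storing and comparing a digest; a production system would also add a salt and a deliberately slow password-hashing scheme.
#include <openssl/sha.h>
#include <iomanip>
#include <iostream>
#include <sstream>
#include <string>
// Hash a string and return the digest as lowercase hex.
std::string sha256_hex(const std::string& input) {
    unsigned char digest[SHA256_DIGEST_LENGTH];
    SHA256(reinterpret_cast<const unsigned char*>(input.data()), input.size(), digest);
    std::ostringstream out;
    for (unsigned char byte : digest) {
        out << std::hex << std::setw(2) << std::setfill('0') << static_cast<int>(byte);
    }
    return out.str();
}
int main() {
    // storedHash would normally come from the user database, never the plain password.
    const std::string storedHash = sha256_hex("correct horse battery staple");
    const std::string enteredPassword = "correct horse battery staple";
    if (sha256_hex(enteredPassword) == storedHash) {
        std::cout << "SendToRestrictedArea()\n";
    } else {
        std::cout << "DisplayPasswordUnknownMessage()\n";
    }
    return 0;
}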
49,341 | I'm wondering why some people release software as freeware, yet they don't release the source code. Why is that? I can think of some reasons, yet most of them don't make very much sense. Why would you want to keep the source closed but let the program be freely available (free of charge, not free as in freedom)? | Hmm, what comes to my mind is Because you want to retain some measure of control over the product Because you want to reserve the possibility / right to charge for the product in the future Because you're ashamed of your source code Because you want to make sure you are credited for the product, and it doesn't get stolen and re-used in other projects (of which there is always a risk when you publish the code) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/49341",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12750/"
]
} |
49,379 | In a previous question of mine on Stack Overflow , FredOverflow mentioned in the comments: Note that patterns do not magically improve the quality of your code. and Any measure of quality you can imagine. Patterns are not a panacea. I once wrote a Tetris game with about 100 classes that incorporated all the patterns I knew at the time. Why use a simple if/else if you can use a pattern? OO is good, and patterns are even better, right? No, it was a terrible, over-engineered piece of crap. I am quite confused by these comments: I know design patterns help to make code reusable and readable, but when should I use use design patterns and perhaps more importantly, when should I avoid getting carried away with them? | KISS first, patterns later, maybe much later.
A pattern is a state of mind, mostly. Don't ever try to force your code into a specific pattern, rather notice which patterns start to crystalise out of your code and help them along a bit. Deciding "ok, I'm going to write a program that does X using pattern Y" is a recipe for disaster. It might work for hello world class programs fit for demonstrating the code constructs for patterns, but not much more. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/49379",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/13594/"
]
} |
49,488 | Today, one can find a framework for just about any language, to suit just about any project. Most modern frameworks are fairly robust (generally speaking), with hour upon hour of testing, peer reviewed code, and great extensibility. However, I think there is a downside to ANY framework in that programmers, as a community, may become so reliant upon their chosen frameworks that they no longer understand the underlying workings, or in the case of newer programmers, never learn the underlying workings to begin with. It is easy to become specialized to a degree that you are no longer a 'PHP programmer' (for example), but a "Drupal programmer", to the exclusion of anything else. Who cares, right? We have the framework! We don't need to know how to "do it by hand"! Right? The result of this loss of basic skills (sometimes to the extent that programmers who don't use frameworks are viewed as "outdated") is that it becomes common practice to use a framework where it is not required or appropriate. The features the framework facilitates wind up confused with what the base language is capable of. Developers start using frameworks to accomplish even the most basic of tasks, so that what once was considered a rudimentary process now involves large libraries with their own quirks, bugs, and dependencies. What was once accomplished in 20 lines is now accomplished by including a 20,000 line framework AND writing 20 lines to use the framework. Conversely, one does not want to reinvent the wheel. If I'm writing code to accomplish some basic, common little task, I might feel like I am wasting my time when I know that framework XYZ offers all the features I am after, and a whole lot more. The "whole lot more" part still has me worried, but it doesn't seem that many even consider it anymore. There has to be a good metric to determine when it is appropriate to use a framework. What do you consider the threshold to be, how do you decide when to use a framework, or, when not. | "There has to be a good metric to
determine when it is appropriate to
use a framework." Not really. If there were good metrics for determining appropriate use of any technology, you wouldn't see language, editor, and methodology holy wars. The groups I've worked with all do the same thing - make a guess at costs and benefits, choose the most productive route, and hope they're right. It's not terribly scientific - one part intuition, three parts experience, one part susceptibility to marketing, one part cunning, and five parts rank opinion. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/49488",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17731/"
]
} |
49,550 | Which hashing algorithm is best for uniqueness and speed? Example (good) uses include hash dictionaries. I know there are things like SHA-256 and such, but these algorithms are designed to be secure , which usually means they are slower than algorithms that are less unique . I want a hash algorithm designed to be fast, yet remain fairly unique to avoid collisions. | I tested some different algorithms, measuring speed and number of collisions. I used three different key sets: A list of 216,553 English words archive (in lowercase) The numbers "1" to "216553" (think ZIP codes, and how a poor hash took down msn.com archive ) 216,553 "random" (i.e. type 4 uuid ) GUIDs For each corpus, the number of collisions and the average time spent hashing was recorded. I tested: DJB2 DJB2a (variant using xor rather than + ) FNV-1 (32-bit) FNV-1a (32-bit) SDBM CRC32 Murmur2 (32-bit) SuperFastHash Results Each result contains the average hash time, and the number of collisions Hash Lowercase Random UUID Numbers
============= ============= =========== ==============
Murmur 145 ns 259 ns 92 ns
6 collis 5 collis 0 collis
FNV-1a 152 ns 504 ns 86 ns
4 collis 4 collis 0 collis
FNV-1 184 ns 730 ns 92 ns
1 collis 5 collis 0 collis▪
DBJ2a 158 ns 443 ns 91 ns
5 collis 6 collis 0 collis▪▪▪
DJB2 156 ns 437 ns 93 ns
7 collis 6 collis 0 collis▪▪▪
SDBM 148 ns 484 ns 90 ns
4 collis 6 collis 0 collis**
SuperFastHash 164 ns 344 ns 118 ns
85 collis 4 collis 18742 collis
CRC32 250 ns 946 ns 130 ns
2 collis 0 collis 0 collis
LoseLose 338 ns - -
215178 collis Notes : The LoseLose algorithm (where hash = hash+character) is truly awful . Everything collides into the same 1,375 buckets SuperFastHash is fast, with things looking pretty scattered; by my goodness the number collisions. I'm hoping the guy who ported it got something wrong; it's pretty bad CRC32 is pretty good . Slower, and a 1k lookup table Do collisions actually happen? Yes. I started writing my test program to see if hash collisions actually happen - and are not just a theoretical construct. They do indeed happen: FNV-1 collisions creamwove collides with quists FNV-1a collisions costarring collides with liquid declinate collides with macallums altarage collides with zinke altarages collides with zinkes Murmur2 collisions cataract collides with periti roquette collides with skivie shawl collides with stormbound dowlases collides with tramontane cricketings collides with twanger longans collides with whigs DJB2 collisions hetairas collides with mentioner heliotropes collides with neurospora depravement collides with serafins stylist collides with subgenera joyful collides with synaphea redescribed collides with urites dram collides with vivency DJB2a collisions haggadot collides with loathsomenesses adorablenesses collides with rentability playwright collides with snush playwrighting collides with snushing treponematoses collides with waterbeds CRC32 collisions codding collides with gnu exhibiters collides with schlager SuperFastHash collisions dahabiah collides with drapability encharm collides with enclave grahams collides with gramary ...snip 79 collisions... night collides with vigil nights collides with vigils finks collides with vinic Randomnessification The other subjective measure is how randomly distributed the hashes are. Mapping the resulting HashTables shows how evenly the data is distributed. All the hash functions show good distribution when mapping the table linearly: Or as a Hilbert Map ( XKCD is always relevant ): Except when hashing number strings ( "1" , "2" , ..., "216553" ) (for example, zip codes ), where patterns begin to emerge in most of the hashing algorithms: SDBM : DJB2a : FNV-1 : All except FNV-1a , which still look pretty random to me: In fact, Murmur2 seems to have even better randomness with Numbers than FNV-1a : When I look at the FNV-1a "number" map, I think I see subtle vertical patterns. With Murmur I see no patterns at all. What do you think? The extra * in the table denotes how bad the randomness is. With FNV-1a being the best, and DJB2x being the worst: Murmur2: .
FNV-1a: .
FNV-1: ▪
DJB2: ▪▪
DJB2a: ▪▪
SDBM: ▪▪▪
SuperFastHash: .
CRC: ▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪
Loselose: ▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪
▪
▪▪▪▪▪▪▪▪▪▪▪▪▪
▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪
▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪

I originally wrote this program to decide if I even had to worry about collisions: I do. And then it turned into making sure that the hash functions were sufficiently random.

FNV-1a algorithm
The FNV1 hash comes in variants that return 32, 64, 128, 256, 512 and 1024 bit hashes. The FNV-1a algorithm is:

hash = FNV_offset_basis
for each octetOfData to be hashed
hash = hash xor octetOfData
hash = hash * FNV_prime
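// note: FNV-1 is the same loop with the two steps in the opposite order (multiply first, then xor)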
return hash

Where the constants FNV_offset_basis and FNV_prime depend on the return hash size you want:

Hash Size
===========
32-bit
prime: 2^24 + 2^8 + 0x93 = 16777619
offset: 2166136261
64-bit
prime: 2^40 + 2^8 + 0xb3 = 1099511628211
offset: 14695981039346656037
128-bit
prime: 2^88 + 2^8 + 0x3b = 309485009821345068724781371
offset: 144066263297769815596495629667062367629
256-bit
prime: 2^168 + 2^8 + 0x63 = 374144419156711147060143317175368453031918731002211
offset: 100029257958052580907070968620625704837092796014241193945225284501741471925557
512-bit
prime: 2^344 + 2^8 + 0x57 = 35835915874844867368919076489095108449946327955754392558399825615420669938882575126094039892345713852759
offset: 9659303129496669498009435400716310466090418745672637896108374329434462657994582932197716438449813051892206539805784495328239340083876191928701583869517785
1024-bit
prime: 2^680 + 2^8 + 0x8d = 5016456510113118655434598811035278955030765345404790744303017523831112055108147451509157692220295382716162651878526895249385292291816524375083746691371804094271873160484737966720260389217684476157468082573
offset: 1419779506494762106872207064140321832088062279544193396087847491461758272325229673230371772250864096521202355549365628174669108571814760471015076148029755969804077320157692458563003215304957150157403644460363550505412711285966361610267868082893823963790439336411086884584107735010676915 See the main FNV page for details. All my results are with the 32-bit variant. FNV-1 better than FNV-1a? No. FNV-1a is all around better. There was more collisions with FNV-1a when using the English word corpus: Hash Word Collisions
====== ===============
FNV-1 1
FNV-1a 4

Now compare lowercase and uppercase:

Hash lowercase word Collisions UPPERCASE word collisions
====== ========================= =========================
FNV-1 1 9
FNV-1a 4 11

In this case FNV-1a isn't "400%" worse than FNV-1, only 20% worse. I think the more important takeaway is that there are two classes of algorithms when it comes to collisions:
collisions rare: FNV-1, FNV-1a, DJB2, DJB2a, SDBM
collisions common: SuperFastHash, Loselose

And then there's how evenly distributed the hashes are:
outstanding distribution: Murmur2, FNV-1a, SuperFastHash
excellent distribution: FNV-1
good distribution: SDBM, DJB2, DJB2a
horrible distribution: Loselose

Update: Murmur? Sure, why not.

Update: @whatshisname wondered how a CRC32 would perform; added numbers to the table. CRC32 is pretty good. Few collisions, but slower, and the overhead of a 1k lookup table. (Snip all erroneous stuff about CRC distribution - my bad.)

Up until today I was going to use FNV-1a as my de facto hash-table hashing algorithm. But now I'm switching to Murmur2:
Faster
Better randomnessification of all classes of input

And I really, really hope there's something wrong with the SuperFastHash algorithm I found; it's too bad to be as popular as it is.

Update: From the MurmurHash3 homepage on Google: (1) - SuperFastHash has very poor collision properties, which have been documented elsewhere. So I guess it's not just me.

Update: I realized why Murmur is faster than the others. MurmurHash2 operates on four bytes at a time. Most algorithms are byte by byte:

for each octet in Key
AddTheOctetToTheHash

This means that as keys get longer Murmur gets its chance to shine.

Update: GUIDs are designed to be unique, not random. A timely post by Raymond Chen reiterates the fact that "random" GUIDs are not meant to be used for their randomness. They, or a subset of them, are unsuitable as a hash key:

Even the Version 4 GUID algorithm is not guaranteed to be unpredictable, because the algorithm does not specify the quality of the random number generator. The Wikipedia article for GUID contains primary research which suggests that future and previous GUIDs can be predicted based on knowledge of the random number generator state, since the generator is not cryptographically strong.

Randomness is not the same as collision avoidance, which is why it would be a mistake to try to invent your own "hashing" algorithm by taking some subset of a "random" guid:

int HashKeyFromGuid(Guid type4uuid)
{
//A "4" is put somewhere in the GUID.
//I can't remember exactly where, but it doesn't matter for
//the illustrative purposes of this pseudocode
int guidVersion = ((type4uuid.D3 & 0x0f00) >> 8);
Assert(guidVersion == 4);
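//Taking only the first four bytes assumes the generator spread its randomness
//evenly across the GUID - a guarantee that type 4 GUIDs do not actually make,
//which is exactly the mistake this snippet illustrates.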
return (int)GetFirstFourBytesOfGuid(type4uuid);
} Note : Again, I put "random GUID" in quotes, because it's the "random" variant of GUIDs. A more accurate description would be Type 4 UUID . But nobody knows what type 4, or types 1, 3 and 5 are. So it's just easier to call them "random" GUIDs. All English Words mirrors https://web.archive.org/web/20070221060514/http://www.sitopreferito.it/html/all_english_words.html https://drive.google.com/file/d/0B3BLwu7Vb2U-dEw1VkUxc3U4SG8/view?usp=sharing | {
"source": [
"https://softwareengineering.stackexchange.com/questions/49550",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/483/"
]
} |
49,572 | I am writing a set of JUnit test classes in Java.
There are several constants, for example strings that I will need in different test classes.
I am thinking about an interface that defines them and every test class would implement it. The benefits I see there are:
easy access to constants: MY_CONSTANT instead of ThatClass.MY_CONSTANT
each constant defined only once
Is this approach rather a good or bad practice? I feel like doing so is a little like abusing the concept of interfaces. You can answer generally about interfaces/constants, but also about unit tests if there is something special about it. | Joshua Bloch advises against this in his book titled Effective Java:

That a class uses some constants internally is an implementation detail. Implementing a constant interface causes this implementation detail to leak into the classes' exported API. It is of no consequence to the users of a class that the class implements a constant interface. In fact, it may even confuse them. Worse, it represents a commitment: if in a future release the class is modified so that it no longer needs to use the constants, it still must implement the interface to ensure binary compatibility.

You can get the same effect with a normal class that defines the constants, and then use import static com.example.Constants.*; | {
"source": [
"https://softwareengineering.stackexchange.com/questions/49572",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/14928/"
]
} |
49,741 | I am going to start a little side project very soon, but this time I want to do not just the little UML domain model and use case diagrams I often do before programming; I thought about making a full functional specification. Is there anybody that has experience writing functional specifications who could recommend what I need to add to it? What would be the best way to start preparing it?
Here I will write down the topics that I think are most relevant:
Purpose
Functional Overview
Context Diagram
Critical Project Success Factors
Scope (In & Out)
Assumptions
Actors (Data Sources, System Actors)
Use Case Diagram
Process Flow Diagram
Activity Diagram
Security Requirements
Performance Requirements
Special Requirements
Business Rules
Domain Model (Data model)
Flow Scenarios (Success, alternate…)
Time Schedule (Task Management)
Goals
System Requirements
Expected Expenses
What do you think about those topics? Shall I add something else, or maybe remove something? I read every single answer, and I would like to thank all of you for the useful information. I am doing this side project for a company, and they expect from me a constant flow of communication, and I will need to explain why I do every single thing, because I will have to administer the resources they will give to me.
This will be my first func spec, and as I said, I want it to be useful, not just big and useless.
I think this is something that has to be done, but I want to do it in the way that will be most useful for me and my team. It's bad that we don't have a manager, so that's why I also need to take care of some administrative tasks... Regarding agile programming, I think this is 100% compatible with the agile approach. I am an Agile programmer myself, and I honestly feel more confident when someone already did the thinking for me. I am still junior, but I worked before as a Tapestry web developer on other projects, where the organization was total chaos. I don't agree that I am doing a waterfall approach; I think I am just trying to define some boundaries that will make the project easier when the development starts. | What you suggest is fine from the point of view of the Systems Engineering purists. (There will be a few Agile devotees who think it's all way over the top... and you should just get out and do stuff with the usual reviews, etc.) However, you need to take into account what you are doing, and who you are doing it for. Doing a project for yourself is different to doing it for somebody else - for money. When you work for somebody else (either in a company or on contract) the ONLY means of communication are speaking and writing. (Ultimately there will be a product or result which can be assessed.) The whole point of a specification is to try and reduce the cost of making fixes and changes that come later. You may have seen the graphs showing the cost of making fixes at different stages of a project; it goes something like this:
A fix made to a dumb idea costs $1
A fix made when the dumb idea made it into a specification (that has to be updated) costs $10
A fix made when you have built a prototype costs $100
A fix made about the time you are doing an acceptance before shipping costs $1000
A fix made after you have shipped and pissed off your customers costs $10000.
So what you write in a specification is pretty important. To argue that you should have no specification at all is naive, foolish, and probably dangerous. One of the biggest problems you have in writing a specification is knowing when you have gone too far. This varies depending on the size of the project. For example, a project taking 1-2 people about a year should have somewhere between about 2 and 4 WEEKS in total spent on specification... which includes the investigation of feasibility... the spec to be written by the people who actually do the work, not some high-falutin analyst type who does not know the gory details. A project taking 10 people 2 years needs a lot longer. Now for some comments on your various points:
Purpose: YES, write this. Keep it to 1-2 paragraphs, 1/2 page max.
Functional Overview: MAYBE. Only if it adds value to everything else.
Context Diagram: ESSENTIAL. Shows ALL inputs and outputs. Shows context. You can (and should) spend a reasonable amount of time on this.
Critical Project Success Factors: MAYBE. Surely if the project meets the requirements it's a success. I think this is not really needed.
Scope (In & Out): NO. Your context diagram does this.
Assumptions: YES. Try and keep it brief.
Actors (Data Sources, System Actors): MAYBE. This sounds like technical gory bits of design to me which should not be in a FUNCTIONAL specification.
Use Case Diagram: MAYBE. Put this (these) in an appendix. Explain with words. Try to keep these to a small number. I have seen suggestions that a project should have not more than 8 Use Cases explained in detail.
Don't cover all the "unhappy" paths or you will never finish. It is very rare for any piece of s/w to have a single Use Case / Use Case Diagram.
Process Flow Diagram: MAYBE. Only if it adds significant value, else you are wasting your time.
Activity Diagram: MAYBE. Only if it adds significant value, else you are wasting your time.
Security Requirements: YES - if relevant.
Performance Requirements: YES - mandatory. Must say what the thing must do (and with what level of performance).
Special Requirements: MAYBE - if there is anything special.
Business Rules: MAYBE - if useful.
Domain Model (Data model): MAYBE - if useful.
Flow Scenarios (Success, alternate…): MAYBE - if useful.
Time Schedule (Task Management): NO. This is not what should be in a spec. It's about schedule, planning, etc.
Goals: MAYBE. Goals are not requirements, they are vague fluffy wouldn't-it-be-nice things, which serve to muddy the waters. Try and avoid.
System Requirements: YES. Essential. Says what the thing must do.
Expected Expenses: NO. Part of planning and management, not the requirements of the thing you are making.
Explanation: I've been writing specifications for products, software, and complex systems for over 15 years. All commercial stuff. Mostly commercially successful and made a lot of money for various employers. Including specifications for Agile s/w development, where you still need a spec before you leap into the development. The ACTUAL development can be done in whatever process you want, but in the end you must have 3 things for success:
Know what you want to do. AND WRITE IT DOWN. (That's a spec.)
Do stuff to meet #1 above.
Do some level of acceptance testing of the thing against the spec (which is essentially "did you do what you said you would do"). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/49741",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17839/"
]
} |
49,806 | I am interested in finding out why programmers leave their jobs and if the reasons for leaving have resurfaced in your now job? Is the reason for leaving simply down to remuneration, location, I hate my boss / coworker, lack of recognition or retirement / new career path. Update: I am responsible for a team of programmers and testers and I would like to better understand what could motivate my team to leave, and hopefully try to address such issues. | This blog post will add a lot of value to the discussion: http://widgetsandshit.com/teddziuba/2010/05/why-engineers-hop-jobs.html It comes down to this: top talent has easy time finding jobs. Make sure that you, the employer are competitive in the job market. I am a harsh judge. Please do not judge me nearly as harshly, for I need to eat to sustain my life and thus I always needed a job somewhere. I am sure that my post is subjective, but I tried to answer honestly from my perspective. You see, it is NOT all about what I can do for the company. It is all about WHAT I WANT (and can get). FYI, I am male, not married, without kids. [In no particular order] Reasons I have left: On my first day I was greeted: "Welcome to Hell" by a co-worker. Company is struggling financially Many broken promises Overqualified for my current position and cannot move within the same company. Bored as hell at my daily job. Working with / for idiots. Management betting big on sub-par outsourcing and having their ass handed to them. Management not understanding software. Working in an industry that I am not passionate about. Consistently shipping crappy products. So far I would never buy what I have been producing, even if I was working for a large firm with a lot of capital to spend. Corporate bullshit. Work location in the middle of nowhere. Depressing-looking work building; awful food in cafeteria. Cheap/flaring office furnish and equipment. Uninteresting coworkers / personality clashes. Too much gossip / co-workers having no balls to stand up for what they believe. Seeing no sparks in anybody's eyes. "Golden children" / "ass kissers". Dress code, too many meetings, having to be at work by 9, six sigma training, seeing corporate waste. Not being able to grow professionally / take a class after work. Not having enough equipment to do the work fast, work place being too noisy. Too many meetings. Fixed deadlines. Not enough vacation / sick days. Feeling that I am not getting paid my market value. Feeling like I make significantly less than some other assholes at the same company who do not deserve it (I tend not to envy when pay is justified). Not clicking with manager / project manager / co-worker(s). Being a minority in the democrats vs republicans debate, encouraged at work. Non-proper conversations regarding gender/race/sexual preferences during lunch. Seeing brain drain and the company not realizing that it is happening and why it is happening. They score too low on Joel's test. "Dead sea effect" : http://it.slashdot.org/story/08/04/12/2241216/The-Dead-Sea-Effect-In-the-IT-Workplace The boss's dumb son works here too, but he is unfirable. Other types of unfirable people; jerks. I worked at Wall Street and I had to talk to traders. We helped the stock market crash. Business analysts were above me on a food chain. Employee evaluation that made me feel like shit for a week or two, even if my compensation was ok. Anything every slightly negative that goes on record into HR's files cannot be good for me. I would much rather prefer a tough 1:1 conversation. 
Negotiating a raise is hard and unpleasant. Going to interviews is fun, increases wealth, and makes me feel smart once again. All those fun puzzles and deep technical questions that only tend to come up in interviews, but then my daily work is not nearly as stimulating. Why I have not left yet (in no particular order): It is not that bad (but I will not be here too long). My pay is ok I might not like everyone, but my manager and some of my co-workers are awesome. I might not work for NASA, but I am still challenged, am learning, and there are some smart people around. I like most people's sense of humor. It takes me no more than 45 minutes to get to work using public transport. The company's revenue looks ok, so no need to fear layoffs or other draconian cost-cutting measures in the next 6 months. I have not been here long enough; if I leave now, then I will look like a job-hopper. I want to wait till February, when I will be told how well they think I performed as well as get my raise and bonus :) If I leave now, my resume will not look that good. They promise that after 1 year of being here, they will finally give me a half-decent project to work on. My benefits look decent, and I have some dental work coming up, so I better do it before I switch jobs (work is hard at the beginning, and health-related stuff better not be a distraction). The economy sucks, so I have to stay here for at least 6 months total. If I get laid off after that, then Obama will take care of me. Joel test score here is above 8 out of 12. After 1 year I am eligible for career development benefits, and I want to take a class. My co-worker is my neighbor, and he drives me to work 5 days per week - score! My partner is finishing up her Master's in 6 months. When she gets a job in 9 months or so, I will reevaluate my situation. I have MSDN license, so all 50 of my relatives get a free copy of Windows XP / MS Word and a flight simulator. I need time to prepare for the job that I really want, and working 45-50 hrs per week does not leave that much time. I have some free time during work day, so that I can invest into my education/projects/ideas I have a family situation / I am in the middle of a divorce / other personal stuff, and I want to take it easy and not try to do too many things at once. I will be starting grad school in 1 year, so it does not make sense to switch a job now. I bought a house and I cannot risk it, at least not until I rent our 3 out of 5 rooms. I have had a large unexpected expense; job hopping is unwise at the moment - I need to replenish savings. I still need to meet / linked in more people and secure a couple of recommendations. Someone at my work thinks that I am not that smart. I cannot leave until I make them eat their wrong first impression. They will send me to a Scrum Master training next month, and that always looks good on a resume. Reasons why I am likely to stay for 5-10 years: I absolutely love it here. I help to cure the deadliest form of cancer or do something useful like that. I do not feel like yet another brick in a wall, but rather feel like I matter. I am compensated well, and have no envy of coworkers. I socialize with my coworkers after work because I want to, not because it is good for networking. People are very cool and get my sense of humor and vice versa. Proper equipment. No performance evaluations, or at least a fair and human-oriented process. I can work 11 am - 7 pm without management thinking that I am a lazy slob. It is quiet here. 
WE HAVE A FREAKING PING-PONG TABLE (foosball is lame)!!! A pool table would be nice, preferably non-American (pockets are too large). We have a gym, a swimming pool and a sauna. Good benefits An opportunity to learn, to take a class, to work only 30 hours per week and get paid accordingly. At least 1 month of vacation (yes, it is a lot by US standards, but if you come from Europe, it is nothing). My co-workers are smart but normal (as in they do not take the geekiness to far). Drinks and snack are included, and they are healthful because my co-workers do not eat sweets or drink soda. The place provides recycling, and my co-workers can tell paper from plastic from metal from trash. The place is green-conscious (but not green-washed). I would/do buy my own product. I get paid to learn a foreign language on my job. I can practice that or some other foreign language with my coworkers who speak it. I am respected and feel smart. I get things done fast and well because the environment is right. The company is doing well financially. I can tell random people at a bar what I do honestly, and they will think that I am cool. I get enough income from rent, but I still want to work here. Work is located in a lively place, with lots of smart, positive, energetic people around on the street. I can walk to work in 30 minutes. There is good food and entertainment everywhere along this path. I have many friends in the same city / area. I can meet cool people here. I like the climate, and beaches / mountains are not too far away. Unlike Jeff Atwood, my coworkers genuinely like outdoors and nature. Reasons why I am unlikely to stick around for more than 10 years: I want to be my own boss. I want to travel a lot, on my own schedule. I could use 2.5 months of vacation per year (paid or unpaid), and no sane employer will offer that to me. I am not yet sure if I like long commitments. I have not decided 100% what country I want to live in. Things can change quite a bit in one decade. I like change, I like new atmosphere. Life is very dynamic. My goals 10 years from now can be quite different. I prefer small, successful companies / startups. After 10 years they will likely grow into something different. Companies that survived the first 3 years tend to be risk-averse, but a new crazy start-up around the block might be doing something very cool and new. Moving every 10 years can be good in general, and I do not think that being a manager is for me. Without desire for vertical growth, looks like horizontal moves are the only option. Hopefully this helps. Yes, I am a dreamer. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/49806",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7119/"
]
} |
49,856 | Does an appropriate algorithm really help improve the quality and ultimately the efficiency of a program? Can we still produce a good quality program without the algorithm? Is an appropriate algorithm a MUST in modern programming? | I think this question begs some historical perspective. Back in the "olden days" (of which I am not a personal witness, so this is only my reconstruction of that era - feel free to correct me if you experienced things differently) HW space and performance was nil compared to today's. So everything people wrote then had to be very efficient. Thus they needed to think a lot about and research to invent the best algorithms to achieve the needed space / time performance to get the job done. Another factor in this was that developers were mostly working on what you may call infrastructure : operating systems, protocol stacks, compilers, device drivers, editors etc. All of this is used a lot by a lot of people, so performance really makes a difference. Nowadays we are spoilt having incredible HW with multicore processors and Gigabytes of memory in even a basic laptop (heck, even in a mobile phone). Which naturally means that in many cases, performance - thus algorithm - ceased to be the central issue, and it is more important to provide a solution fast than to provide a fast solution. OTOH we have heaps of frameworks helping us solve problems, and encapsulating a large number of algorithms at the same time. So even when we aren't thinking about algorithms, we may very well be using lots of them in the background. However, there are still areas where performance matters. In these areas you still need to think a lot about your algorithms before writing code. The reason is that the algorithm is the center of the design, determining a lot of data structures and relationships in the surrounding code. And if you find out too late that your algorithm is not scaling well (e.g. it is O(n 3 ) so it looked nice and fast when you tested it on 10 items, but in real life you will have millions), it is very hard, error prone and time consuming to replace it in production code. And micro-optimizations aren't going to help you if the fundamental algorithm is not right for the job. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/49856",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17027/"
]
} |
50,118 | Assume there is a library that is licensed under GPL. I want to use it is closed source project. I do following: Create small wrapper application around that GPL library that listens to socket, parse messages and call GPL library. Then returns results back. Release it's sources (to comply with GPL) Create client for this wrapper in my main application and don't release sources. I know that this adds huge overhead compared to static/dynamic linking, but I am interested in theoretical way. | Legally , I'd say it would be OK (but I am not a lawyer - consult a lawyer for legal advice). Morally , it's pretty reprehensible. If you don't like the GPL, then the "proper" solution is not to use a GPL library. Edit : To clarify, whatever the legal standing of the GPL with respect to whether dynamic linking is allowed or not, the LGPL was specifically created with the intent of allowing dynamic linking in the case of libraries. So it seems clear to me that by choosing the GPL over the LGPL, the author of the library was doing so explicitly to disallow dynamic linking. Using a technical means to work around a legal restriction that expresses the author's explicit intent for his code is what is reprehensible, in my opinion. For the record, I'm not personally a fan of the GPL (I prefer a more permissive license such as MIT or BSD). However, I am a huge fan of respecting the work of other developers, and if they don't want you link their library with closed-source software, then that's prerogative. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/50118",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17904/"
]
} |
50,150 | Programming languages can often be described as verbose or terse . From my understanding, a verbose language is easy to read and understand, while a terse language is concise and neat, but more difficult to read. Should there be other things to consider in the definitions? It seems much of the popular programming languages of today are verbose, and these two terms are only used to describe a language as being more or less, relative to another language. How do we determine if a programming language is more verbose/terse over another? Example: Is C# more verbose than Java? | From my understanding, a verbose language is easy to read and understand, while a terse language is concise and neat, but more difficult to read. This is false. Verbose means lots of symbols. Terse means fewer symbols. This has nothing to do with ease of reading or ease of understanding. Some folks find verbose COBOL easy to read, other find it confusing because so many symbols are required to do so little. Some folks find terse I/J/K and APL easy to read because the program is very short. Others find it hard to read because the symbols are obscure. Terse/Verbose has no relationship with easy to read or easy to understand. Should there be other things to consider in the definitions? No. The definitions of terse and verbose are fine. What's important is that these definitions have nothing to do with "easy to read and understand" It seems much of the popular programming languages of today are verbose. Really? How do we determine if a programming language is more verbose/terse over another? Count tokens to get something done. Add 2 TO A GIVING B. 7 tokens b = a + 2; 6 tokens http://dictionary.reference.com/browse/verbose http://dictionary.reference.com/browse/terse | {
"source": [
"https://softwareengineering.stackexchange.com/questions/50150",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/14/"
]
} |
50,372 | Variable names can be written in many ways, but the most common that I'm familiar with are: thisisavariable, this_is_a_variable, and thisIsAVariable. Which of these are preferred, and why? | Some programming languages or frameworks have their conventions about variable naming. I believe that more important than the way you name variables is to be consistent and stick with certain style during a project. This becomes extremely important within a team, where the code must be easily understood at first sight by anyone who reads it. IMHO the rest is a matter of taste. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/50372",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2210/"
]
} |
50,387 | From my experience of working on several Java based projects, I've seen tons of codes which we call 'dirty'. The unconventional class/method/field naming, wrong way of handling of exceptions, unnecessarily heavy loops and recursion etc. But the code gives the intended results. Though I hate to see dirty code, it's time taking to clean them up and eventually comes the question of "is it worth? it's giving the desired results so what's the point of cleaning?" In team projects, should there be someone specifically to refactor and check for clean code? Or are there situations where the 'dirty' codes fail to give intended results or make the customers unhappy? What I mean by WHY is not the technical reasons, I'm asking about the motive.** We all are familiar with why we should write clean code in technical point of view. But, imagine in college if I write bad code, I get bad marks. So in your industrial environment what motivates you and your team to write clean code? And what de-motivates you and team from writing bad code? It's so easy to write a method in a class to get the information you need without looking around and checking if there's a similar method! Result? 5000 + lines of code with few methods doing almost similar tasks. | It's worth it, and it should be done by the whole team. "When it stinks, change it" works for babies & code (thanks, Kent Beck). Some have said that for a little, short-term project, it's not worth doing. I don't agree. In the first place, we rarely know just how short-term a project is going to be, but in the second place - when it stinks, change it. If it stinks, it's either because you wrote it stinky in the first place - and your team should work on its general coding skills, and refactoring is good training for that - or (more frequently) it has been sloppily patched and repurposed over time. In that case - obviously it's not a very short-term project. If you had the time to stink it up, take the time to change it. Refactoring - keeping your code from stinking - isn't the responsibility of one designated person on your team, and it isn't the responsibility of the person who made it stink. It's your responsibility and the responsibility of everyone on your team. When it stinks, change it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/50387",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/18030/"
]
} |
50,432 | Here's how I see it. There's machine code and it's all that computers needs in order to run something. Computers don't care about programming languages. It doesn't matter to them whether the machine code comes from Perl, Python or PHP. Programming languages don't serve computers. They serve programmers. Some programming languages run slower than others but that's not necessarily because there is something wrong with them. In many cases, it's because they do more things that programmers would otherwise have to do (i.e. memory management) and by doing these things, they are better in what they are supposed to do - serve programmers. So, is slower performance, of programming languages, really, a bad thing? | I don't think it's automatically bad. Python is slower than C++, but when both are fast enough , Python may be the best choice for the problem at hand even if it's slower . It's always a tradeoff. For small one-off tasks, it's much faster to write a Python script than a C++ app that does the same (the typical example for me would be some kind of batch text processing or walking a directory tree and doing something to the files), and I don't really care whether it takes 10 ms or 1000 ms, even though it's 100x slower, because it may take me half the time to write and test. Of course, it would be nice if Python was as fast as C++, so in that sense I agree with your statement that "slow = bad". But then I rather have a powerful language that runs as fast as I want by not doing some things (say, array bounds checking on raw arrays) as long as it allows me to decide when to make that tradeoff (say, by using std::vector). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/50432",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/11018/"
]
} |
50,442 | So, what do you use? int anInt = (int)aFloat; or int anInt = static_cast<int>(aFloat);
// and its brethren And, more importantly, why? | First, understand that those lines are not equivalent . int* anInt = (int*)aFloat; // is equivalent to
int* anInt = reinterpret_cast<int*>(aFloat);

What is happening here is that the programmer is asking the compiler to do whatever it can to make the cast. The difference is important because using static_cast will only ask for a base type that is "safe" to convert to, whereas reinterpret_cast will convert to anything, possibly by just mapping the wanted memory layout over the memory of the given object. So, as the "filter" is not the same, using a specific cast is clearer and safer than using the C cast. If you want to rely on the compiler (or on the runtime implementation, if you use dynamic_cast) to tell you where you did something wrong, avoid C casts and reinterpret_cast.

Now that this is clearer, there is another thing: static_cast, reinterpret_cast, const_cast and dynamic_cast are easier to search for.

And the ultimate point: they are ugly. That's wanted. Potentially buggy code, code smells and obvious "tricks" that might generate bugs are easier to track when they are associated with an ugly look. Bad code should be ugly. That's "by design". And that allows the developer to know where he could have done things better (by totally avoiding casts, if not really needed) and where it's fine, but it's "documented" in the code by "marking" it as ugly.

A secondary reason for introducing the new-style cast was that C-style casts are very hard to spot in a program. For example, you can't conveniently search for casts using an ordinary editor or word processor. This near-invisibility of C-style casts is especially unfortunate because they are so potentially damaging. An ugly operation should have an ugly syntactic form. That observation was part of the reason for choosing the syntax for the new-style casts. A further reason was for the new-style casts to match the template notation, so that programmers can write their own casts, especially run-time checked casts. Maybe, because static_cast is so ugly and so relatively hard to type, you're more likely to think twice before using one? That would be good, because casts really are mostly avoidable in modern C++.

Source: Bjarne Stroustrup (C++ creator) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/50442",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3586/"
]
} |
50,576 | I don't have much experience in working in software industry, being self-taught and having participated in open source before deciding to take a job. Now that I work for money, I also have to deal with some unpleasant stuff, which is normal of course. Recently I was assigned to add logging to a large SharePoint project which is written by some programmer who obviously was learning to code on the job. After 2 years of collaboration, the client switched to our company, but the damage was done, and now somehow I need to maintain this code. Not that the code was too hard to read. Despite problems—each project has one class with several copy-pasted methods, enormous if nestings, Systems Hungarian, undisposed connections—it's still readable. However, I found myself absolutely unproductive despite working on something as simple as adding logging. Basically, I just need to go through the code step by step and add some trace calls. However, the idiocy of the code is so annoying that I get tired within 10 minutes of starting . In the beginning, I used to add using constructs, reduce nesting by reversing if 's, rename the variables to readable names—but the project is large, and eventually I gave up. I know this is not the task I should be doing, but at least reducing the mess gave me some kind of psychological reward so I could keep going. Now the trick stopped working, and I still have 60% of my work to do. I started having headaches after work, and I no longer get the feeling of satisfaction I used to get—which would usually allow me to code for 10 hours straight and still feel fresh. This is not just one big rant, for I really do have an actual question: Is there a way to stay productive and not to fight the windmills? Is there some kind of psychological trick to stay focused on the task, instead of thinking “How stupid is that ?” each time I see another clever trick by the previous programmer? The problem with adding logging is that I actually have to understand what the code does, and doing so hurts my brain in an unpleasant fashion. | I'm sorry to tell you, but not all jobs are full of sunshine and glamor. The majority of development tasks involve drudge work like this. Sad, but true. You are tasked with an important job, even if it's boring to the point of watching paint dry. It's important for two reasons: 1. It add much needed logging to a large system so that when something goes wrong you'll have a tool to help you find it. and 2. It gets you familiar with the code base so that if and when something goes wrong you can jump in and fix it. You are basically creating your own safety net here. Glamors, no, but important yes! So, that being said how should you motivate yourself? When I have a mind numbing task at work, I set goals for myself. Finish doing task x by the end of the week. If I make my goal, I reward myself. New restaurant I want to try? Go Friday night if I finish. New movie just came out? See it on the weekend if I finish. I find talking with my supervisor and letting him/her know where I'm at and how I'm progressing keeps me accountable. If I tell them I'll be done by Friday, I feel more inclined to get it done by Friday b/c I told them I would have it done. Keep faith that once you complete this task and you've done it well, on time and on budget that people will notice and when that shinny new project comes along, your name might just be suggested as the one who gets it. :) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/50576",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3939/"
]
} |
50,616 | How would you manage if you are allocated a team of 5 with, say, 4 incompetent programmers and you are asked to lead? Obviously you can't code for the 4 guys (you can, but that is not a good idea. At least I burned out doing that). Have you come across these kind of situations? Edit: I think I sounded rude by choosing a wrong word (incompetent) to address my problem. To rephrase the question, how do you deal with people who do not complete assigned tasks (for whatever reasons [ranging from incompetence to 'I don't care' stuff])? | Mentor them. I've come across this same situation when consulting and having been put on teams with less than optimal team members (nobody needs a consultant if everything is going great :-/). My manager one time became so frustrated with the other developers, he resorted to getting frustrated and just telling them all the time how they were doing things wrong. Eventually, they shut down completely and gave up trying. Another project was different. I had a manager who was patient and worked with them. Yes, they were sub-par, but they were partly so because they did bad on one project and got chewed out over it, then they lost confidence in themselves and did worse, bringing more chewing out. These were smart guys, they just didn't know how to focus it to be productive. It sounds like you have a relatively high percentage of incompetent team members, which worries me. There are sometimes a couple, but 80% is pretty high. This sounds like they haven't had a good leader to help mentor them and give them opportunities to learn without feeling the hammer all the time (of course, you give no background to them, so I'm assuming that's the problem). It doesn't really matter what the specific problem is, this sounds like a team-wide problem, and you as their new leader have the authority, resources, and power to give them a better learning and work environment than they have been used to. I would suggest listening to them and find out as a team what the problem is and if there is something you can pull out that could explain the situation. Many times, just listening to your team will work magic as that is sometimes rare to find leaders that actually listen. Then, mentor them and create an environment of learning. It may not be that they are incompetent so much as they've not had a good leader that you're now having to clean up after. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/50616",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17887/"
]
} |
50,755 | These days, I'm investing heavily in data structures and algorithms and trying to solve some programming puzzles. I'm trying to code and solve with Java and Clojure. Am I wasting my time? should I invest more in technologies and frameworks that I already know in order to gain deeper knowledge (the ins and the outs) and be able to code with them more quickly? By studying data structures and algorithms, am I going to become a better programmer or those subjects are only important during college years? | It is entirely possible to spend most/all of your career doing significant, useful work, with only minimal knowledge of algorithms and data-structures. The minimum level of knowledge for algorithms and datastructures, in order to be successful, will require you to: be aware of most of them (including reading up on new ones occasionally as they come out) know where to find good, tested, working implementations be able to compare algorithms and their usefulness be able to correctly copy one from an open-source example to your specific environment, with a small bit of tweaking There is no * maximum * . If you want to, you can take your study to the PhD level and beyond. It's usefulness is directly related to the kind of jobs that you're interested in having, and to which kind of work you find most interesting and rewarding. That said, as a rough (but not absolute) guideline, the more low-level, more resource intensive and less automated the language, framework, and application that you're working on will be, the higher the required skill level when it comes to algorithms, and data-structures. For example, implementing Ukkonen's algorithm in assembly will likely, but not necessarily, mean you'll want a masters' level understanding of the algorithm and data-structures involved. In your specific situation, going from a Java development background to working on the iOs, all other things being equal, expect a slightly higher demand on your general understanding of algorithms and data-structures. You'll want to be able to run efficiently on a device with fewer available resources. Also, expect to add a couple of new categories to your arsenal - most notably, you'll want to know more about memory management. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/50755",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/10097/"
]
} |
50,791 | I'm a hobby programmer. The absence of real world deadlines, customer feedback, or performance reviews leaves me free to daydream about having and implementing The Next Great Idea That Changes the World. Of course I'm aware I probably have a better chance of winning the lottery, but it's fun to imagine knocking out some fully-homebrewed app that destroys the status quo. I know many professional programmers have side projects, some for profit others not. I was wondering on the way to work this morning (non-IT boring work) if having to code for your food tended to dampen the dreaming? Does greater experience leave you jaded and more focused on the projects at hand? Not trying to be a downer, just interested in the mindset of the real software professional :-) | Yes. Those who don't, typically change careers. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/50791",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9260/"
]
} |
50,831 | The other half of this question: How do Programmers in the East see programmers in the West? The eastern part of the world (India/China/Philippines ) mainly provide outsourcing services to the western world (USA and Europe). Do you have the experience of working with offshore teams? If yes, how was it? Do you hold any generalized ideas or opinions about the programmers from the East (e.g. Are they cooperative, do they deliver on time or do they do quality work?). What are these based on? | Disclaimer: I live in Central Eastern Europe, make your own decision on whether I count as Eastern or Western :-) As such, I worked on projects outsourced to our country from Western Europe, and I experienced doubts from the more Western coworkers and management concerning our abilities, similar to what Indians must experience in such situations. OTOH I have been working with several Indian and some Russian developers on two major projects. The first also involved a component developed entirely by an Indian subcontractor, which was easily the most horrific piece code I have ever had access to (I can't say "the most horrific code I ever read", because upon seeing that the largest single source file measured more than 600 Kbytes in size (or AFAIR about 30K lines), I quickly closed it and could only pray that I may never ever need to touch it. My pray was listened to). The latter (which I am currently working on) has been subcontracted to 3 different companies, some of them applied several Indian programmers. We have been cleaning up the result of that in the past 1,5 years, and there is yet enough work left for the foreseeable future. In my personal life, I lived in India for over 3 months at a previous era of my life, so I probably know more about the country and its inhabitants than an average Westerner. Personally I like Indians a lot. My personal experience has been that the same noticeable cultural differences which exist between Western and Indian people in general, are observable between programmers as well. Indians are usually very diligent in executing whatever concrete task is thrown on them, but not necessarily see or even feel the need to understand the bigger picture. Which can easily result in low quality software. Another potential issue is the culturally ingrained resistance of Indians to say no to any request, as I believe it is considered rude by them. If you go to an Indian grocery shop and ask for blankets / jewelry / shark fins / whatever, the owner will say "yes sir, in a moment", then sends out his boy to some other shop in the neighbourhood to fetch the product and proudly presents it to you. Which is good business practice indeed. However, if the same is applied to subcontracting a SW development project with a fixed impossible schedule, the results may be disastrous. This is just speculation from my part though, I have no concrete evidence on whether or not this is really a factor in outsourcing SW development to India. One prime example of futile diligence in our current project was the implementation of a performance monitoring scheme. The idea was to pass around objects which gather performance statistics. However, the solution turned out to be slowing down the app so much that it was never really used. Nevertheless, its remnants in the code were left there for us to clean up. In practice, this meant passing an extra object parameter to all (about 6000) methods in the code. 
The guy who did it even added a comment to the Javadoc of each method, noting that the extra parameter was added for performance measurements! Now, I can only marvel at the diligence of that guy, doing his job through all 6000 methods and faithfully inserting those Javadoc comments everywhere. OTOH, a) as noted earlier, the scheme was never used in practice, and I am sure its performance hogging effects could have been detected by an early prototype, making the whole job unnecessary, b) all the Javadoc comments contained the same spelling error, c) such comments do not belong to Javadoc anyway. I don't mean that this all was the poor Indian developers' fault (except the misuse of the Javadoc). IMO it is much more the fault of managers mindlessly contracting out projects without monitoring the results, conducting strict acceptance tests and ensuring the adequate quality of code and documentation. Not to mention hour based payment schemes which surely don't make any subcontractor interested in saving development time. However, I think I would be hard pressed to find developers in the West to undertake similar tasks with the same level of consistency and without complaints. We also have subcontracted testing tasks in this current project to a group of Indian testers. Personally we are only in contact with one of them, so no idea how many they are in total. However, this guy is a gem of a tester, a valuable asset on any project. Apart from being diligent and thorough, he asks a lot of questions to understand the big picture, often tests even more than what was expected, and reports issues found precisely and descriptively. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/50831",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17887/"
]
} |
50,884 | The other half of this question: How do programmers in the West see programmers in the East? I think it's just as interesting and important to see how programmers in the east view programmers in the west. The eastern part of the world (India/China/Philippines ) is often seen as mainly providing outsourcing services to the western world (USA and Europe). Do you have the experience of working as part of an offshore team? If yes, how was it? Do you hold any generalized ideas or opinions about the programmers from the West (e.g. Are they cooperative, do they deliver on time or do they do quality work?) | Being an Indian, I can speak about India. The issue is about the culture here, the mindset of the people. Since childhood, we are trained to follow the process, trade the safe path, get into high paying professions like engineering, medicine, business administration, etc. Innovation, exploration, entreprenuership is still not so common here. Most people get into IT for money, not because they like coding, or have an inclination towards computing. Due to this lack of interest, most of us end up becoming robots, carrying out instructions without paying attention to the meaning. Most developers never think from the end user perspective, or how would their specific component provide value to the bigger system. You would hardly find innovators here, but you would see excellent workers. The idea of growth is solely in terms of money and designation, knowledge doesnt really matter to most. Leading IT companies also follow the same pattern. They hire freshmen from colleges and train them to become such robots. The sad part is, their pay scale is still better than most other professions and there is no escaping from it. There are hardly any IT companies in here, who look out for real talent. Another important fact is, most of the talented people fly across to the US or other nations where they can apply thier skills and earn much more than their Indian counterparts. So, if you are a developer in India, chances are, you'd end up being a process geek, than a tech rookie. Although things are changing now and we do see a few startups cropping up, but they are still in short supply. Update: So, the points above were my perspective to how programmers in the east are. However, to answer the question, programmers in the west, are generally more result oriented, focused, upfront and more professional. I have always worked with customers/clients from the west and have always found them co-operative, patient, flexible and supportive. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/50884",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5095/"
]
} |
50,928 | How do people deal with profanity in source code and VCS comments. Keep or delete? What about soft-expletives like WTF or Arrgggh? Is unprofessional, offensive or something to be shrugged off? | It should be gently discouraged ..you cannot possibly know who will get to see the source code over its' lifetime. While it is all part of the job to get frustrated with a particularly complex or old piece of code and want to sound off about it, putting expletives/rants/ASCII art/bad jokes/offensive remarks into the source code is both unprofessional and a bad idea in my experience. Sometimes, the engineer writing the comments is oblivious to the eventual effects his comments could have - here are just some of the issues I've see: A high number of expletives in code released to the public as open-source/sample code. Jokes in poor taste causing deep offense to some team members resulting in industrial tribunal. Throw-away remarks that were actually racist/sexist/gender-ist causing people to be fired. While we all need to have some outlets for frustration/fun/japing about, the source code is not the place to do this, IMO. You wouldn't put expletives/jokes/offensive comments in a Contract, Help Pages, Blueprints, or other professional document, even though those documents may be read even less often than the source code. If team leaders get all heavy-handed about it, there's going to be upset, so I say 'gently discouraged' by means of a quiet word with problem engineers and provided suitable venting mechanisms to let off steam, whether that be Facebook, instant messaging, air hockey or a punch-bag. It's no defense to say that comments are compiled out either - what about JavaScript, or any other dynamic client-side code? Here are some of the real-world experiences I've had that have shaped my opinion: While working at Microsoft, I spotted that one software engineer didn't know the correct spelling of "couldn't" - he missed the o,l and d - and had peppered much of his code with long explanations of how he couldn't get X to work because Y person was causing problem Z. His code was great; his spelling was not so good. Suffice to say, any subsequent reviewer of this code (e.g. me) was alarmed to see a large number of random swears in the code. Some of this code went on to be shown to partners (driver writers). Imagine their horror at seeing the swears. The rants ideally should have been to the project manager in verbal form (in which case person Y may be pulled in to the discussion) or perhaps commit messages, but not in the source. At one company, a foreign-language-speaking individual joined a predominantly english-speaking team. He wrote comments in his language, thinking that nobody else could read them. This was fine, until Babelfish/Google Translate released a 'to English' option for his language, at which point the rest of the team translated a few comments and were appalled at the filthy and often derogatory comments the guy had been making about the company, his team and a female coworker. Awkward . At another company, one guy was really taken with ASCII art and put all sorts of art into his source code, unspotted (or perhaps blessed) by code reviewers. After a while, he dwelled on dragons, for some reason, usually with some kind of tag line. Later on, a Welsh person joined the team. The national emblem of Wales is a red dragon, so the new guy was initially cheery about the pictures, but then offended when some of the silly tag lines could be construed as offensive. 
Yes, some team leader mediation required, but this shouldn't have happened. Names/specifics removed to protect the innocent. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/50928",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4603/"
]
} |
51,062 | What is the best practice for constructor parameter validation? Suppose a simple bit of C#: public class MyClass
{
    public MyClass(string text)
    {
        if (String.IsNullOrEmpty(text))
            throw new ArgumentException("Text cannot be empty");
        // continue with normal construction
    }
} Would it be acceptable to throw an exception? The alternative I encountered was pre-validation, before instantiating: public class CallingClass
{
    public MyClass MakeMyClass(string text)
    {
        if (String.IsNullOrEmpty(text))
        {
            MessageBox.Show("Text cannot be empty");
            return null;
        }
        else
        {
            return new MyClass(text);
        }
    }
} | I tend to perform all of my validation in the constructor. This is a must because I almost always create immutable objects. For your specific case I think this is acceptable. if (string.IsNullOrEmpty(text))
throw new ArgumentException("message", nameof(text)); If you are using .NET 4 you can do this. Of course this depends on whether you consider a string that contains only white space to be invalid. if (string.IsNullOrWhiteSpace(text))
throw new ArgumentException("message", nameof(text)); | {
"source": [
"https://softwareengineering.stackexchange.com/questions/51062",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6770/"
]
} |
51,076 | I have some code that is failing because of a NullPointerException. A method is being called on an object that does not exist. However, this led me to think about the best way to fix this. Should I always code defensively for nulls, so that I future-proof the code against null pointer exceptions, or should I fix the cause of the null so that it won't occur downstream? What are your thoughts? | If null is a reasonable input parameter for your method, fix the method. If not, fix the caller. "Reasonable" is a flexible term, so I propose the following test: how should the method handle a null input? If you find more than one possible answer, then null is not a reasonable input. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/51076",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7529/"
]
} |
51,133 | I have noticed a recent trend in requesting programmers who are rockstars . I get it, they're looking for someone who is really good at what they do. But why (pray) make the reference to a rockstar? Do these companies really want these traits as a real rockstar? Party all night and wake up to take care of quick business in the morning ? Substance abuse, Narcissism with celebrity, Compensation well exceeding their management, Excellent at putting on a short-lived show, Entertainment instead of value, 1 hit (project) wonders or single-genre performers, Et cetera What is wrong with Senior or Principal Software Engineer who has an established and proven passion for the business? Rather do we mean quite the opposite, someone who: rolls up the sleeves and gets to work, takes appropriate direction and helps influence teams, programs in lessons' learned and proper practices, provides timely communication to the whole team, can code and understand multiple languages, understands the science and theory behind computation, Is there a trend to diversify the software engineering ranks? How many software rockstars can you hire before your band starts breaking up? Sure, there are lots of folks doing this stuff on their own, maybe even a rare few who do coding for show, but I wager the majority is for business. I don't see ads for rockstar accountants, or rockstar machinists, or rockstart CFOs. What makes the software programmer and their hiring departments lean towards this kind of job title? | The term "rockstar" implies a certain amount of glamour, flash, sexiness, maybe even dangerousness, characteristics which really good programmers generally don't exhibit, but might wish they did. I wouldn't take it too literally. That is to say, it's a buzzword, and like many such, not particularly useful. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/51133",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2274/"
]
} |
51,286 | I assume they weren't able to sit in front of a computer for the whole day like we do today. So how did they write their program? On a piece of paper and type it later when the computer is available? How did they do their testing? | Circa 1974, you'd sit at a convenient desk and write your program out long hand on paper. You'd test it by walking through it in your head using test data. When you were satisfied that your program was correct, you'd go to the punch card room and transcribe your program onto punch cards, one 80 character line per card. You'd also punch cards for any data your program might need. Then you'd also punch a few incredibly cryptic cards in Job Control Language (JCL) that would tell the computer how to compile and run your program, and what input/output devices it would use. Then you'd take your cards to the 'IO Window', where you'd hand them to a clerk. When your turn came, the clerk would load your cards into a hopper, and push a button to tell the computer to start reading them. The output of your program would generally go to a line printer or a drum plotter. When your program was done, the clerk would collect your cards, and your hard copy output, and put them in a pigeon hole where you could pick them up. You'd pick up the output, review the resuilts, and repeat the process. It would take anywhere from 20 minutes to 24 hours for a complete cycle. You can probably imagine that you were not happy when you found that the only output was a printed message from the compiler telling you that your program had a syntax error. You might also have access to a computer through a teletype, so you could actually have an interactive session with a remote computer. However, typing on a teletype was physically painful (very stiff keys, and loud), so you still generally wrote and tested your program on paper first. By 1976 UNIX systems and mini-computers like the PDP 11-70 were becoming more common. You usually worked in a room full of video terminals with 25x80 character displays. These were connected to the computer via serial lines. Crude, but not too dissimilar from working at a command prompt today. Most editors back then were pretty crappy though. Vi was an amazing improvement. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/51286",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8486/"
]
} |
51,291 | Inspired by this question . I heard that some very very early versions of C compilers for personal computers (I guess it's around 1980) resided on two or three floppy disks and so in order to compile a program one had to first insert the disk with "first pass", run the "first pass", then change to the disk with "second pass", run that, then do the same for the "third pass". Each pass ran for dozens of minutes so the developer lost lots of time in case of even a typo. How realistic is that claim? What were actual figures and details? | Absolutely. I had Microsoft C (version 1.0, I think) for a Zenith Z100 computer in the early 80s that was delivered on several 5.25" 360K floppy disks. The Z100 didn't have a hard disk so I had to frequently swap floppies as I switched between the editor, compiler, and linker. Compile and link times of several minutes were not unusual. It got to be so annoying I paid $500 for a 2MB (yes, megabyte) memory expansion board so I could load all of the files into a RAM disk. That cut the time down to about 30 seconds. Funny ... I actually enjoyed programming back in those days because it was fun. Today it's work. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/51291",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/587/"
]
} |
51,307 | I had a search but didn't find what I was looking for, please feel free to link me if this question has already being asked. Earlier this month this post was made: http://net.tutsplus.com/tutorials/php/why-youre-a-bad-php-programmer/ Basically to sum it up, you're a bad programmer if you don't write comments. My personal opinion is that code should be descriptive and mostly not require comment's unless the code cannot be self describing. In the example given // Get the extension off the image filename
$pieces = explode('.', $image_name);
$extension = array_pop($pieces); The author said this code should be given a comment, my personal opinion is the code should be a function call that is descriptive: $extension = GetFileExtension($image_filename); However in the comments someone actually made just that suggestion: http://net.tutsplus.com/tutorials/php/why-youre-a-bad-php-programmer/comment-page-2/#comment-357130 The author responded by saying the commenter was "one of those people", i.e, a bad programmer. What are everyone elses views on Self Describing Code vs Commenting Code? | I prefer writing self documenting code. A guide for this is Clean Code . This of course does not mean one should never use comments - they have their role, but IMHO you should use them carefully. This earlier answer of mine on SO explains my thoughts on the topic in more detail. Of course, as @Niphra noted, it is always worth double checking that what I believe to be clean is really understandable by others. However, this is a question of practice too. Back in the uni I wrote cryptic pieces of code simply due to using strange and funny names for all code entities, according to my whim. Until my teacher threw back one of my assignments, politely noting that he couldn't figure out which module was the main :-) That was a good lesson, so I strove to focus on writing ever more readable code since. Nowadays I hardly get complaints from teammates. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/51307",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/11132/"
]
} |
51,403 | Should programmers who build websites/web applications understand cryptography? I have no idea how most crypographic algorithms work, and I really don't understand the differences between md5/des/aes/etc. Have any of you found any need for an in-depth understanding of cryptography? I haven't needed it, but I wonder if perhaps I'm missing something. I've used salt + md5 hash to encrypt passwords, and I tell webservers to use SSL. Beyond that, I can't say I've used much else, nor can I say with any certainty how secure these methods are. I only use them because other people claim they are safe. Have you ever found a need to use cryptography in web programming aside from these two simple examples? | Web programmers should know that they should never ever try to implement cryptography themselves. In particular, that means that no non-security-expert should be touching any of the cryptographic primitives directly. They shouldn't be thinking at the level of AES, SHA-1, etc. Instead, they should be using high-level functions to encrypt and sign messages, and to "hash" passwords. Why? Because otherwise, people get misled into thinking that: AES-256 is "great encryption", despite the fact that they're using it in ECB mode, or using non-random IV values, etc. (In some modes, non-random but unique IVs is okay. In others, not so much.) They can use the same symmetric key for encrypting multiple messages (or worse, store the symmetric key in the code for direct use). They might even decide to use a passphrase as the key directly, without using any key derivation functions. They can use RSA to encrypt data directly. They can simply "salt and MD5" their passwords to keep them safe. (If you think rainbow tables are the weakest link, think again .) Just to be on the same page, none of the above items are okay . If you don't get that, then you shouldn't be touching crypto with a 10-foot pole! (AES-256 is great encryption, but only if you use it properly. "It's not the size that matters, it's what you do with it." :-)) What kind of high-level functions am I talking about? I personally recommend the use of an OpenPGP (for data at rest) or SSL (for data in motion) library. These protocols rigidly specify the correct use of asymmetric, symmetric, and hash algorithms. For example, with OpenPGP: It does not use RSA to encrypt data directly, but instead generates a random session (symmetric) key per message (this is important), and uses RSA to encrypt that session key. Even then, it will pad the session key using the PKCS #1 v1.5 encoding ( EME-PKCS1-v1_5 ). Padding is extremely important for RSA. ( Some think using v1.5 padding is okay. Others think OAEP is better. ) It uses a key derivation function to transform passphrases into keys. (In OpenPGP parlance, it's called an S2K, but I think "key derivation function" is the more standard term.) It handles picking a good mode, so you will never end up using ECB. It handles key management for you, so you don't have to make ad-hoc decisions about which keys are trustworthy, etc. Summary: if you're not a security expert, and you're thinking at the level of AES, or SHA-1, or (heaven forbid) MD5, you're doing it wrong . Use a library written by security experts (like Bouncy Castle), that implement protocols designed by security experts (like OpenPGP for encryption, or bcrypt or scrypt for password hashing), instead of rolling your own. I'm not a crypto expert by any means, but I know enough not to try to design my own ad-hoc protocols. 
Just to be clear, this entire post is Cryptography 101 material. So if this post doesn't 100% make sense to you, then you definitely should not get anywhere near cryptography. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/51403",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17612/"
]
} |
51,429 | What languages (or classes (as in paradigms) of programming languages, plus a recommended language of that class) should every computer science student be taught in college according to you? Motivate your answers; why that language? What use will one have from it? What concepts does it teach (better than language X does)? Note/clarification : This question is about computer science with heavy focus on software engineering, not pure computer science. It is still computer science education and not software engineering education which is the focus. | I personally find it somewhat sad that functional languages aren't taught as predominantly as they used to be. I think that at the very least comp sci students should be exposed to a language from all of the major paradigms: procedural, object-oriented, functional, and dynamic. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/51429",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12750/"
]
} |
51,437 | Just curious, what kinds of temptations in programming turned out to be really harmful in your projects? Like when you really feel the urge to do something and you believe it's going to benefit the project or else you just trick yourself into believing it is, and after a week you realize you haven't solved any real problems but instead created new ones or, in the best case, pleased your inner beast with no visible impact. Personally, I find it very hard to not refactor bad code. I work with a lot of bad legacy code, and it takes some deep breaths to not touch it when I have no tests to prove my refactoring doesn't break anything. Another demon for me in user interface, I can literally spend hours changing UI layout just because I enjoy doing it. Sometimes I tell myself I'm working on usability, but the truth is just I love moving buttons around. What are your programming demons, and how do you avoid them? | "We will come back to this and fix it later. We just need it working now!" | {
"source": [
"https://softwareengineering.stackexchange.com/questions/51437",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3939/"
]
} |
51,553 | To start an open-source project is not just to throw up the source code on some public repository and then being happy with that. You should have technical (besides user) documentation, information on how to contribute etc. If creating a checklist over important things to do, what would you include on it? | The most important thing is: use the project yourself and get it into a useful state where you enjoy using it. be sure the project works and is useful. Things I'd put in the early priorities are: have a simple "what is it?" web site with links to some discussion forum (whether email or chat) and to the source code repository be sure the code compiles and usually works, don't commit work-in-progress or half-ass patches on the main branch that break things, because then other people's work would be disrupted put a license file in the code repository with a well-known license, and mark the copyright owner (probably you, or your company). don't omit the license, make up a license, or use an obscure license. have instructions for how to contribute, say in a HACKING file or include in your README. This should include where to send patches, how to format patches, code indentation rules, any other important conventions of the project have instructions on how to report a bug be helpful on the mailing list or whatever your forums are After those priorities I'd say: documentation (this saves you work on the mailing list... make a FAQ from your list posts is a simple start) try to do things in a "normal" way (don't invent your own build system or use some weird one, don't use 1-space indentation, don't be annoyingly quirky in general because it adds learning curve) promote your project. marketing marketing marketing. You need some blogs and news sites and stuff like that to cover you, and then when people show up interested, you need to talk to them and be sure they get it working and look at their patches. Maybe mention your project in the forums for related projects. always review and accept patches as quickly as humanly possible. Immediately is perfect. More than a couple days and you are losing lots of people. always reply to email about the project as quickly as humanly possible. create a welcoming/positive/fun atmosphere. don't be a jerk. say please and thank you and hand out praise. chase off any jackasses that turn up and start to poison the community. try to meet people in person when you can and form bonds. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/51553",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12750/"
]
} |
51,621 | Very often I'm working on small projects only for myself. I'm working on one machine, but recently I thought about using some kind of version control nevertheless. This would have some benefits as for example: I don't have to care anymore for local backup Mistakes can easily made undone History can be maintained But on the other hand it has also some drawbacks like for example: Additional resources required Time to setup, get used to it, etc. From your experience, is it a good thing to use revision control when your working alone? | Yes. All it takes is a single mistake and you'll be kicking yourself for it. You're also in the position to choose which version control system (VCS) is used. If there is any possibility that you'll work in a development team in the future, this is a great time to give yourself hands-on experience with a VCS. SVN and Git (or Mercurial) would be great starting points and should only take a couple of hours to grasp the basic commands in each VCS. Now to debunk what the negative points... 1) Additional resources required The only resource required is disk space. Since this is a small percentage (smaller in Git than X ) of your total code, I don't think this will be an issue. It doesn't cost any money either. 2) Time to setup, get used to it, etc. There will be time required to learn it, but it is only a few hours for each of these (as mentioned above). On the longer term, it has the potential to save you an infinite amount of time (and so much more). Once you've mastered the basics of a VCS, it will be far less finicky than performing the local backup you have in mind. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/51621",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/13137/"
]
} |
51,643 | Fairly recently I started maintaining my own open source JavaScript library. I created it to solve a pretty specific need but fairly regularly see questions that can be solved (in whole/part) by using my library. I post my answer and make sure to always include that I maintain the library. I feel for open source projects this may not be such a big deal but where do you draw the line? (ex: commercial products) Is it ethical for a programmer to promote his/her own library? When is it not? | Why would it be unethical? You are not making financial gains, and it also lets readers know that: You have a specific bias towards this library, as you are the creator/maintainer. If they have questions about it, you're probably the best person to ask. I suppose if you are really worried about it, you could always try to mention any similar library that might be used to solve the same problem, and quickly compare yours to theirs. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/51643",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/18088/"
]
} |
51,670 | I've worked on both Mac and Windows for awhile. However, I'm still having a hard time understanding why programmers enthusiastically choose Mac OS X over Windows and Linux? I know that there are programmers who prefer Windows and Linux, but I'm asking the programmers who would just use Mac OS X and nothing else, because they think Mac OS X is the greatest fit for programmers. Some might argue that Mac OS X got the beautiful UI and is nix based, but Linux can do that. Although Windows is not nix based, you can pretty much develop on any platform or language, except Cocoa/Objective-C. Is it the applications that are only available on Mac OS X? Does that really make it worth it? Is it to develop iPhone apps? Is it because you need to upgrade Windows every 2 years (less backwards compatible)? I understand why people, who are working in multimedia/entertainment industry, would use Mac OS X. However, I don't see what strong merits Mac OS X has over Windows. If you develop daily on Mac and prefer Mac over anything else, can you give me a merit that Mac has over Windows/Linux? Maybe something you can do on Mac that cannot be done in Windows/Linux with the same level of ease? I'm not trying to do another Mac vs. Windows here. I tried to find things that can be done on Mac but not on Windows with the same level of ease, but I couldn't. So, I'm asking for some help. | Disclaimer for comments: I use what I've determined to be best for me . Those reasons are what I've listed here. Finding the "greatest fit for programmers" in all situations is impossible, and I don't think anyone bases their choice on thinking they've found it. It's a Unix-based OS with a great user interface installed on great hardware. Hardware that is getting ever-cheaper as Apple grows and uses their buying power to secure lower and lower prices of great components. I use Mac because: Unix-based OS Terminal is a bash shell with all the standard Unix utilities Built-in SSH!! Comes preloaded with software that works great with Unix: SVN, PHP, Apache2, etc. I find a Unix filesystem so much more comfortable to use in development. Great UI - In my humble opinion, you can't beat the usability of a Mac. I love the Mac-specific apps I use daily - Mail, Adium, Textmate Great OS - Can't beat the install of (most) Applications - drag and drop. The /Library folder is well organized and easy to find what I need if I have to dig into preferences, copy an application's support files, install a new Preference Pane. Speaking of System Preferences - another great feature of Mac. Great support for other apps - IntelliJ IDEA is as good on a Mac as anywhere. Skype. Chrome. Firefox. Adobe suite. Great hardware - I work on a $1200 13" Macbook Pro (external 24" monitor at desk). Cheaper than my coworkers on high-end Windows desktops and I'm not running into processing issues or memory issues (none of us really are these days). And you just can't beat the quality of an Apple laptop (developing on laptops is a different question but I can't live without one - wire-free for meetings, private Skype calls, or taking my work home exactly as I left it. And 10 hour battery life!). Lastly, I don't develop on any Microsoft-stack technologies, so I don't feel limited there. I don't think there are any things I can't do on Windows. The above is a list of things that, as a sum, just make Mac the preferred option. 
If you are looking for singular things, there are a few tasks that I feel I can simply do more easily on Mac: (As mentioned above, probably the biggest) Terminal > Putty + Cygwin + Powershell Migrate everything to a new computer Uninstall applications or install multiple versions of applications (browsers, usually) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/51670",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/10451/"
]
} |
51,712 | What is the deal with functional programming? I see talk about it a lot but to be honest I've never found them at all useful. Why do so many universities apparently teach them? | Start with Why Functional Programming Matters . Then move to Why Why Function Programming Matters Matters . A few bullets: Functional programming allows you to reason about problems differently Functional abstraction is very powerful and allows you to DRY up your code in ways not available to other paradigms In our multi-core future, functional languages may be easier to split into simultaneous tasks (though not-strictly-functional languages are working hard on the problem as well). It's easier to prove that programs written in pure functional languages (no side effects) are mathematically correct. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/51712",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
51,769 | I know that Android uses the Java language with a limited Java SDK and that Google claims it isn't Java. But is it right to say that Android is a programming language?
Or is it more accurate to say that Android is a framework in Java? Or are both true? | Android is an OS (and more, see below) which provides its own framework.
But it is definitely not a language. From developer.android.com Android is a software stack for mobile devices that includes an operating system, middleware and key applications. The Android SDK provides the tools and APIs necessary to begin developing applications on the Android platform using the Java programming language. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/51769",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
51,839 | This has happened to most of us... You come to work one day. Everything seems normal - the sun is shining, birds are chirping, but you notice a couple of weird things on your way to work that remind you of the déjà vu cat in the Matrix. You get into the office and there are a lot of phones ringing - but it could just be that they are doing a new sales promotion. You settle in, when you notice a dark cloud hovering over you. It takes you a couple of moments, but you recognize the cloud is your boss. Usually he checks on you every morning with his "Soooo Peeeeter, how about those TCP/IP reports?" routine, but today he forgot everything about common manners and rudely invaded your personal space. No "Good Morning", just some drooling, grunts and curses. He reminds you a bit of a neanderthal who is trying to get away from a cyber -toothed tiger, fear and panic all compressed in a tight ball. You try to decipher the new language that he created since yesterday and you start understanding that something bad happened overnight - the production system went down. Now, your system is usually used by clients during regular working hours from 9-5, but for whatever reason you didn't get any alerts on your beeper (for people under 30 - a beeper was like a mobile phone that could only ring and tell you who beeped you). You'll need to remember to charge it next time. So it is now 8:45am, and the system MUST be up at 9am. Every 10 seconds, your boss lets out yet another curse which communicates to you that another customer is having problems getting into the system. Also, several account managers are now hovering over your boss trying to make him understand how clients are REALLY REALLY suffering. Everyone is depending on you to get the system up ASAP and at the same time is hindering your progress by constantly distracting you. How do you keep cool in a situation like this? | In the situation, ask your boss to help you by keeping all the other folks away from you (which gives him something to do somewhere else). When you get it up and running again, ask your boss for a meeting to evaluate and establish procedures for avoiding this happening again. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/51839",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/15950/"
]
} |
51,933 | Let's assume the situation where a team of four developers is building an application. During the testing phase, bugs are reported by users. Who should fix them? The person who committed the erroneous code, or anyone who is free? What is preferred approach in agile development (scrum)? | The preferred approach in agile development would be to get them fixed as quickly as possible, by whomever is available. This is simply because the ownership of the code does not fall to any one person, but to the entire developer group. If one individual is consistently causing bugs, that is another issue that needs to be addressed separately. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/51933",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/18431/"
]
} |
51,966 | We use compilers on a daily basis as if their correctness is a given, but compilers are programs too, and can potentially contain bugs. I always wondered about this infallible robustness. Have you ever encountered a bug in the compiler itself? What was it and how did you realize the problem was in the compiler itself? ...and how do they make compilers so reliable? | They get tested thoroughly via usage by thousands or even millions of developers over time. Also, the problem to be solved is well defined (by a very detailed technical specification). And the nature of the task lends itself easily to unit / system tests. I.e. it is basically translating textual input in a very specific format to output in another kind of well defined format (some sort of bytecode or machine code). So it is easy to create and verify test cases. Moreover, usually the bugs are easy to reproduce too: apart from the exact platform and compiler version info, usually all you need is a piece of input code. Not to mention that the compiler users (being developers themselves) tend to give far more precise and detailed bug reports than any average computer user :-) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/51966",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/92/"
]
} |
52,267 | Why should I write a commit message? I don't want to and i think its stupid every single time. A GUI frontend I use which will go unnamed forces you to do it. I hear other doing it every time even if they are using the VCS on the command line. If I commit several times a day and haven't finished a feature what am I writing about? I ONLY ever write a message after many commits and I feel it's time for a mini tag or when I do an actual tag. Am I right or am I missing something? also I am using a distributed system | Because when some poor maintainer is hunting a bug and finds that it was added in rev. xyz, he will want to know what rev. xyz was supposed to do. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/52267",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
52,486 | If this is the wrong place to ask this question, please let me know. I'm a Python programmer by occupation. I would love to learn C. Indeed, I have tried many times, but I always get discouraged. In Python, you write a few lines and the program does wonders. In C, I can't seem to be able to do anything useful. It seems to be very complicated to even connect to the Internet. Do you have any suggestions on what I can do to learn C? Are there any good websites? Any cool projects? Thanks | Don't get discouraged. Python is a high-level programming language. In comparison to C, it can produce wonders in a tiny amount of code. Don't start by trying to mimic Python results in C - you'll be promptly disheartened. Programming in C requires a different style of thinking and understanding because you're interacting with the computer at a more intimate level. Here's a good starting point for learning C: Books C Programming Language (2nd Edition) C Interfaces and Implementations: Techniques for Creating Reusable Software Expert C Programming Online Material Learning GNU C Programming in C | {
"source": [
"https://softwareengineering.stackexchange.com/questions/52486",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/18593/"
]
} |
52,534 | Java is GPL-licensed (reference from Wikipedia). I am not sure whether I can use it in commercial projects. I already have a website written in Java and I plan to use it commercially. Is that illegal? | The GPL license applies to the source of Java itself, not to applications created using Java. You should only be concerned if you are extending / modifying the Java language itself and reselling the result as a commercial product (or under any other non-GPL license). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/52534",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/15012/"
]
} |
52,549 | We're looking to create regular backups of a couchdb database, to ship offsite. What's the least intrusive way to obtain these - ideally without interrupting or significantly slowing down performance on the existing database server? | | {
"source": [
"https://softwareengineering.stackexchange.com/questions/52549",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4091/"
]
} |
52,681 | The trend in application design and development seems to be starting with the "guts": the domain, then data access, then infrastructure, etc. The GUI seems to usually come later in the process. I wonder if it could ever be useful to build the GUI first... My rationale is that by building at least a prototype GUI, you gain a better idea of what needs to happen behind the scenes, and so are in a better position to start work on the domain and supporting code. I can see an issue with this practice in that if the supporting code is not yet written, there won't be much for the GUI layer to actually do. Perhaps building mock objects or throwaway classes (somewhat like is done in unit testing) would provide just enough of a foundation to build the GUI on initially. Might this be a feasible idea for a real project? Maybe we could add GDD (GUI Driven Development) to the acronym stable... | Building fast GUI prototypes is a good idea, and I have heard it being used in many projects. Early feedback is valuable indeed. However, it has its dangers: it is very tempting (for managers / users) to use the prototype code further and build the final application on it, which can have very bad long term consequences (this actually happened in one of the projects I have worked on, and it resulted in a "working" product with nonexistent architecture and lots of maintenance headache for us) for the average user, the GUI is the application . Thus once they see a nice looking GUI, they tend to believe most of the actual work is done, so they may get very upset with the "little remaining work" dragging on so long :-/ Mitigating these risks requires active discussion and possibly education of your users and/or managers. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/52681",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8187/"
]
} |
52,698 | Is it a good idea if I put the books I read on my resume, or at least those related to software development? | I've read a lot of resumes, some good, some bad, and they've never had a list like this. Honestly, it would indicate to me a candidate who has extremely little hands-on experience and is desperate to pad a thin resume. And a candidate who hasn't bothered to research common resume formats. Such a resume would most likely be circular-filed. By me, anyway. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/52698",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1725/"
]
} |
52,729 | I currently work in a professional capacity as a software engineer working with the Android OS. We work at integrating our platform as a native daemon among other facets of the project. I primarily work in Java developing the SDK and Android applications, but get to help with the platform in C/C++. Anywho, I have a great interest to work professionally developing low level for linux. I am not unhappy in my current position and will hang around as long as the company lets me (as a matter of fact I quite enjoy working there!), but I would like to work my way that direction. I've been working through Linux Kernel Development (Robert Love) and The Linux Programming Interface (Michael Kerrisk) (In addition to strengthening my C skills at every chance I get) and casually browsing Monster and similar sites. The problem I see is, there are no entry level positions. How does one break into this field? Anytime I see "Linux Systems Programmer" or "Linux Device Driver Programmer" they all require at the minimum 5-7 years of relevant experience. They want someone who knows the ropes, not a junior level programmer (I've been working for 7 months now...). So, I'm assuming, that some of you on stackoverflow work in a professional capacity doing just what I would like to do. How did you get there? What platforms did you use to work your way there? Am I going to have a more difficult time because I have my bachelors in CSC as opposed to a computer engineer (where they would experience a bit more embedded, asm, etc)? EDIT FOR CLARIFICATION! I am aware of the opensource nature of the linux kernel/drivers etc. I plan on contributing regardless of where my day job is. I'm more curious of what kinds of entry level positions will allow me to do relevant work and get paid doing it! Thanks for all the replies so far! | I write Linux device drivers for my company, and I got into this position by knowing the most about Linux development in my department and they promoted/hired me into a new role. It was very much a junior level style entry, so they do exist and don't lose hope! My immediate advice for you is to see if you can narrow down your focus. Kernel programming is very different from system programming is very different from device driver programming. Kernel developers focus on interfaces, data structures, algorithms, and optimization for the core of the operating system. System programmers write daemons, utilities, and other tools for automating common or difficult tasks. Device drivers use the interfaces and data structures written by the kernel developers to implement device control and IO. A very good kernel programmer may not know a lot about interrupt latency and hardware determinism, but she will know a lot about how locks, queues, and Kobjects work. A device driver programmer will know how to use locks, queues, and other kernel interfaces to get their hardware working properly and responsively, but he won't be as likely to fix a page allocation bug or write a new scheduler. So, pick what interests you most, perhaps by surveying development lists or bug trackers, and see what kinds of impact you want to make. Then, contribute and build experience by working on those projects and efforts. When your name/email is attached to code in the kernel mainline, then you'll have experience you can point to in your resume/cover letter for other positions :-) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/52729",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/18667/"
]
} |
52,749 | To me, Visual Basic seems clumsy, ugly, error-prone, and difficult to read. I'll let others explain why . While VB.net has clearly been a huge leap forward for the language in terms of features, I still don't understand why anyone would choose to code in VB over, say, C#. However, I still see (what seems to be) the vast majority of commercial web apps from "MS shops" are built in VB. I could stand corrected on this, but VB still seems more popular than it deserves. Can anyone help answer any (or all) of these questions: Am I missing something with VB? Is it easier to learn, or "friendlier" than C#? Are there features I don't know about? Why is VB/VB.net so frequently used today, especially in web projects? | VB can be used to make GUI's (pronounced gooey) to track IP addresses. This is often used in crime solving . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/52749",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
52,816 | I find working in isolation, on a piece of code that won't be seen by anyone else for weeks, draining. I'm looking for ideas to try to keep myself productive and motivated. What do you do to remain motivated and productive, when given a long term programming task, and working on your own (for example, from home, without any team-mates or coworkers)? | Maintain a balance. Given something novel (e.g. playing a game, having a beer, etc.), we're able to focus and do that one thing for an extended amount of time. The only way to power through a mundane task (without overdosing on coffee ) is to maintain a balance . I say 'mundane' because if this were a task you were really passionate about, you wouldn't have meandered to site and asked this question. Suggestions: Balance. Work on the long-term project for an hour two and then reward yourself with something you enjoy. Embrace the break from the task. Repeat. Long-term mindset : thinking about the awesome work you will be doing after (this less interesting job) is invigorating. Break your project down into small tasks . Tasks that will only take a couple of hours to complete. As you complete each of these small tasks, it'll give you the feeling of progression. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/52816",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4091/"
]
} |
52,833 | A friend told me that writing comments inside methods is not good. He said that we should have comments only for the method definitions(javadocs) but not inside the method body. It seems he read in a book that having comments inside the code means there is a problem in the code. I don't quite understand his reasoning. I think writing comments inside the method body is good and it helps other developers to understand it better and faster. Please provide your comments. | Your friend is wrong, and very few programmers would agree with him. You should write comments whenever and wherever you feel they best aid understanding. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/52833",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
52,913 | My development team just grew by 100% (from 1 developer to 2). My new cohort want to invest in bug tracking software. Is there benefits to such software for such a small team? | I think all the "yes" answers go a long way to endorsing the idea. But I'm going to throw out the idea that the decision is based on a few questions: How do you want to communicate as a team? With 2 developers, you are now a team. How do you want to communicate? Plenty of agile teams live with in person discusions and white board sketches. But they may also go so far as to write things down, especially if it's a bug that won't be high on the priority list for a while. How do you want to communicate with your customers? I don't know the answer to this, but if you have any reason to publish bugs (or fixed bugs in a version release document), then you're going to end up writing them down eventually. Might as well pick a low-stress bug management system and be done with it. Is there value to preserving history? The answer may be "not right now" but if you think that in the future, you'd like to see the trend of bugs so you can see places that users are having the most problems, or places where you could spend some time checking and reviewing before a major release - then get a bug tracking system. The thing about history is that the day you want the record is not the day you should start keeping records. IMO, the answers to these questions are more about where you see the product going and how you want to grow your team and less about whether "2 people = reason for bug tracking system". The bigger question is probably "is a bug tracking system worth the time to configure & manage and the cost of purchasing?" | {
"source": [
"https://softwareengineering.stackexchange.com/questions/52913",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
52,925 | I am wondering how feasible it would be to start developing a social networking website entirely based on Silverlight. This has been discussed fairly often over the years, usually in favor of HTML. Has something changed with Silverlight's improvements over the years? What about: * Performance
-- active users
-- technology used, MVVM + MEF (possibility of lags, server memory overflow...)
* Security
--- WCF Ria Services & EF What are your thoughts on this subject? | | {
"source": [
"https://softwareengineering.stackexchange.com/questions/52925",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
53,086 | C# allows the use of #region / #endregion keywords to make areas of code collapsible in the editor. Whenever I do this though I do it to hide large chunks of code that could probably be refactored into other classes or methods. For example I have seen methods that contain 500 lines of code with 3 or 4 regions just to make it manageable. So is judicious use of regions a sign of trouble? It seems to be so to me. | A code smell is a symptom which indicates that there is a problem in the design which will potentially increase the number of bugs: this is not the case for regions, but regions can contribute creating code smells, like long methods. Since: An anti-pattern (or antipattern) is a pattern used in social or business operations or software engineering that may be commonly used but is ineffective and/or counterproductive in practice regions are anti-patterns. They require more work which doesn't increase the quality or the readability of the code, which doesn't reduce the number of bugs, and which may only make the code more complicate to refactor. Don't use regions inside methods; refactor instead Methods must be short . If there are only ten lines in a method, you probably wouldn't use regions to hide five of them when working on other five. Also, each method must do one and one only thing . Regions, on the other hand, are intended to separate different things . If your method does A, then B, it's logical to create two regions, but this is a wrong approach; instead, you should refactor the method into two separate methods. Using regions in this case can also make the refactoring more difficult. Imagine you have: private void DoSomething()
{
    var data = LoadData();
    #region Work with database
    var verification = VerifySomething();
    if (!verification)
    {
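        // note: this exception stops the flow; it is easy to forget once the "Work with database" region is collapsed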
        throw new DataCorruptedException();
    }
    Do(data);
    DoSomethingElse(data);
    #endregion
    #region Audit
    var auditEngine = InitializeAuditEngine();
    auditEngine.Submit(data);
    #endregion
} Collapsing the first region to concentrate on the second is not only risky: we can easily forget about the exception stopping the flow (there could be a guard clause with a return instead, which is even more difficult to spot), but we would also have a problem if the code were refactored this way: private void DoSomething()
{
    var data = LoadData();
    #region Work with database
    var verification = VerifySomething();
    var info = DoSomethingElse(data);
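    // 'verification' and 'info' are used again in the "Audit" region below, so the two regions can no longer be read independently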
    if (verification)
    {
        Do(data);
    }
    #endregion
    #region Audit
    var auditEngine = InitializeAuditEngine(info);
    auditEngine.Submit(
        verification ? new AcceptedDataAudit(data) : new CorruptedDataAudit(data));
    #endregion
} Now, regions make no sense, and you can't possibly read and understand the code in the second region without looking at the code in the first one. Another case I sometimes see is this one: public void DoSomething(string a, int b)
{
    #region Validation of arguments
    if (a == null)
    {
        throw new ArgumentNullException("a");
    }
    if (b <= 0)
    {
        throw new ArgumentOutOfRangeException("b", ...);
    }
    #endregion
    #region Do real work
    ...
    #endregion
} It's tempting to use regions when argument validation starts to span tens of LOC, but there is a better way to solve this problem: the one used by the .NET Framework source code: public void DoSomething(string a, int b)
{
    if (a == null)
    {
        throw new ArgumentNullException("a");
    }
    if (b <= 0)
    {
        throw new ArgumentOutOfRangeException("b", ...);
    }
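    // at this point the arguments are known to be valid; the private method below can assume valid input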
    InternalDoSomething(a, b);
}
private void InternalDoSomething(string a, int b)
{
    ...
} Don't use regions outside methods to group Some people use them to group together fields, properties, etc. This approach is wrong: if your code is StyleCop-compliant, then fields, properties, private methods, constructors, etc. are already grouped together and easy to find. If it's not, than it's time to start thinking about applying rules which ensure uniformity across your codebase. Other people use regions to hide lots of similar entities . For example, when you have a class with hundred fields (which makes at least 500 lines of code if you count the comments and the whitespace), you may be tempted to put those fields inside a region, collapse it, and forget about them. Again, you are doing it wrong: with so many fields in a class, you should think better about using inheritance or slice the object into several objects. Finally, some people are tempted to use regions to group together related things : an event with its delegate, or a method related to IO with other methods related to IO, etc. In the first case, it becomes a mess which is difficult to maintain, read and understand. In the second case, the better design would probably be to create several classes. Is there a good use for regions? No. There was a legacy use: generated code. Still, code generation tools just have to use partial classes instead. If C# has regions support, it's mostly because this legacy use, and because now that too many people used regions in their code, it would be impossible to remove them without breaking existent codebases. Think about it as about goto . The fact that the language or the IDE supports a feature doesn't mean that it should be used daily. StyleCop SA1124 rule is clear: you should not use regions. Never. Examples I'm currently doing a code review of my coworker's code. The codebase contains a lot of regions, and is actually a perfect example of both how to not use regions and why regions lead to bad code. Here are some examples: 4 000 LOC monster: I've recently read somewhere on Programmers.SE that when a file contains too many using s (after executing "Remove Unused Usings" command), it's a good sign that the class inside this file is doing too much. The same applies to the size of the file itself. While reviewing the code, I came across a 4 000 LOC file. It appeared that the author of this code simply copy-pasted the same 15-lines method hundreds of times, slightly changing the names of the variables and the called method. A simple regex allowed to trim the file from 4 000 LOC to 500 LOC, just by adding a few generics; I'm pretty sure that with a more clever refactoring, this class may be reduced to a few dozens of lines. By using regions, the author encouraged himself to ignore the fact that the code is impossible to maintain and poorly written, and to heavily duplicate the code instead of refactor it. Region “Do A”, Region “Do B”: Another excellent example was a monster initialization method which simply did task 1, then task 2, then task 3, etc. There were five or six tasks which were totally independent, each one initializing something in a container class. All those tasks were grouped into one method, and grouped into regions. This had one advantage: The method was pretty clear to understand by looking at the region names. This being said, the same method once refactored would be as clear as the original. The issues, on the other hand, were multiple: It wasn't obvious if there were dependencies between the regions. 
Hopefully, there was no reuse of variables; otherwise, the maintenance could have been even more of a nightmare. The method was nearly impossible to test. How would you easily know whether a method which does twenty things at a time does them all correctly? Fields region, properties region, constructor region: The reviewed code also contained a lot of regions grouping all the fields together, all the properties together, etc. This had an obvious problem: source code growth. When you open a file and see a huge list of fields, you are more inclined to refactor the class first, then work with the code. With regions, you get into the habit of collapsing stuff and forgetting about it. Another problem is that if you do it everywhere, you'll find yourself creating one-block regions, which doesn't make any sense. This was actually the case in the code I reviewed, where there were lots of #region Constructor blocks containing a single constructor. Finally, fields, properties, constructors, etc. should already be in order. If they are and they match the conventions (constants starting with a capital letter, etc.), it's already clear where one type of element stops and another begins, so you don't need to explicitly create regions for that. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/53086",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4335/"
]
} |
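To make the "fields region, properties region, constructor region" point above concrete, here is a minimal, hypothetical C# sketch (the class and its members are invented purely for illustration):
public class Rectangle
{
    #region Fields
    private readonly int width;
    private readonly int height;
    #endregion

    #region Constructor
    public Rectangle(int width, int height)
    {
        this.width = width;
        this.height = height;
    }
    #endregion

    #region Properties
    public int Area
    {
        get { return width * height; }
    }
    #endregion
}
Deleting every #region/#endregion pair changes nothing for the reader: the members are already grouped and ordered by convention, which is exactly the argument above.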
53,088 | Is it advisable to create multiple classes within a .cs file or should each .cs file have an individual class? For example: public class Items
{
public class Animal
{
}
public class Person
{
}
public class Object
{
}
} Dodging the fact for a minute that this is a poor example of good architecture, is having more than a single class in a .cs file a code smell? | The example you gave is actually fine in my opinion. You are declaring inner classes, so it is perfectly sensible to keep them in the same file. The only way around this would be to make your Items class a partial class and split it over multiple files; I'd consider this bad practice. My general policy for nested classes is that they should be small and private. There are two exceptions to this: (1) you are designing a class cluster (more common in Objective-C), so it may be sensible to use the partial class approach; (2) you need an enum that is only used with the public API of the parent class. In the latter case I prefer having a public enum declared inside the parent class instead of polluting my namespace; the enum being an "inner enum" effectively gives it a well-defined scope (see the sketch below). If you word the question a little differently and ask "Should I put each namespace-level class into its own file?", then my answer would be "yes". When designing classes we respect the Single Responsibility Principle. Reading code becomes a lot easier if its shape follows its semantics, hence splitting files by class is sensible. From a mechanical point of view, having a file per class has several advantages. You can open multiple classes at the same time in different windows. This is especially important since no serious developer works with fewer than two screens. Being able to have more context in front of my eyes means I can keep more context in my head. (Most IDEs will allow you to open the same file twice, but I find this awkward.) The next important aspect is source control and merging. By keeping your classes separate, you avoid a lot of merge hassle, because changes to separate classes land in separate files. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/53088",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
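A minimal sketch of the "inner enum" exception mentioned above (the class and enum names are invented for illustration):
public class Order
{
    // The enum is only meaningful as part of Order's public API, so nesting it
    // gives it a well-defined scope (Order.Status) instead of adding another
    // top-level type to the namespace.
    public enum Status
    {
        Pending,
        Shipped,
        Cancelled
    }

    public Status CurrentStatus { get; private set; }

    public void Ship()
    {
        CurrentStatus = Status.Shipped;
    }
}
Callers then refer to Order.Status.Shipped, which reads naturally and keeps the namespace clean.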
53,262 | Do you use both client-side and server-side validation techniques side by side when validating input from a user, e.g. via a contact form? If so, is it really necessary? Are you over-engineering? | Yes, and you should. This maintains instant user feedback without wasted postbacks, whilst also guarding against users disabling JavaScript. This is how the ASP.NET Validation Controls work (see the sketch below). It is certainly not over-engineering, as using one without the other has drawbacks. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/53262",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/14954/"
]
} |
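A hedged sketch of the server-side half in ASP.NET WebForms, assuming the markup contains a RequiredFieldValidator attached to an e-mail TextBox (the validator also emits the matching client-side JavaScript check automatically); the control and handler names are invented:
protected void SubmitButton_Click(object sender, EventArgs e)
{
    // Always re-check on the server: Page.IsValid runs the same validators
    // again, so a user with JavaScript disabled (or a hand-crafted POST)
    // is still caught here even though the client-side check never ran.
    if (!Page.IsValid)
    {
        return;
    }

    // Safe to process the submitted contact form at this point.
}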
53,274 | I've often seen comments like these used: function foo() {
...
} // foo
while (...) {
...
} // while
if (...) {
...
} // if and sometimes even as far as if (condition) {
...
} // if (condition) I've never understood this practice and thus never applied it. If your code is so long that you need a comment to know which block this ending } closes, then perhaps you should consider splitting it up into separate functions. Also, most developer tools are able to jump to the matching bracket. And finally, the last example is, for me, a clear violation of the DRY principle; if you change the condition, you have to remember to change the comment as well (or else it could get messy for the maintainer, or even for you). So why do people use this? Should we use it, or is it bad practice? | I would say that if your code is so long that you can't easily follow your braces, your code needs refactoring, in most languages. However, in templating languages (like PHP) it could be valid, because you might have a large block of HTML that separates the beginning and end of the condition or loop structure. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/53274",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2210/"
]
} |
53,302 | Possible Duplicate: Would you allow your programmers to use Messenger and social networks like Facebook? A manager may believe that using IM clients in the office is not acceptable, but many programmers use them for legitimate purposes, for example in order to easily contact one another. Do you think the IM chat prohibition is reasonable? | The trouble with those policies (IM is only one example; you could also cite firewalls blocking certain websites) is simple: they assume you can force people to work by cutting off their distractions. The fact is, when someone doesn't want to work, they will always find a way not to. At the end of the day, what matters is whether the job gets done. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/53302",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12697/"
]
} |
53,498 | I'm just starting to learn C#. Coming from a background in Java, C++ and Objective-C, I find C#'s Pascal-casing of its method names rather unique, and a tad difficult to get used to at first. What is the reasoning and philosophy behind this? I'm guessing it is because of C# properties. Unlike in Objective-C, where method names can be exactly the same as an instance variable, this is not the case with C#. I would guess one of the goals with properties (as with most of the languages that support them) is to make properties truly indistinguishable from variables and methods. So, one can have an "int x" in C#, and the corresponding property becomes X. To ensure that properties and methods are indistinguishable, all method names, I'm guessing, are also therefore expected to start with an uppercase letter. (This is just my hypothesis based on what I know of C# so far—I'm still learning.) I'm very curious to know how this curious guideline came into being (given that it's not something one sees in most other languages, where method names are expected to start with a lowercase letter). (EDIT: By Pascal-casing, I mean PascalCase, which is basically camelCase but starting with a capital letter. Method names typically start with a lowercase letter in most languages.) | It is a matter of taste. Someone once decided to use Pascal style for names and it became a standard. I have a wild guess that it was Anders Hejlsberg, who was the architect of Delphi, the successor of Pascal. The casing style there is the same as in C# (a small sketch of the convention follows below). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/53498",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6837/"
]
} |
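A small illustration of the convention being discussed (the class and its members are invented):
public class Account
{
    private decimal balance;             // private field: camelCase (or _camelCase)

    public decimal Balance               // public property: PascalCase
    {
        get { return balance; }
    }

    public void Deposit(decimal amount)  // public method: PascalCase, where Java
    {                                    // convention would use deposit(...)
        balance += amount;
    }
}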
53,612 | I'm a PHP programmer, and until now I have not needed to learn algorithms... Now I'm starting to learn Python (a real programming language), because I need to use matplotlib. Does it make sense to start by reading a Python algorithm book and then learn matplotlib and numpy, or should I jump straight to matplotlib and numpy? Best regards, | Absolutely. Without knowing algorithms, how else are you going to understand how a program does what it does? It's essential to understand algorithm development so you can program more efficiently and write better programs. Starting out, you should at least know the basics of concepts like control flow (maybe via state automata, but that's not always necessary) and Big O notation and how it can affect performance. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/53612",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17966/"
]
} |
53,624 | I'd like to get into web development using C++ as the "scripting language" on the server-side. My server infrastructure is *nix based, so doing web development in C++ on Azure is not applicable and C++/CLI ASP.NET is also not applicable. Separate from legacy CGI applications, can web development be done using C++ ? | Absolutely. There are even several frameworks for developing them, including Wt , cppcms , CSP , and others. FastCGI's mainline implementation is in C, and directly supports several languages , including C++. Any programming language that can parse strings can be used in CGI or a servlet. Any language that can implement bindings with C libraries can also be used to develop modules for ISAPI- or Apache-compatible servers. It's not particularly easy in C++, and good templating engines are few and far between, but it can be done. Of course, the question of whether this is a good idea is another matter entirely. :) Do note: Major websites like Amazon.com, eBay, and Google do use C++ for parts of their infrastructure. Realize, however, that Google only uses C++ for speed-critical systems, and Amazon.com only relatively recently switched away from Lisp (which angered some of their senior staff :). Facebook formerly compiled PHP to C++, but their HipHop compiler (written partly in C++) has since been retooled as a bytecode virtual machine. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/53624",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12536/"
]
} |
53,669 | I've found what I personally believe to be a bad habit that a lot of developers seem to have adopted. Code in various places in the applications I have seen is commented out (lots of it) and checked into the mainline. Now, I don't have a problem with people doing this on their own branches, but is it a good idea to comment out 60 or so lines of code and check it into trunk? A colleague of mine has unearthed a lot of this and had to spend half a day removing the commented-out code just to tidy things up. Is there any benefit in checking in commented-out code and leaving it for another developer to tidy up after you? After all, version control does have a history, so you can easily pull back any code that's been removed. | I see no benefit to this, especially in the main trunk. Commented-out code only provides a quick way for a developer to roll back. The whole point of source control is to have every working version, so that you can keep track of what has changed. Link - talks about commenting out large blocks and why they are ugly and hard to maintain. Comments do not get verified, and do not evolve with the code base around them. In that way, the commented code will NEVER be valid in the future... so why keep it at all when there is a perfectly good version that compiles just BEFORE it in the commit list of the main trunk? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/53669",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/15455/"
]
} |
53,747 | I work on a team that has been flat organizationally since its creation several months ago. My manager is non-technical and this means that our whole team is responsible for decision-making. My manager is beginning to realize that there are several benefits to having a lead developer, both for his sake (a single point of contact and single responsible party for tasks) and ours (dispute resolution, organized technical guidance, etc.). Because the team has been flat, one concern is that picking one lead developer may discourage the others. A non-developer suggested to my manager that rotating the lead developer is a possible way to avoid this issue: one developer would be lead one month, another the next, and so on. Is this a good idea? Why or why not? Keep in mind that this means every developer would take a turn as lead; all of them are good developers, but not necessarily equally suited to leadership. And if it is not a good idea, how do I recommend that we avoid this approach without seeming like it's merely for selfish reasons? | Don't rotate. I don't think anyone gains anything from the position being rotated (apart from the fact that the ones who don't deserve to be the lead might get more money than they are currently receiving). Having a brilliant lead developer who knows how to delegate, is in control, and is an experienced developer does wonders for the development process. He's a single source for the rest of the team to look up to and seek advice from. He's also the mediator between higher-level management and the core development team. I don't know of any managerial team that likes dealing with change (unless they're the ones instigating it). If you really are the best suited for the position, everyone would know that (e.g. higher-level management, your teammates, etc.). State that you don't believe rotating the position is worthwhile (if you believe so). Then sit back and let them do the appointing - refrain from name-dropping or any sort of self-promotion, as this would make you look unprofessional. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/53747",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2314/"
]
} |
53,827 | There is a developer, let's call him John (currently on a probationary period), at a pretty small company (approx. 10 people, 3 of them developers; one of the developers has been with the company a long time, knows the business processes, and can be considered the team leader) who doesn't want to use any IDE at all (he uses a plain text editor). The application this team is working on is a medium-sized Java application built on a Spring/Hibernate technology stack, and the team is refactoring and adding new features to launch a new version of the application in the near future. John's performance while working without an IDE on this application is lower than desirable; the team leader's (let's call him Bill) assumption is that this happens because John is not using an IDE. Bill tried to persuade John to use an IDE, but the idea met a lot of resistance, the main reason being: "I want to be in total control of what I am doing, so I need to write all the code by myself". How can Bill convince John to try an IDE? (Consider that Bill has already protected John from several complaints from the company owner about John's performance.) Update:
Bill decided to try to convince John one more time; if that attempt is unsuccessful, he won't try to force John to use an IDE and will instead look at whether the features John promised are delivered on time or not. | You've more or less already answered the question: He's on probation. He's not productive enough. So, he needs to be made clearly aware that: He needs to be more productive or he won't survive his probation. He is liable to be more productive with a proper IDE than with a good text editor. A good IDE is not about giving up control over the code you write; it's about providing you with tools that enable you to produce working code faster, regardless of whether you choose to use the code generation and templating facilities that may be available within the IDE. Lack of willingness to adapt to his environment might also be a concern. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/53827",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4164/"
]
} |
53,878 | This statement suggests that statically typed languages are not ideal for web sites: I’ll contrast that with building a
website. When rendering web pages,
often you have very many components
interacting on a web page. You have
buttons over here and little widgets
over there and there are dozens of
them on a webpage, as well as possibly
dozens or hundreds of web pages on
your website that are all dynamic.
With a system with a really large
surface area like that, using a
statically typed language is actually
quite inflexible. I would find it
painful probably to program in Scala
and render a web page with it, when I
want to interactively push around
buttons and what-not. If the whole
system has to be coherent, like the
whole system has to type check just to
be able to move a button around, I
think that can be really inflexible. Source: http://www.infoq.com/interviews/kallen-scala-twitter Is this correct? Why or why not? | I totally disagree. As systems grow bigger, statically typed languages ensure robustness at the component level and thus flexibility at the system level. Also, the example given by the author doesn't really make much sense; it rather seems as if he doesn't know that polymorphism can be achieved by means other than duck typing. There are a number of people who claim dynamic languages are superior, but that's usually based on a lack of experience with expressive type systems that, for example, support structural subtyping, algebraic datatypes and first-order functions. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/53878",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/19012/"
]
} |
53,880 | This question made me think that there was a better question to ask. What did you learn in school that you didn't care about at the time, but turned out to be useful or you had to relearn in the workplace because you had it in school, but didn't retain the information and you needed it? (I mean for software related jobs.) I think this might help college students identify some of what they really should be paying attention to while they are in school. | Girls. You may think I am joking but I am not. Don't go mad or anything, you still need to learn the academic stuff. But you also need to spend some time learning about the people, from the people around you. That includes the half of humanity who have completely different interests and attitudes from you and your friends, but who you will still want to get along with. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/53880",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1093/"
]
} |
53,948 | Coding is only one aspect to professional programming. My job requires me to code, but it also requires me to do other things for extended periods – sometimes days or weeks go by when I'm not just coding . I fear letting hard-won programming skills atrophy while I sit in meetings, draw architecture diagrams and annotate requirements. (Not to mention I don't trust people to write requirements who don't understand the code.) I can't just read books and magazines about coding. I'm involved in some open source projects in my free time, and stackoverflow and friends help a bit, because I get the opportunity to help people solve their programming problems without micromanaging, but neither of these are terribly structured, so it's tempting to work first on the problems I can solve easily. I guess what I'd like to find is a structured set of exercises (don't care what language or environment) that… …I can do periodically …has some kind of time requirement so I can tell if I've been goofing off …has some kind of scoring so I can tell if I'm making mistakes Is there such a thing? What would you do to keep your skills fresh? | Code katas come to mind right away. The idea is that these are repeatable exercises that you can practice until you know them cold, and you repeat them periodically to keep your chops up. Some are focused on programming, some are more open-ended and focus on thinking and design. They can be done in any language or environment and some people also use them to try out or learn new approaches (for example, test-driven development). The site I linked to above has many ideas for katas. Another fairly famous one is the Bowling Game from Uncle Bob Martin. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/53948",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/19033/"
]
} |
54,069 | Anecdotally, I've visited many .aspx websites that require a significant amount of load time for each page. Is my experience unique? If not, why might an ASP.NET website load slowly? Edit: It's now about 7 years later (12/29/2017). The good news is I don't see this problem much anymore, maybe because Google started penalizing sites that load too slowly. I now use ASP.NET MVC with good results, currently running on Vultr private virtual servers (Azure was way too slow when we tried it). Some of the worst offenders I see now are CMS systems like WordPress and Drupal, probably running on hardware that's too slow or underspecced for the amount of traffic the site gets. -HK1 | Five possibilities I can think of (aside from some advanced caching techniques and such): (1) improper sizing of the web server for ASP.NET (i.e. assuming a server sized for classic ASP would be fine); (2) forgetting to remove <compilation debug="true"/> from the web.config and getting less-than-optimal code (see the sketch below); (3) JIT compilation on the first visit; (4) code embedded in the page (as opposed to the compiled code-behind) that requires compilation before and in addition to the JIT; (5) ViewState (for ASP.NET WebForms) getting too big. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/54069",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17302/"
]
} |
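For the debug flag mentioned above, the fix is a one-line change in web.config; a minimal sketch of the relevant section (a real file will have more settings around it):
<configuration>
  <system.web>
    <!-- debug="true" disables several compiler and caching optimizations and is
         only meant for development machines; production servers want "false". -->
    <compilation debug="false" />
  </system.web>
</configuration>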
54,132 | I've been programming for a while on different languages. I never really studied programming at school nor worked on a team of more than two people (me included). Still, I've been a professional developer for over three years. Last year, I took over my first C# project and it ended up being fine. I can't help but think that because I learned and worked alone I must be missing some concepts/hints/edge. For those who've been solo developers before being part of a team, can you share your experience? Did you realize you were missing something? Did you find it hard? Did you learn faster after? | I can guarantee to you that everyone who is working in isolation, especially when writing code, but with any and all kinds of work, believes, 100%, that they are doing much better than they actually are. So, the first advantage you'll find (and I found) when starting to work with a team is actually a rather humbling one. Working with others, especially when they are talented, but even in the unusual circumstance where you are better than they are in every possible imaginable way, will, at first, highlight your own mistakes and weaknesses. If you are able and capable of taking criticisms well (even if they only come from your own observations of your work, as seen when working with others), the experience can be a good one. This first benefit lasts - as you work more and more with teams, and with more and more people, you will continuously get more and better feedback about your mistakes - which you can then use to correct. You'll also receive positive feedback about successes, but, especially in the programming world, negative feedback prevails (no one opens up an "anti-bug" in the bug tracking system to tell they looked at your module and saw an especially nice algorithm implementation). In the long term, there are other lessons you will learn, some of which are very difficult to learn in isolation: Writing legible code is difficult, especially when you're writing for other programmers, at different skill levels, to understand. Writing code which others have to bug fix is difficult. Helping others to fix your bugs requires skills you may not have. Fixing and integrating with code others have written takes skills you don't get to practice that often when working on your own. You can get some of this by integrating or fixing old code you wrote yourself, but only in a very limited sense. Project management is more important in a team, and activities you may not know about, or find useful, in isolation, become crucial in a team. Such as properly maintaining a clean release branch in CVS. Or testing code before checking in. It's easier to learn new technologies when there's someone around to explain them to you, and answer your questions about them as you learn. Interruptions are more frequent in a team, which can be both fun and frustrating. You'll learn how to switch contexts better when working with more teammates. Working in a team often requires structure that's not existent when working solo - from activities such as code-reviews, to simple but not necessarily obvious things such as always being at work on time. It's easier to specialize very deeply when in a team, with trusty team-mates. You'll find that learning something, then teaching it to others, and then answering detailed questions about it (and solving related problems), especially when the people you deal with are other programmers, will push you to levels of skill and knowledge you were not aware of | {
"source": [
"https://softwareengineering.stackexchange.com/questions/54132",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/19090/"
]
} |
54,155 | A long-time client has asked us to help screen their work machines for pornography. They're worried about liability if sensitive materials were found. Their main concerns (for obvious reasons) are video, audio, and image files. If possible, they'd also like to scan text-based documents for inappropriate content. They have a hierarchy of not-for-work content starting with blatantly illegal (I don't have to list details), moving down to obviously offensive, and also including things that may be offensive to some - think lingerie ads, joke cards featuring butt cracks, and anything related to Howie Mandel. My questions are: Is this ethical? I think it is, since every employee legally agrees that their work machine belongs to the company and is subject to search. The screenings are not to occur on personal machines brought to work. Is it feasible? I've done a lot of image processing/indexing but this seems like a whole new world of complexity. Any references to successful techniques for discovering porn? Is it appropriate for me to archive the results when something is discovered? | You can do this with 90% Headology, 10% software. Firstly, quietly scan employees' computers and build a database of files and sizes for each employee. Then leak a memo that all PCs will be scanned for questionable content, i.e. the bosses have a Shazam-like program that can identify porn, etc. Then, a couple of days later, scan the computers for files and sizes again. Look at any deleted files: are they movie or image files? Those are the employees you need to keep an eye on. Routinely scan those employees' PCs for images and movies, and manually check them for questionable content (a sketch of the inventory step follows below). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/54155",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17429/"
]
} |
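A minimal C# sketch of the "build a database of files and sizes" step, assuming read access to the scanned directories; the paths and the CSV output are placeholders, and a real tool would also handle access errors and proper CSV escaping:
using System;
using System.IO;

class FileInventory
{
    static void Main()
    {
        const string root = @"C:\Users";                 // hypothetical scan root
        const string output = @"C:\scan\inventory.csv";  // hypothetical output file

        using (var writer = new StreamWriter(output))
        {
            foreach (var path in Directory.EnumerateFiles(root, "*", SearchOption.AllDirectories))
            {
                var info = new FileInfo(path);
                // Record full path and size now; diff against a later scan to spot
                // files that quietly disappear after the memo "leaks".
                writer.WriteLine("{0},{1}", info.FullName, info.Length);
            }
        }
    }
}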
54,338 | I've read online on multiple occasions that MySQL is a bad database. The places I've read this include some threads on Reddit, but they never seem to delve into why it's a poor product. Is there any truth to this claim? I've never used it beyond a very simple CRUD scenario, and that was for a university project during my second year. What pitfalls, if any, are there when choosing MySQL as your database? | There are two different kinds of pitfalls: those from using MySQL as your flavor of RDBMS, and those from using an RDBMS instead of other types of databases. Using MySQL instead of alternate RDBMSs: See this Wikipedia comparison table for various comparisons of MySQL to other RDBMSs. You will very likely prefer Microsoft SQL Server if you are building an ASP.NET web application, as the system is designed to work well together, I believe for both developers and IT managers. You may prefer Oracle if you are in need of a more advanced DB set-up involving clustering (Oracle RAC) or advanced DB procedures. Not that MySQL doesn't support many of the features, but from what I've seen, you are much more likely to find an experienced DBA who knows this stuff for Oracle than for MySQL. See this question at SO for extensive comparisons of PostgreSQL to MySQL that say it better than I can. Even more SO: Disadvantages of mysql versus other databases. Using RDBMSs instead of other types: Read up on the idea of "NoSQL" and other DB types: NoSQL: If Only It Was That Easy (above is linked from); When to use MongoDB or other document oriented database systems? (more); https://softwareengineering.stackexchange.com/questions/5354/are-nosql-databases-going-to-take-the-place-of-relational-databases-is-sql-going ; When to use CouchDB vs RDBMS. RDBMSs make heavy use of indexing for performance. On very large databases, this can have an adverse effect on performance or even reduce the optimal number of rows per DB/shard, requiring more hardware. They are not as ideal if you are storing a massive amount of data but reading it less frequently. If you don't have lots of foreign relationships, relational databases might not be ideal (document storage). All that being said, MySQL is a great database, and I haven't worked at a company in the last 8 years that hasn't used it, in a wide variety of web applications (such as e-commerce, web sites/apps, enterprise/B2B, web games). For a large majority of typical web application use cases, it's a great choice. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/54338",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
54,359 | If you have to explain the concept of multi-threading to a seven year old kid how would you do it? I recently got this question in an interview. I came up with a story using jobs (the task to be done) and workers (the threads) but it was not entirely convincing (considering the kid is too young). If you were asked to describe this, how would you do it? | Describe what it is, just leave out the technical terms except for definitions: You have five jobs to do. You need to start working on all of them right now. Each job is a thread. You are the processor. Spend a little bit of time working on each job and then move to the next one, making sure you give attention to all of them. If you have more people, a job can only be worked on by one person at a time. Since each person can work on a different job, more people can get all the work done faster, if you have more than one job. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/54359",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17887/"
]
} |
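To put the "five jobs" analogy above into actual code, here is a tiny C# sketch using the Task library (the job bodies are invented); each job is an independent unit of work, and the scheduler divides the available processor time among them:
using System;
using System.Linq;
using System.Threading.Tasks;

class FiveJobs
{
    static void Main()
    {
        var jobs = Enumerable.Range(1, 5)
            .Select(i => Task.Run(() => Console.WriteLine("Working on job " + i)))
            .ToArray();

        Task.WaitAll(jobs);  // wait until every job has been finished
    }
}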
54,373 | I'm a bit confused about the whole NoSQL thing and such.
When would you choose to use something like MongoDB over something like Oracle or MySQL?
I don't really understand the "difference" as far as usage goes between them. From my understanding, NoSQL-type databases aren't meant to replace RDBMSes, but what exactly are they meant to do? | I've used CouchDB before for three pet projects: a micro-blogging system; saving information for a little note-taking app I made; and a general-purpose brainstorming application. The main reason why I chose it over something like MSSQL or MySQL is the flexibility you obtain when using it. No rigid schema. If three months down the line you need a certain table to have an extra field, and this and that, you just change it and it ripples out from there on out. I used Beginning CouchDB by Apress to learn how to use it. For example, CouchDB uses JSON to communicate to/from the database. If your language can POST data, then you can use it to communicate with the DB. Also read: Why should I use a document-based database instead of a relational database? on StackOverflow | {
"source": [
"https://softwareengineering.stackexchange.com/questions/54373",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
54,451 | What is the difference between a library, a framework, and an API? They all seem the same to me. I would like to hear people's thoughts on this. | A library is a collection of functions/objects that serves one particular purpose; you could use a library in a variety of projects. A framework is a collection of patterns and libraries to help with building an application. An API is an interface for other programs to interact with your program without having direct access. To put it another way, think of a library as an add-on/piece of an application, a framework as the skeleton of the application, and an API as an outward-facing part of said app (a small sketch of the distinction follows below). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/54451",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/10090/"
]
} |
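A rough, self-contained C# sketch of the library-versus-framework distinction above (all the types are invented for illustration): with a library your code decides when to call in, while a framework owns the control flow and calls your code back at points it defines; the API would be whatever outward-facing surface of the finished app other programs are allowed to hit.
using System;

public class GreetingLibrary
{
    // A library: a bag of functionality that *you* decide when to call.
    public string Greet(string name) { return "Hello, " + name; }
}

public abstract class AppFramework
{
    // A framework: it defines the skeleton and calls *your* code at the
    // points it chooses (inversion of control).
    protected abstract void OnStart();
    public void Run()
    {
        Console.WriteLine("framework booting...");
        OnStart();
    }
}

public class MyApp : AppFramework
{
    protected override void OnStart()
    {
        // Inside the framework's skeleton we are free to use libraries.
        Console.WriteLine(new GreetingLibrary().Greet("world"));
    }

    public static void Main() { new MyApp().Run(); }
}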
54,467 | I took up a small CSS challenge to solve for a client and I'm going to be paid at an hourly rate.
I eventually solved it; it took 5 hours, but I spent roughly 25% of the time on the wrong track, trying a CSS3 solution that only worked in recent browsers and finally discovering that no fallback is possible via JS (like I originally thought). Should I charge the client for that 25%? More details:
I didn't provide an estimate, I liked the challenge per se, so I started working on it before giving an estimate (but I have worked with him before, so I know he's not one of those people that have unrealistic expectations). At the very worst I will have spent 5 unpaid hours on an intriguing CSS challenge. And I will give the fairest possible estimate for both of us, since I will have already done the work. :) Edit:
Thank you all, I wish I could accept more than one answer! I ended up not billing him for the extra hours (I billed him for 3 and a half), but I mentioned them, so that he knows I worked more on it than I billed him for. Maybe that's why he immediately accepted the "estimate" (which in that case wasn't an estimate, hence the quotes). | I often have such situations where I spend a few hours doing something, then notice that there is an easier one-line solution, or that my first idea was just bad, etc. In general, in those cases, I distinguish between three situations: (1) The newly discovered solution was not obvious, and/or an average developer would probably be on the wrong track too, and/or the wrong track was a prerequisite for finding the final solution. In this case, I charge the customer for the time spent on the wrong track. (2) The newly discovered solution was not so obvious, but a lot of average developers would probably go this way directly. In other words, if I had thought harder before starting to write code, I could probably have found the final solution directly, or maybe not. In this case, I charge the customer, but reduce the price by half or by whatever percentage seems most adequate. (3) Obviously, I was too stupid, too sleepy, or didn't think at all before I started to write code, since the final solution was extremely easy to find. In this case, even if I spent two days on the wrong track, it's my own responsibility and the customer doesn't have to pay for that. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/54467",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/19171/"
]
} |
54,506 | I have noticed that this is a frequent issue among younglings from technical areas such as ours. In the beginning of our careers we simply don't know how to sell ourselves to our employers, and random guy #57 (who is a programmer, but not as good as you - technically) ends up getting a raise or a promotion because he knows how to communicate and market himself better than you. Many have probably seen this happen in the past, and most certainly many more will in the future. What kind of skill/ability (either technical, or of other nature) do you think is relevant to point out when doing a job interview or asking for a raise, besides listing all the programming languages and libraries you know? | Get things done. The people that have the power to promote you will only be impressed when they see results . Simply learning many libraries won't be enough to gain you any sort of promotion. It probably will, however, gain you respect from those immediately working with you. Also, don't think of it as 'selling' yourself. It's a case of showing that you're worth your weight in gold; this can be done by making it obvious to the higher-ups that you accomplish great work and that you're capable of achieving many things. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/54506",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5385/"
]
} |
54,621 | I was interviewing with a "too proud of my java skills"-looking person. He asked me " What is your knowledge on Java IO classes.. say.. hash maps? " He asked me to write a piece of java code on paper - instantiate a class and call one of the instance's methods. When I was done, he said my program wouldn't run. After 5 minutes of serious thinking, I gave up and asked why. He said I didn't write a main function so it wouldn't run. ON PAPER. [I am too furious to continue with the stupidity...] Believe me it wasn't trick questions or a psychic or anger management evaluation thing. I can tell from his face, he was proud of these questions. That " developer " was supposed to " judge " the candidates. I can think of several things: Hit him with a chair (which I so desperately wanted to) and walk out. Simply walk out. Ridicule him saying he didn't make sense. Politely let him know that he didn't make sense and go on to try and answer the questions. Don't tell him anything, but simply go on to try and answer the questions. So far, I have tried just 4 and 5. It hasn't helped. Unfortunately many candidates seem to do the same and remain polite but this lets these kind of "developers" just keep ascending up the corporate ladder, gradually getting the capacity to pi** off more and more people. How do you handle these interviewers without bursting your veins? What is the proper way to handle this, yet maintain your reputation if other potential employers were to ever get to know what happened here? Is there anything you can do or should you even try to fix this? P.S. Let me admit that my anger has been amplified many times by the facts: He was smiling like you wouldn't believe. I got so many (20 or so) calls from that company the day before, asking me to come to the interview, that I couldn't do any work that day. I wasted a paid day
off. | Laugh along with him. "Oh yes! No main() function. Also, it's written on a piece of paper, which couldn't execute code anyway. And I forgot to draw a 'Compile' button. Ho ho, we're funny guys!" Then try to move onto the next question. Yes, he is nit-picking, but it's really nothing to get upset about. Make it apparent that you think the answer he was looking for was so obvious to you that you didn't think it worth mentioning. He's probably interviewing a range of candidates from programming geniuses through to people who've never programmed and are just desperate for a job. Sometimes as an interviewer it's worth checking the obvious. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/54621",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
54,717 | Are there reasons other than budget for hiring "entry-level" programmers? | Great developers once had no experience, too. Great developers are not only expensive but also hard to find. So, if you have a high-quality screening and hiring process, hiring entry-level developers can be a great way to find those up-and-comers and turn them into great developers . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/54717",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5402/"
]
} |
54,718 | While there is such an operator - ** - in Python, I was wondering why Java and C++ don't have one too. It is easy to make one for classes you define in C++ with operator overloading (and I believe such a thing is possible also in Java), but when talking about primitive types such as int, double and so on, you'll have to use a library function like Math.pow (and usually have to cast both to double). So - why not define such an operator for primitive types? | Generally speaking, the primitive operators in C (and by extension C++) are designed to be implementable by simple hardware in roughly a single instruction. Something like exponentiation often requires software support, so it's not there by default. Also, it's provided by the standard library of the language in the form of std::pow. Finally, doing this for integer datatypes wouldn't make much sense, because even small bases and exponents quickly blow out the range of an int. Sure, you could do this for doubles and floats but not ints, but why make the language inconsistent for a rarely used feature? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/54718",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/23673/"
]
} |
54,770 | I usually leave my email address as a courtesy in case someone wants to ask me a question about it later. Do other people leave more or less information than that? Does anyone leave a phone number?? | I almost never leave my name or e-mail address: (1) it tends to get copy-pasted (yeah, bad practice), and I end up with people contacting me about code I didn't write; (2) when the code is modified, the contact info is not removed or updated, and I end up with people contacting me about code that has changed so much I don't even recognize it. Instead, I prefer pointing people to the code repository (SVN, Git, ...): there, they can have the full history -- and find out who wrote / modified the portion they have a question about. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/54770",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/19258/"
]
} |
55,047 | All programming languages have their design flaws, simply because no single language can be perfect, just as with most (all?) other things. That aside, which design fault in a programming language has annoyed you the most throughout your history as a programmer? Note that a language being "bad" just because it isn't designed for a specific thing isn't a design flaw, but a design decision, so don't list such annoyances. If a language is ill-suited for what it is designed for, that is of course a flaw in the design. Implementation-specific and under-the-hood things do not count either. | One of my big annoyances is the way switch cases in C-derived languages default to falling through to the next case if you forget to use break. I understand that this is useful in very low-level code (e.g. Duff's Device), but it is usually inappropriate for application-level code, and is a common source of coding errors. I remember in about 1995, when I was reading about the details of Java for the first time: when I got to the part about the switch statement, I was very disappointed that they had retained the default fall-through behaviour. This just makes switch into a glorified goto with another name. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/55047",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12750/"
]
} |
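As a hedged aside to the fall-through answer above: C#, another C-derived language, turned the accidental version of this into a compile-time error while still allowing explicit jumps; a small sketch (the values and messages are invented):
using System;

class SwitchDemo
{
    static void Describe(int dayOfWeek)
    {
        switch (dayOfWeek)
        {
            case 6:
            case 7:              // stacking empty labels is still allowed
                Console.WriteLine("weekend");
                break;           // omitting this break would not compile in C#
            case 1:
                Console.WriteLine("back to work");
                goto case 2;     // deliberate fall-through has to be spelled out
            case 2:
                Console.WriteLine("regular weekday");
                break;
        }
    }

    static void Main() { Describe(1); }
}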
55,104 | I'm not a game developer or anything, but I know that Java is not very widely used for game development. Java should be fast enough for most games, so where's the catch? I can think of some reasons: lack of game developers with expertise in Java; lack of good game development frameworks; programmers don't want to accept Java as a game programming language (most only accept C++ for that?); no support for game consoles (though the PC market still exists). It could of course be something else. Could someone who knows the business better than me explain why Java isn't gaining momentum when it comes to game development? | Several reasons: In the old days, you needed "direct access" for performance and UI. This predates VM languages like Java and C#. Most consoles (e.g., 360, PS3) do not have a JVM, so you cannot reuse code from the PC version. It is much easier to compile C++ code to support various devices. Most mainstream game engines (e.g., Unreal) have C++ bindings. There are some Java connectors (e.g., for OpenGL) but nothing comparable. For PC gaming, DirectX doesn't really have strong Java support (if any at all). Web-based games run in JavaScript or Flash. You could write them in Java, though, using things like GWT. The iPhone runs an Objective-C variant. Java is primarily used in Android games these days, simply because it's the primary language for that platform. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/55104",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12750/"
]
} |
55,284 | I try to teach myself a new programming language at regular intervals. Recently, I've read how Lisp and its dialects are at the complete opposite end of the spectrum from languages like C/C++, which made me curious enough to want to know more about it. However, two things are unclear to me, and I'm looking for guidance on them: Is LISP still practiced/used in today's world, or is it a legacy language like FORTRAN/COBOL? I mean, apart from maintaining existing code, is it used on new projects at all? What is the most widely used dialect? I came across Scheme and Common Lisp as the 2 most prevalent dialects, and wanted your opinion as to which is the most favored/useful one to learn - and I would be immensely gratified if you can suggest any resources for a rank beginner to start from. While eager to learn a language which is fundamentally different from the procedural languages I'm used to, I don't want to invest undue effort in something if it's totally obsolete - I'd still learn it if it was professionally "dead", but only from an academic perspective... | I rather like Scheme; if you want to work with the JVM you should check out Clojure, which is a Lisp designed to work on the JVM. And yes, Lisp is still worth learning, to see how powerful such a minimal design can be! The folks who created Lisp got some things really right. It's amazing how many of the cool new features of modern languages Lisp had in the 1960s! For an embedded Scheme, try Guile: http://www.gnu.org/s/guile/ | {
"source": [
"https://softwareengineering.stackexchange.com/questions/55284",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/15028/"
]
} |
55,326 | I am working on a product that I don't feel is completely ready, but I have a set of users that are very interested in using it now as "alpha" testers. I would like to give them the product now for free as "alpha" testers, but I would like to license the software later. Is this possible? Can anyone point me to any links/books/articles/etc.? Thanks. EDIT: Due to my lack of clarity and the responses to the question, I thought I should add this statement. I haven't decided if I am going to close-source or open-source this project yet. The user base that wants to get their hands on it now has kind of surprised me, and I was concerned about what my options are as far as being able to give it to them now as open-source and later change to closed-source, or even vice versa. Thanks to everyone who has answered and commented. I appreciate the insights. | If you own all the code, then you can re-release it under whatever license you want. The problem comes if someone has contributed patches to the project. What usually happens is that they own that code, and they contributed it under the current software license. (You should have some sort of contributors agreement that covers this.) Therefore, to re-license it you will need their permission. Depending on how many contributors you have, this could be hard. For a reference, look at the OpenStreetMap license change and how long that is taking. But IANAL, disclaimer, etc. And what people will make of your license change is another matter, as another answer comments on. EDIT: I answered this as it relates to open source products. But re-reading the question, I see it's not clear whether it's closed source or open. If you are talking about 2 closed-source licenses it's easy: if you own all the code, then you can re-release it under whatever license you want. Just give your alpha version a commercial closed-source license, with all the usual terms (no copying, etc.), just with a price of zero. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/55326",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/19431/"
]
} |