source_id | question | response | metadata |
---|---|---|---|
55,373 | Just ran across this, and wondering if anyone has a way to prove or disprove this statement: "Something to keep in mind ... what's the half-life of knowledge in high tech? It tracks with Moore's Law: half of everything you know will be obsolete in 18-24 months." SOURCE: within an answer by Craig Trader to the question "What is the single most effective thing you did to improve your programming skills?" | This statement applies only to ephemeral technologies, which you should only learn as needed anyway. That said, you're going to learn a lot of them over your career. Fundamental programming principles and techniques are eternal. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/55373",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4942/"
]
} |
55,539 | Note: I am aware of this question. This question is a bit more specific and in-depth, however, focusing on reading the actual code rather than debugging it or asking the author. As a student in an introductory-level computer science class, my friends occasionally ask me to help them with their assignments. Programming is something I'm very proud of, so I'm always happy to oblige. However, I usually have difficulty interpreting their source code. Sometimes this is due to a strange or inconsistent style, sometimes it's due to strange design requirements specified in the assignment, and sometimes it's just due to my stupidity. In any case, I end up looking like an idiot staring at the screen for several minutes saying "Uh..." I usually check for the common errors first - missing semicolons or parentheses, using commas instead of extractor operators, etc. The trouble comes when that fails. I often can't step through with a debugger because it's a syntax error, and I often can't ask the author because he/she him/herself doesn't understand the design decisions. How do you typically read the source code of others? Do you read through the code from top-down, or do you follow each function as it's called? How do you know when to say "It's time to refactor?" | First tip: use an IDE (or a very good editor :)) to spot syntax errors, misplaced parentheses and other trivial mistakes. Second step: Autoformat all code in a format you feel comfortable with. You'd think this doesn't matter much but amazingly, it does. Don't be afraid to rename local variables if they are poorly named. (If you have access to the full system, you can rename anything and you should.) Add comments to yourself when you discover what a certain function/method is doing. Be patient. Understanding alien code isn't easy but there's always a breakthrough moment when most pieces of the jigsaw suddenly fall into place. Until that point it's all hard work and drudgery I'm afraid. The good news is that with practice this eureka moment will come sooner. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/55539",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8810/"
]
} |
55,679 | I am a true believer in Model Driven Development; I think it has the potential to increase productivity, quality and predictability. Looking at MetaEdit, the results are amazing, and Mendix in the Netherlands is growing very, very fast and has great results. I also know there are a lot of problems: versioning of generators, templates and frameworks; projects that just aren't right for model-driven development (not enough repetition); higher risks (when the first project fails, you have fewer results than you would have with more traditional development); etc. Still, these problems seem solvable, and the benefits should outweigh the effort needed. Question: What do you see as the biggest problems that make you not even consider model-driven development? I want to use these answers not just for my own understanding but also as a possible source for a series of internal articles I plan to write. | There is no golden hammer. What works well in one domain is pretty useless in another. There is some inherent complexity in software development, and no magic tool will remove it. One might also argue that the generation of code is only useful if the language itself (or the framework) is not high-level enough to allow for powerful abstractions that would make MDD relatively pointless. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/55679",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/10422/"
]
} |
55,692 | I am working in an environment wherein we have many projects with strict deadlines on deliverables. We even talk directly to the clients, so getting the jobs done, and fast, is a must. My issue is that I'd always write code for the first solution that comes to my mind, which of course I thought was best at that moment. It always ends up ugly, though, and I'd later realize that there are better ways to do it but can't afford to change due to time restrictions. Are there any tips by which I could make my code efficient yet deliver on time? | If the code needs to be maintained, explain that additional time is needed to make the code more maintainable, which will save them money on the back end. In other words, make maintainable code a requirement. If they don't care about that, I don't think you need to do anything different, other than getting better all the time and doing the best you can to incorporate best practices whenever possible. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/55692",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/19266/"
]
} |
55,883 | I have a client right now requiring me to develop a school enrollment system. This is the first time I'm facing this kind of challenge; most of the software I created in the past was not that complex. I know most of you have created complex software, and I just want your advice on this: should I design the front end or the back end first? Thanks! Here's the conclusion of an article I found on the internet a while ago that I want to share: http://www.skitoy.com/p/front-end-vs-back-end-developers-my-take/157 "Front-end vs. back-end developers (my take): again, it's a matter of training; some broad-stroke generalizations. Front-end developers: typically don't have a CS degree, or have a CS degree from a 3rd-tier school; work in languages similar to Basic (see 'PHP is Basic'); have a visual skill in converting Photoshop documents to CSS/HTML/etc.; have a high tolerance for iterative programming, due to type-free languages. Back-end developers: have a CS degree or lots of experience; tend to be more systematic in their problem-solving approach; don't mind spending days finding the one object that is leaking; try to build tools to solve problems." | If you start at the back, and go forwards, you run the risk of misunderstanding the client. As you will be creating things that they can't easily see and comprehend, they can't participate very easily in verifying whether you meet the requirements. This means that you might waste a lot of work. If you start at the front, and go backwards, you run the risk that the client will think it's almost done, when all you have done is draw a simple form on the screen. They may then question why it's taking so long, since you had it mostly finished in a few days. You also run the risk of painting yourself into a corner, when you realise that you have to do some complicated work to marry the front to the back, when a more suitable front-end would have been simpler. IMO, you should work on it feature-first. Write the front and back end together, for each feature in the system. This gives the client greater visibility of progress, and it gives them the opportunity to say "no, that's not what I meant", without causing you too much distress. That said, if this is a very large project in which you need to consider the server hardware or the capabilities of any software you rely on (e.g. which database you are using), then you should probably have a good think about that part first. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/55883",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/19543/"
]
} |
56,168 | Everything I've been reading and researching up to this point describes how Agile/Scrum works great with teams of about 4 to 6 members, maybe even more. In my current shop, we have about 8 developers or so, but given the nature of the volume of projects and the number of departments we support, we never have more than 1 or 2 folks assigned to a given project. Can I still use Agile/Scrum with a team of 1 or 2 developers? I'm working on making the pitch to my manager to start working with this methodology, but I need to be able to explain how to scale things back for a small developer crew, or convince them to make sure we get more members on a given project. | You sure can use certain agile principles in your projects; you don't have to use Scrum, so use whatever will work best for you. You can definitely benefit from some XP methods and some Scrum practices, but probably not "by the book": a 1-2 person team is just too small even for the little overhead Scrum brings. Start with what the book says and then drop whatever you feel is irrelevant after some time. Just don't drop retrospectives; it is worth the time spent discussing the problems you have and finding solutions for them. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/56168",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8095/"
]
} |
56,215 | In C, you cannot have the function definition/implementation inside the header file. However, in C++ you can have a full method implementation inside the header file. Why is the behaviour different? | In C, if you define a function in a header file, then that function will appear in each module that is compiled that includes that header file, and a public symbol will be exported for the function. So if function additup is defined in header.h , and foo.c and bar.c both include header.h , then foo.o and bar.o will both include copies of additup . When you go to link those two object files together, the linker will see that the symbol additup is defined more than once, and won't allow it. If you declare the function to be static, then no symbol will be exported. The object files foo.o and bar.o will still both contain separate copies of the code for the function, and they will be able to use them, but the linker won't be able to see any copy of the function, so it won't complain. Of course, no other module will be able to see the function, either. And your program will be bloated up with two identical copies of the same function. If you only declare the function in the header file, but do not define it, and then define it in just one module, then the linker will see one copy of the function, and every module in your program will be able to see it and use it. And your compiled program will contain just one copy of the function. So, you can have the function definition in the header file in C, it's just bad style, bad form, and an all-around bad idea. (By "declare", I mean provide a function prototype without a body; by "define" I mean provide the actual code of the function body; this is standard C terminology.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/56215",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9298/"
]
} |
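A minimal sketch of the multiple-definition problem the answer to 56,215 describes. The function name `additup` comes from that answer; the file layout, the linker message, and the suggested fixes are illustrative assumptions, written as C-compatible C++ (a non-inline free function behaves the same way at link time whether the files are compiled as C or C++):

```cpp
// ---- header.h ----------------------------------------------------------
// Defining (not merely declaring) the function here is the problem:
int additup(int a, int b) { return a + b; }   // a copy is emitted into every object file that includes this

// ---- foo.cpp (does: #include "header.h") -------------------------------
int foo() { return additup(1, 2); }

// ---- bar.cpp (does: #include "header.h") -------------------------------
int bar() { return additup(3, 4); }

// Linking foo.o and bar.o now fails: "multiple definition of `additup'".
//
// Option 1 (links, but bloats the binary -- the "bad form" the answer mentions):
//   static int additup(int a, int b) { return a + b; }   // each object file keeps a private copy
//
// Option 2 (the usual style): declare in the header, define in exactly one source file.
//   header.h:     int additup(int a, int b);                    // declaration only
//   additup.cpp:  int additup(int a, int b) { return a + b; }   // the single definition
//
// (In C++, marking the function `inline` is a third option -- which is also why member
// functions defined inside a class body in a header are fine: they are implicitly inline.)
```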
56,239 | I find that I'm having a great deal of trouble staying alert 8 hours per day. I've heard of people who've negotiated work contracts of just 4 hours/day, arguing that they won't be able to do much more in eight hours. I am often overwhelmed with drowsiness, boredom, distraction. Some days, I seem to blaze through eight hours in a furious explosion of productivity; other days, I hardly get anything done at all. Most days, it's somewhere in between, and I feel bad for wasting a lot of time because I can't muster the concentration to be my best throughout much of the day. I'd like to hear your experiences (tell me I'm not alone!), and, if found, your solutions to this dilemma. Are you productive 8 hours/day almost every day? How? | If we define "productivity" as physically producing usable, functioning code, then it is ~3 hours/day tops, more like ~2 hours/day on average. And don't feel bad if you can't write code all day - most of the work happens in your head. Granted, this might be an issue with managers caught up in the "Why isn't Sam typing?" mindset. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/56239",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9505/"
]
} |
56,375 | We're all aware that magic numbers (hard-coded values) can wreak havoc in your program, especially when it's time to modify a section of code that has no comments, but where do you draw the line? For instance, if you have a function that calculates the number of seconds between two days, do you replace seconds = num_days * 24 * 60 * 60 with seconds = num_days * HOURS_PER_DAY * MINUTES_PER_HOUR * SECONDS_PER_MINUTE ? At what point do you decide that it is completely obvious what the hard-coded value means and leave it alone? | There are two reasons to use symbolic constants instead of numeric literals: To simplify maintenance if the magic numbers change. This does not apply to your example. It is extremely unlikely that the number of seconds in an hour, or the number of hours in a day, will change. To improve readability. The expression "24*60*60" is pretty obvious to almost everyone. "SECONDS_PER_DAY" is too, but if you are hunting a bug, you may have to go check that SECONDS_PER_DAY was defined correctly. There is value in brevity. For magic numbers that appear exactly once, and are independent of the rest of the program, deciding whether to create a symbol for that number is a matter of taste. If there is any doubt, go ahead and create a symbol. Do not do this: public static final int THREE = 3; | {
"source": [
"https://softwareengineering.stackexchange.com/questions/56375",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/13297/"
]
} |
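For the magic-number entry above (56,375), a small hedged illustration in C++ of the two forms the question compares; the constant names are the ones the question proposes, and the function names are hypothetical:

```cpp
#include <cstdint>

// Named constants: slightly longer, but the names are self-checking.
constexpr std::int64_t SECONDS_PER_MINUTE = 60;
constexpr std::int64_t MINUTES_PER_HOUR   = 60;
constexpr std::int64_t HOURS_PER_DAY      = 24;

constexpr std::int64_t seconds_named(std::int64_t num_days) {
    return num_days * HOURS_PER_DAY * MINUTES_PER_HOUR * SECONDS_PER_MINUTE;
}

// Literal version: shorter, and "24 * 60 * 60" is obvious to most readers.
constexpr std::int64_t seconds_literal(std::int64_t num_days) {
    return num_days * 24 * 60 * 60;
}

static_assert(seconds_named(2) == seconds_literal(2), "both forms agree");

// And, as the answer says, never: constexpr int THREE = 3;
```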
56,384 | I simply cannot figure this out. Almost all monitors have an aspect ratio where the width is much bigger than the height, and yet almost all websites are designed exactly the other way round? I am not really a web developer and am just experimenting with stuff at the moment, but this madness baffles me!!! Edit: The point is not that I would like to limit the height of a website. The point is that I'd want it to somehow fill all available space when I have my 1920x1080 in landscape mode. Edit 2: See this to understand what I am saying | There's a limit to how far the eye can scan without losing track of what line is next when you get to the end of the current line. There have been a number of studies (including this one) about optimum line width and readability. You might be able to boil it down to degrees, but knowing what is optimal at that point requires knowing how far the user is from the monitor. That's why most recommendations are for measurements in ems, characters, or words. The size in pixels is then dependent on the size of the font, but the general consensus still works. I personally would have a very hard time reading an article if it was formatted to use the full width of a 1080p screen. I'm sure you would as well. Our current conventions are derived from print layout design guidelines, and the layout guidelines haven't materially changed. The only difference is that we can interact with the page in a browser. Additional commentary (from a question posed in the comments): There are limitations to what HTML can do for you on a screen. For one thing, there are no pages. Only scroll bars. Designing a reasonable behavior for how a browser should display markup is a very difficult proposition. The CSS3 properties for multi-column layout haven't made it out of the candidate recommendation stage yet. There are hacks to play with it described in this article at A List Apart. However, it's just not quite ready for prime time on most sites yet. As long as your content doesn't require scrolling you should be OK. As soon as it does, the multi-column approach can present some challenges on how the user interacts with the site. Commentary on the Current De Facto Standard: To address the way you reworded your question, and the example you provided, understand that web design has to balance a number of potentially conflicting requirements--and still keep the website looking professional. In order to balance the needs of the designers and the clients, most development houses design for one resolution. That resolution is based on data that is readily available. It can also be based on historical web traffic records, or knowledge of the target environment. The most common screen widths (1024, 1280, 1366) are close enough so that if you design for the smallest (1024px) and center the design on the page, it will still look nice on the other slightly larger horizontal dimensions. It is very difficult to get everything aligned right in any format, much less have to support multiple formats depending on the screen width. 1080p users are growing in number, but still make up a very small percentage of everyone they have to support. In case you are interested, I highly recommend perusing some of the design articles at "A List Apart". You'll start to get the idea that doing what you want is more difficult than it looks. When I posted a question about designing for 1080p, the initial responses amounted to "Make everything bigger". It's something that just hasn't been on the radar for most web designers yet. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/56384",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/18277/"
]
} |
56,490 | I have been using open source projects for a while and been developing upon the open source applications and every so often I come across the words 'Nightly Build' and I have always been curious as to what it actually means. Does it literally mean the projects are done purely as side projects (usually at night after everyone has finished their day jobs) and there's no true contributor/dedicated development team or is it more complex than that? | No, it means that every night, everything that has been checked into source control is built. That build is a "nightly build". | {
"source": [
"https://softwareengineering.stackexchange.com/questions/56490",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/10964/"
]
} |
56,669 | I'm wondering whether or not I've done the right thing as a contractor. Basically I'm into my 3rd month, and my current client messed up the payment in the first month, and I just found out that they are again late in paying me for my 2nd month. It wouldn't be so bad if I wasn't in a bit of a financial situation due to this being my first contract experience. But as a matter of principle, I walked out on them and will be telling them that I will not be going back in until they resolve the pay. Part of me feels as though this was not a very professional thing to do, but I also don't feel that it was very professional for them to mess my payments up, twice in a row. Did I make the right decision? I still want to work for these guys and enjoy the job, but I have a life to attend to that requires finances and I can't afford to keep getting messed up with pay like this. I attempted to phrase the question to be oriented around contractors' behaviour around clients that mistreat them, and as some of the answers posted so far show, it's a good discussion. The answers coming in are great around different subjective situations. Update: In answer to some of the responses, the set-up is Me -> Umbrella Company -> Client. I am an employee of the Umbrella Company, whose terms are that I do not get paid unless the client pays up. Thus, when I found out that the client was going to be late paying the Umbrella Company, I was quite upset. Also, I do believe that in THIS instance, it was a bad move to have simply walked out. The most professional way to handle it would have been to have made my issues known to my immediate manager and then let him resolve it; instead, his boss and others found out way before him and he was left with crap to deal with, with no chance of handling it before it got out of control. If I was still unhappy with their solution, then I could have told the Umbrella Company that I wasn't happy and was prepared to walk out, and then they could have advised me in a more professional way what my options were. As I later found out, there were lots of other, more professional ways of handling this. However, I've not contracted for long and had a very emotional response to a situation that I've not been in before. I'm positive that a more professional and mature response would have been anything but simply walking out, creating a difficult situation for my boss and now other people in the company and myself. Absolutely great answers so far. Thank you all for your experienced advice. | I don't believe you did the right thing, honestly. If you truly enjoyed the job and wanted to work for these people, you would not have resorted to burning a bridge with them. Pay issues are always tricky, and it's not terribly professional for them to have botched your pay in the first place, but the question also relates to a definition of severity. Did they botch your pay through some administrative (re: fixable) situation, or is the company having financial difficulty and unable to pay? If it is simply administrative, you should have brought the issue to their attention, asserted the importance/severity of the problem and allowed them to address your concerns in a more suitable and professional manner. By walking out, you have shown yourself to be completely self-serving and not particularly dependable (in their eyes). It does not matter that you had a good reason for walking out; they will only notice that when the chips were down you were not there. Contracting in this manner is a very political game, and in politics perception is everything. It doesn't matter what the truth is, it only matters how they perceive it. On the other hand, if your pay was botched because of financial difficulty, creative accounting or any number of other underhanded financial tricks that many companies pull just to avoid spending the extra dollar right this minute, then you've absolutely made the right decision. This would not be a client you want to have. The problem will only continue and worsen until you are essentially working for free. I believe you should always value your work, and you should never let anyone take you for granted. The contracting game must be handled with a little more delicacy in demeanor, though. Sometimes a little patience and understanding can breed a good business relationship, particularly if the company you're working for is made aware of the situation they've put you in and learns that you stuck with them through it. Think of it this way: How far would you go for someone who did that for you? All of that being said, if you're experiencing financial difficulty then contracting may not be the ideal option for you. You need to be able to budget these little fiascos into your life. Not everyone will be able to pay you immediately. They're not bad customers, they just don't have the luxury of paying right this minute. If you can't handle a 60-day swing in pay, you may not survive long as an independent contractor, and you may consider trying to latch onto a larger contracting firm that can broker jobs for you. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/56669",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/15177/"
]
} |
56,927 | I want to use SVG Edit for a project. This software is distributed under the Apache 2 license. I've seen that: all copies, modified or unmodified, are accompanied by a copy of the licence; all modifications are clearly marked as being the work of the modifier; all notices of copyright, trademark and patent rights are reproduced accurately in distributed copies; and the licensee does not use any trademarks that belong to the licensor. Do these pertain to the code, or should I display the license somewhere in the GUI? The original software displays a "powered by SVG Edit"; is it ok if I remove this? And most importantly: what is the correct etiquette for doing this? I don't want to be a jerk, but at the same time I want to simplify the UI as much as possible, and removing the link will be part of it if it's not considered rude. | You do not need to display the license in the GUI, under any circumstances. For software licensed under the Apache License Version 2.0 (APLv2), it is quite okay to modify the software in the way that you suggest. That license encourages modification. The license assures your freedom to remove "powered by SVG Edit" in your modified version. However, see the APLv2 (section 4(b)) about your obligations regarding the NOTICES file that ships with the APLv2 software. You are required to display its contents in a way that is appropriate to the software. (Mind you, SVG Edit probably already does this.) You may not remove the "powered by SVG Edit" if it so happens that that comes from the NOTICES file. But, if you distribute the NOTICES file and the source code, then you are exempted from this. See section 4(b) of the APLv2 to better understand your options. In any case, what you want to do is not rude, especially if it makes your derivative work better. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/56927",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9667/"
]
} |
56,935 | I read somewhere that when using C++ it is recommended not to use pointers. Why are pointers such a bad idea when you are using C++? For C programmers who are used to using pointers, what is the better alternative and approach in C++? | I think they mean you should use smart pointers instead of regular pointers. "In computer science, a smart pointer is an abstract data type that simulates a pointer while providing additional features, such as automatic garbage collection or bounds checking. These additional features are intended to reduce bugs caused by the misuse of pointers while retaining efficiency. Smart pointers typically keep track of the objects they point to for the purpose of memory management. The misuse of pointers is a major source of bugs: the constant allocation, deallocation and referencing that must be performed by a program written using pointers introduces the risk that memory leaks will occur. Smart pointers try to prevent memory leaks by making the resource deallocation automatic: when the pointer (or the last in a series of pointers) to an object is destroyed, for example because it goes out of scope, the pointed object is destroyed too." In C++ the emphasis would be on garbage collection and preventing memory leaks (just to name two). Pointers are a fundamental part of the language, so not using them is pretty much impossible except in the most trivial of programs. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/56935",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9298/"
]
} |
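For the smart-pointer answer above (56,935), a brief sketch of what replacing owning raw pointers with standard smart pointers looks like; `Widget` and the function names are placeholders, and the snippet assumes C++14 for std::make_unique:

```cpp
#include <memory>
#include <vector>

struct Widget { int value = 0; };

void raw_pointer_style() {
    Widget* w = new Widget;
    w->value = 42;
    // If anything between new and delete returns early or throws,
    // the delete is skipped and the Widget leaks.
    delete w;
}

void smart_pointer_style() {
    auto w = std::make_unique<Widget>();        // sole owner
    w->value = 42;
    // Destroyed automatically on every exit path: no delete, no leak.

    auto shared = std::make_shared<Widget>();   // reference-counted ownership
    std::vector<std::shared_ptr<Widget>> owners{shared, shared};
    // The Widget is freed when the last shared_ptr goes away.
}

// Non-owning uses (iterating, "points at but does not own") can still be plain
// pointers or references; smart pointers replace *owning* raw pointers.
```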
56,965 | When is it okay to ship a product with a known bug? | It has to always be OK, because there is no such thing as bugless software. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/56965",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/19864/"
]
} |
57,047 | At my current job we have Low, Medium, and High priority bugs. Low priority bugs are small errors that don't stop shipping or cause real trouble for any user. Medium priority bugs cause some internal users trouble but have known workarounds. High priority bugs are problems that our customers will see, can corrupt data, or crash a system. How should we classify bug severity to complement our priority classification? | We classify our bugs and defects according to both their priority and severity. The priority level is an indication as to how urgent it is to fix/correct the problem (urgent, high, medium, low, none). The severity level helps us identify how much or what kind of damage can be caused by the defect (dangerous/destructive, degraded and no workaround, affected but workaround exists, nuisance/cosmetic, no impact). Typically, the more dangerous and destructive the bug is, the higher the priority. However, it is not guaranteed. Consequently, we can wind up with the occasional bug listed as dangerous and destructive, but due to the rarity of the situation, or the amount of change that may be required to fix it, its priority can in theory become quite low. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/57047",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/14392/"
]
} |
57,148 | We have many great tools which help a lot when programming, such as good programmers' text editors, IDEs, debuggers, version control systems, etc. Some of the tools are more or less "must have" tools for getting the job done (e.g. compilers). There are still always tools which do help a lot but don't get so much attention, for various reasons; for instance, they were ahead of their time when they were released and are now more or less forgotten. What type of programming tool do you think is the most underestimated one? Motivate your answer. | A rubber duck. Yes, really. http://en.wikipedia.org/wiki/Rubber_duck_debugging Rubber duck debugging, rubber ducking, and the rubber duckie test are informal terms used in software engineering to refer to a method of debugging code. The name is a reference to a likely apocryphal story in which an unnamed expert programmer would keep a rubber duck by his desk at all times, and debug his code by forcing himself to explain it, line-by-line, to the duck. To use this process, a programmer explains code to an inanimate object, such as a rubber duck, with the expectation that upon reaching a piece of incorrect code and trying to explain it, the programmer will notice the error. In describing what the code is supposed to do and observing what it actually does, any incongruity between these two becomes apparent... | {
"source": [
"https://softwareengineering.stackexchange.com/questions/57148",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12750/"
]
} |
57,217 | After six months of development on a project, our stakeholders have had a "gut check" and have decided that the path that we've been walking (a custom-designed application framework and data access layer) is holding us (the developers) back from quickly developing the features they would like to see. After several days of debate, management and the development team decided to scrap the current incarnation and start over using ASP.net MVC, with Entity Framework as the basis of a 'quick and dirty', let's-just-get-it-done project. In the days following, our senior developer, who has never worked with MVC or Entity Framework, has finally gotten into a sample project and done some work. His take on ASP.net MVC: "this is not software engineering". So my question is this: what do you do when someone doesn't think the code is complicated enough? | That's a ... problematic view. It's akin to someone saying, "I shall only use flint to start a fire," as you show them a lighter. The stakeholders are right: They want to see value. As programmers, we're there to provide value -- not reinvent the wheel so we can feel like we're 'roughing it'. They're paying your checks: If your complicated way isn't going fast enough, un-complicate it. There's enough complication in this world without developers adding more just to feel important. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/57217",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7428/"
]
} |
57,294 | Would it be acceptable for some very generic utilities or classes to be added in the System namespace? I'm thinking of really basic stuff like a generic EventArgs (EventArgs<T>). Use case: it would be shared in a company's core library (so that it can be recompiled in a new project as-is, without changing the namespace). | Not acceptable. Could you? Yes, but the System namespace is for framework base stuff. Even Microsoft does not put its own stuff into the System namespace (for example, the C# compiler APIs or the Registry). The System namespace is shared between Microsoft .NET, Mono, and DotGNU. I would recommend using your company name, and then putting the basic stuff in that namespace. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/57294",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6770/"
]
} |
57,435 | I am a software developer, and have been asked to define a bonus structure for myself by recommending the metrics that will determine my bonus. I think, once I have defined this bonus structure, there is a decent chance that it will end up being applied to other members of my department. So, for personal and professional reasons, I want to try to get this right :) I am wondering, what do you guys think are fair and accurate measurements of a software developer's performance? And, if you are a developer or manager of developers, what metrics does your company use to measure developer performance? | First, watch this. Then, think about how you can create a bonus structure based on the findings of the research in that clip. :) TL;DW version: Traditional carrot-and-stick reward structures only work on purely mechanical labour. They don't work for knowledge workers. Really, all you can do to motivate knowledge workers is give them interesting work and autonomy. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/57435",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/20051/"
]
} |
57,674 | Every once in a while on some technology websites a headline like this will pop up: http://www.osor.eu/news/nl-moving-to-open-source-would-save-government-one-to-four-billion My initial thought about government and organizations moving to open source software is that tons of programmers would lose their jobs and the industry would shrink. At the same time the proliferation and use of open source software seems to be greatly encouraged in many programming communities. Is my thinking that the full embrace of open source software everywhere will hurt the software industry a misconception? If it is not, then why do so many programmers love open source software? | Just because a project is open-source does not mean that programmers are not making a living off of it. Governments and companies donate large amounts of money to foundations like mozilla and apache. Also keep in mind, companies have to hire programmers to MODIFY the open source project to customize it for their business. Companies can't use off the shelf tools for everything. This is something that can't be done with closed-source software so it's an example of how you can open up new opportunities for programming. It's not about eliminating programmers or not paying them, it's about rearranging the structure to hopefully make things more efficient so we have more time for NEW projects. Another thing to realize about open source is that you don't necessarily have to reveal the source code of your program unless you're going to distribute the program. For programs that a company is going to use for itself in its servers or intracompany needs, it will probably NOT distribute and therefore not have to reveal the source code for the modified program. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/57674",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/10922/"
]
} |
57,698 | I came across the Ur/Web project during my search for web frameworks for Haskell-like languages. It looks like a very interesting project done by one person. Basically, it is a domain-specific purely functional language for web programming, taking the best of ML and Haskell. The syntax is ML, but there are type classes and monad from Haskell, and it's strictly evaluated. Server-side is compiled to native code, client to Javascript. See the slides and FAQ page for other advertised advantages. Looking at the demos and their source code , I think the project is very promising. The latest version is something 20110123, so it seems to be under active development at this time. Has anybody here had any further experience with it? Are there problems/annoyances compared to Haskell, apart from ML's slightly more verbose syntax? | I'm the author of Ur/Web. I just created this account and so don't have enough mojo to respond to other past responses. Ur/Web allows plugins to implement different web protocols, so, if you want to see some other protocol besides CGI, FastCGI, or HTTP, you may be able to implement it, or ask me to implement it. :) I genuinely haven't been aware to this point of any alternative folks were looking for. What does it mean for SQL programming to feel "bunched up"? Re: complaints about "Web 1.0" look, I think of that as a feature designed to save time for people who don't really want to be using this language. ;) There are no missing features that I'm aware of which prevent writing applications that look however you like, and I believe this is apparent once you grok the basic set-up of the language and libraries. Finally, ScantRoger, I'd love to hear about your experiences applying Ur/Web with a client! I don't know if it would be bad form to give my contact information here, but there's a link to my personal web site at the bottom of the Ur front page. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/57698",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/19588/"
]
} |
57,885 | I'm trying to gauge how popular and how used LINQPad is today. I'm just wondering if it's still a useful tool or not, as VS and other tools have gotten better. Furthermore, I am coding over LLBGen by working with LINQ to SQL. I see there is a plug-in for LLBGen and LINQPad. Still, I wonder if LINQPad is really worth it, what benefits it can give me, or if it's still highly suggested out there for ORMs, etc. | "I'm just wondering if it's still a useful tool" - ABSOLUTELY! Use it more days than not. A lot of times, I find trying a little snippet out in LinqPad quicker than reading a doc (e.g., today I wanted to know what exceptions would be thrown by a framework method under various inputs - LinqPad answered that very quickly). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/57885",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/13103/"
]
} |
58,038 | On one side, I just want to get a degree with a 3.0 GPA. On the other side, my parents want more than just a 3. Now here's the thing. I program with a passion. I spend day and night programming. And I ace all my programming courses. However, I do terrible on all my elective courses -- such as writing, history, and all that stuff -- which only leaves me with a 3.1 to 3.2 GPA. And my parents want more. They think that university is like high school, where you need super-stellar grades to get to the next level. But they don't realize that good enough grades will land me a job. And they don't realize that a programmer needs to practice to become good at programming, and that having good skills is what will land a job in a nice software development company. Thankfully, though, they don't threaten to beat me with a baseball bat or anything like that. They just occasionally give me the little "tsk-tsk". But even that little "tsk-tsk" makes me feel guilty for opening up an IDE. And on top of that, I procrastinate because of that feeling of guilt. So now, I want to come clean with them. I want to know what's a good way to do that. [Edit] OK, so now, I realized, I should aim for higher grades, as some have suggested below. | Ok, I was the same way. All I wanted to do was program, and I was doing ok in my other classes so I didn't care. However, to get your choice of jobs you should do as well as you can. If you have a specific field you want to get in to, they will be looking for the best students. Studying hard and getting good grades even in subjects that don't matter to your career shows diligence. This work ethic will translate into strong performance at a job, since you've learned to discipline yourself. Employers don't want people who float. They want employees who in the time they work for the company will work hard and get things done! The only indicator they have of this in new graduates really is their GPA. High CS grades and lower other grades tend to indicate that the person only works hard on things they like. An "average" IT programmer doesn't always get to do fun stuff. For example, I don't like to deal with databases, but my current job often requires me to hunt down discrepancies in the database. It's not a fun job; I'd much rather be bug hunting or coding new apps. But it has to be done, and done just as well as your favorite work! I'd encourage you to set your sights high. Do your best, and that will help ensure a great first job and set the tone for a solid career. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/58038",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17183/"
]
} |
58,174 | Every time I install VS (whichever version, going back years) it installs with the Solution Explorer on the right. Now, as most UIs have the navigation in a left-hand column (and at the top of the viewport) and the content to the right of this navigation, this always seems wrong to me. So I drag the Solution Explorer to the left of the screen and dock it there. But I've never seen another developer do this. Considering how most programmers usually like to customize their environment, adding their favourite text editor, browser, plug-ins, Greasemonkey scripts, etc., why do Visual Studio developers never seem to make this simple UI change? Does anyone else do this or am I just screaming in the dark? | User interface design is not a pure science, primarily because people's preferences are different. However, there are a few principles that we've learned over the years: The eye naturally gravitates to "power points": in art this is the golden ratio, and in photography it is simplified to "the rule of thirds". In essence, if you drew a grid on your screen three cells across and three cells down, the points where the lines intersect are the power points. These are very important real estate, and it also explains why the 1/3-2/3 split works so well. We've learned that there is an order of importance when we learn to read. In short, the most important column on the screen is the one that comes first in reading order. For us Western Hemisphere folks, that means the left (left-to-right reading order). For folks in the Middle East and some Far Eastern countries that means the right (right-to-left reading order). For other folks in other Far Eastern countries that means the top (top-to-bottom, typically right-to-left reading order). Using these two principles, we can organize the screen in a way that users can get the most out of it. The MS Visual Studio developers surmised that the source code is the most important element, and the other panels support that content. Now, if you have a preference to have the navigation on the left, it is because you place a different value on the importance of the navigation than the VS developers. Neither position is right or wrong. If you find yourself jumping from file to file often, it can be handy to have the navigation on the left. You'll notice that even on this site, the content is on the left and the navigation and support information is on the right. This echoes what the designers felt were the most important aspects of the site. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/58174",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6432/"
]
} |
58,186 | Nowadays we have a lot of programming aids that make work easier, including: IDEs Debuggers (line by line, breakpoints, etc) Ant scripts, etc for compiling Sites like StackOverflow to help if you're stuck on a programming problem 20 years ago, none of these things were around. Which tools did people use to program, and how did they make do without these newer tools? I'm interested in learning more about how programming was done back then. | 20 years ago... 1991... Let's see. I was using SunOS and VAX VMS. We wrote code using text editors (vi or edit). I -- personally -- don't use debuggers and never did. Some folks used the adb debugger on SunOS. I actually used it a few times to recover a stack traceback from a core dump file. I have no idea what was available on VAX VMS. I used print statements in the code. We used make for compiling. We read the paper documentation, thought and ran experiments. Indeed, that still works. Stack Overflow is overused by a few people who -- for inexplicable reasons -- refuse to run experiments or think. 30 years ago... 1981... Let's see. I was using Univac Exec 8 and IBM OS. We wrote code using text editors (I can't recall the Univac one, but the IBM one was the TSO environment's editor) I -- personally -- don't use debuggers and never did. Those machines were "mainframes" and could not be single-stepped through anything. There was no "debugger". You had to insert print statements in your code. We wrote scripts for compiling. We read the paper documentation, thought and ran experiments. 40 years ago... 1971... Let's see. I was using an IBM 1620 that had no OS. We wrote code using punched paper cards. Debugging meant single-stepping the processor. It was rarely helpful, so I learned to insert "print" statements in my code. We run the compiler by hand to produce a deck of punched paper cards which we then ran. "by hand" meant literally loading cards into a card reader to install the compiler or assembler. Then loading the source code into a card reader to produce object code. Then loading the resulting object code into the card reader to run the program. We read the paper documentation, thought and ran experiments. "Get Off My Lawn You Rotten Kids" IDEs. Almost useless. Code completion can be fun, but not as helpful as some folks claim. I've had folks tell me that VB is an acceptable language because of Visual Studio. Syntax coloring is perhaps the most useful feature ever invented. The rest should be optional add-ons, so we can dispense with them and free up memory and processor cycles. As crutches go, there are worse things to depend on. Debuggers. Useless. Except when the language definition is so bad that the semantics are so murky that you cannot understand what was supposed to happen. For example, VB. When a debugger is necessary, it's really time to get a better language. Based on my experience teaching programming, debuggers can be unhelpful. For some people, they lead to clouded thinking and a weird empirical style of programming where there's no semantic significance to the code -- no meaning -- just pure hackery. Ant scripts, etc for compiling. Incremental compilation and linking isn't really all that great an idea. With hyper-complex languages, it's a necessary hack, but really needs to be seen as a hack. It's not necessary or even desirable. A better language with less reliance on incremental compilation seems like a far, far better thing than sophisticated Ant scripts. Sites like Stackoverflow to help if you're too stuck on a bug. 
Sometimes helpful. As with debuggers, there's a possibility that some folks will appear to be successful through simple blundering luck. That's a bad thing. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/58186",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9803/"
]
} |
58,216 | On Stack Overflow, I see a lot of PHP code in questions and answers that have MySQL queries that are highly vulnerable to SQL injection attacks, despite basic workarounds being widely available for more than a decade. Is there a reason why these types of code snippets are still in use today? | I think it's mostly due to (a) ignorance and (b) laziness. Beginners usually don't know much about SQL injection, and even when they hear about it, they ignore it because it's so much simpler and easier to code that way. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/58216",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17853/"
]
} |
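The "basic workarounds" the question in 58,216 refers to are parameterized (prepared) queries, where user input is bound as data rather than spliced into the SQL string. The entry is about PHP and MySQL; the sketch below shows the same idea with SQLite's C API from C++ purely to keep it self-contained - the table, column, and function names are made up:

```cpp
#include <sqlite3.h>
#include <string>

// Returns how many rows in `users` have the given name, or -1 on error.
int count_users_named(sqlite3* db, const std::string& user_input) {
    // Injectable version (never do this):
    //   std::string bad = "SELECT COUNT(*) FROM users WHERE name = '" + user_input + "';";
    // Parameterized version: the input is bound as data, never parsed as SQL.
    const char* sql = "SELECT COUNT(*) FROM users WHERE name = ?1;";
    sqlite3_stmt* stmt = nullptr;
    if (sqlite3_prepare_v2(db, sql, -1, &stmt, nullptr) != SQLITE_OK) return -1;

    // Even if user_input is "x' OR '1'='1", it is treated purely as a string value.
    sqlite3_bind_text(stmt, 1, user_input.c_str(), -1, SQLITE_TRANSIENT);

    int count = -1;
    if (sqlite3_step(stmt) == SQLITE_ROW) count = sqlite3_column_int(stmt, 0);
    sqlite3_finalize(stmt);
    return count;
}
```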
58,237 | My boss keeps mentioning nonchalantly that bad programmers use break and continue in loops. I use them all the time because they make sense; let me show you the inspiration: function verify(object) {
if (object->value < 0) return false;
if (object->value > object->max_value) return false;
if (object->name == "") return false;
...
} The point here is that the function first checks that the conditions are correct, then executes the actual functionality. IMO the same applies to loops: while (primary_condition) {
if (loop_count > 1000) break;
if (time_exect > 3600) break;
if (this->data == "undefined") continue;
if (this->skip == true) continue;
...
} I think this makes it easier to read & debug, but I also don't see a downside. | When used at the start of a block, as the first checks made, they act like preconditions, so it's good. When used in the middle of the block, with some code around them, they act like hidden traps, so it's bad. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/58237",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8382/"
]
} |
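To make the answer's distinction in 58,237 concrete, here is a small C++ sketch contrasting guard-style break/continue at the top of a loop body with the same break buried mid-body; the functions and the summing logic are hypothetical examples, not taken from the entry:

```cpp
#include <iostream>
#include <vector>

// Guards up front, like the question's examples: every early exit is visible
// before any real work starts.
long sum_with_guards(const std::vector<int>& values, long limit) {
    long total = 0;
    for (int v : values) {
        if (total > limit) break;   // precondition-style guard, first thing in the body
        if (v < 0) continue;        // skip bad data before doing any work on it
        total += v;                 // the actual work
    }
    return total;
}

// The "hidden trap" the answer warns about: the same break, buried after part
// of the work has already been done for the current item.
long sum_with_hidden_break(const std::vector<int>& values, long limit) {
    long total = 0;
    for (int v : values) {
        std::cout << "processing " << v << '\n';   // side effect already happened...
        if (total > limit) break;                  // ...before this easy-to-miss exit
        total += v;
    }
    return total;
}
```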
58,303 | This question has been cooking in my head for a while so I wanted to ask those who are following agile/scrum practices in their development environments. My company has finally ventured into incorporating agile practices and has started out with a team of four developers in an agile group on a trial basis. It has been four months with three iterations, and they continue to do it without going fully agile for the rest of us. This is due to the fact that middle management is entrusted to meet business requirements with quite a bit of ad hoc requests from upper management. Recently, I talked to the developers who are part of this initiative; they tell me that it's not fun. They are not allowed to talk to other developers. Their Scrum master enforces this restriction. And they are not allowed to take any non-business-related phone calls in the work area. For example, if I want to talk to my friend, who is in the agile team, just for kicks -- I am not allowed without the approval of the Scrum master; who is sitting right next to the agile team. The idea of all this (or the agile, in general) is to provide a complete vacuum for agile developers from any interruptions, and to have them put in good 6+ productive hours. Well, guys, I am no agile guru but what I have read Yahoo agile rollout document and similar for other organizations, it gives me a feeling that agile is not cheap. It require resources and budget to instill agile into the teams and correct issue as they arrive to put them back on track. For starters, it requires training for developers and coaching for managers and etc, etc... The current Scrum master was a manager who took a couple days agile training class paid by the management is now leading this agile team. I have also heard in the meeting that agile manifesto doesn't dictate that agile is not set in stone, and is customized differently for each company. Well, it all sounds good and reasonable. In conclusion, I always thought the agile was supposed to bring harmony in the development teams which results in happy developers. However, I am getting the very opposite feeling when talking to the developers in the agile team. They are unhappy that they cannot talk about anything but work, sitting quietly all day just working, and they feel it's just another way for management to make them work more. Tell me please, if this is one of the examples of good practices used for the purpose of selfish advantage for more dollars? Or maybe it's just us, the developers like me and this agile team, which feels that they don't like to work in an environment where they only breathe work while they're at work. It's a company in the healthcare domain that has offices across the US. It definitely feels like a cowboy style agile, which makes me really not want to go for agile at all, especially at my current company. All of it has to do with the management being completely cheap. Cutting out expensive coffee for a cheaper version, emphasis on savings and being productive while staying as lean as possible. My feeling is that someone in the management behind closed doors threw out this idea: agile makes you produce more, so we can show our bosses we're producing more with the same headcount. Or, maybe, it will allow us to reduce headcount if that's the case. They are having their 5 min daily meeting. But not allowed to chat or talk with someone outside of their team. All focus is on work. | You're describing managerial dictatorship, not agile. 
Agile is about incremental development in a field of changing requirements, not about dictating to people how they individually go about doing their work. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/58303",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/20209/"
]
} |
58,501 | My employer recently posted an opening for a C# Developer with 3-5 years of experience. The requirements and expectations for the position were fair, up until the criteria for salary determination. It was stated clearly that compensation would depend ONLY on experience with C#, and that years of programming experience with other languages & frameworks would be considered irrelevant and not factored in. I brought up my concern with HR that good candidates would see this as a red flag and steer away. I attempted to explain that software development is about much more than specific languages, and that paying someone for their experience in a single language is a very shortsighted approach to hiring good developers (I'm telling this to the HR dept of a software company). The response: "We are tired of wasting time interviewing developers who expect 'big salaries' because they have lots of additional programming experience in languages other than what we require."
The #1 issue here is that 'big salaries' = Market Rate. After some serious discussion, they essentially admitted that nobody at the company is paid near market rate for their skills, and there's nothing that can be done about it. The C-suite has the mentality that employees should only be paid for skills proven over years under their watch. Entry-level developers are picked up for less than $38K and may reach 50K after 3 years, which I'm assuming is around what they plan on offering candidates for the C# position. Another interesting discovery (not as relevant) - people 'promoted' to higher responsibilities do not get raises. The 'promotion' is considered an adjustment of the individuals' roles to better suit their 'strengths', which is what they're already being paid for. After hearing these hard truths straight from HR, I would assume that most people who are looking out for themselves would quickly begin searching for a new employer that has a better idea of what they're doing in the industry (this company fails in many other ways, but I don't want to write a book). Here is my dilemma however: This is the first official software development position I've held, for barely 1 year now. My previous position of 3 years was with a very small company where I performed many duties, among them software development (not in my official job description, but I tried very hard to make it so). I've identified local openings that I'm currently qualified for, most paying at least 50% more than I'm getting now. Question is, is it too soon for a jump? I am getting valuable experience in my current position, with no shortage of exciting projects. The work environment is very comfortable, and I'm told by many that I'm in the spotlight of the C-level guys for the stuff that I've been able to accomplish during my short time (for what that's worth). However, there is a clear opportunity cost to staying, knowing now with certainty that I will have to wait 3-5 years only to be capped at what I could potentially be earning elsewhere this year. I am also aware that 'job hopper' is a dangerous label to have, regardless of the reasons. UPDATE: I've just accepted an offer at another company, paying significantly more and with even cooler projects. Thanks to all for the insightful responses. | Companies that don't value retention don't offer competitive compensation. They also tend to get what they deserve, as they tend to attract developers with fewer options. Sometimes that just means people with poor negotiating or people skills, but it often results in technological morasses because there's no one with a broader range of experience to use as a sounding board for design and implementation ideas. I stayed at the same company for 7 years, but, in the last few years, I've also moved around thanks in part to initially focusing on contract jobs and later thanks to economic challenges faced by my employers. I chose to leave my previous employer when it became clear that the company was collapsing, and I learned that it was actually a Ponzi scheme around the time I tendered my resignation. The short time at my previous couple of jobs raised some eyebrows in interviews, but you don't need to be negative in interviews when you're looking for something else. 
If you're asked why you're motivated to leave, I would hope it's because the other company offers a more interesting project/technology/challenge, opportunities to learn from more experienced/talented people and a more compelling compensation package; you can say any of those things without making you or your current employer look bad. If you're reasonably competent, you will have choices. You should always be open to exploring options for your next job, even if you're reasonably content in your current position, because it will give you perspective on your options for career development and it will allow you to have more control over your future, since you'll spend more time choosing your employers rather than the other way around. Anyway, your employer is wrong. Experience in "other languages" is only a small part of what's valuable in an experienced developer. Battle scars, experience building and maintaining complex systems in sustainable ways, and experience juggling the needs of the business and the technical debt are what make experienced developers valuable. My junior coworkers can churn out a lot of code in a short amount of time, but they often go and solve the wrong problem in an unmaintainable way. Ask me which is more valuable, and I'll tell you we need both senior engineers who can think in nuanced ways and optimistic junior developers that want to get new things built as quickly as possible even if we get some of it wrong the first time. But experience is valuable, because it keeps you from spending too much time generating technical debt. I know plenty of people with 3 years experience in C# that still produce crappy, unmaintainable, unidiomatic C# code, and I know a long-time Java developer that took about 6 weeks to start producing high quality C# code that took advantage of the language idioms and was loosely coupled thanks to a combination of experience, inquisitiveness and code review. If you have an employer that doesn't get that there's a difference, yes, it is a good idea to look for an opportunity for growth elsewhere. You should always want to be working with a company that sees further into the future than you do and hires smarter people than you. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/58501",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/19367/"
]
} |
58,630 | I know the title of the question is very subjective, but I was confronted by my peers over the usage of the ?? operator, while at the same time I was not very happy/comfortable with applying var in new, upcoming code. The argument given against using the ?? operator was that it takes away readability in code. My question is: doesn't the same thing happen when you start using var ? | Null coalescing operator (??) Personally, I don't see any downsides to using this operator. Consider the following three code samples, from 'easy' to 'complex' new operators. Without magic: bool isNameSet = false;
string name = null;
if ( isNameSet )
{
Console.WriteLine( name );
}
else
{
Console.WriteLine( "No name set." );
} Ternary operator: bool isNameSet = false;
string name = null;
Console.WriteLine( isNameSet ? name : "No name set." ); Null coalescing: string name = null;
Console.WriteLine( name ?? "No name set." ); The reason these operators were invented is that they represent very common programming operations . Not wanting to use them because you aren't used to them is just being stubborn . Languages evolve, features evolve, learn to use them! var keyword I have a somewhat different opinion about the var keyword. The type of a variable often gives extra information about your code. I find hiding the type by using the var keyword sometimes makes code less readable. You know less what to expect without using auto completion, or hovering over the identifiers to see what they actually are. In my opinion, this results in code which is slower to read/write. I do use the keyword when I find that the type doesn't give much extra information. Mainly in foreach loops , which I learned from Resharper as it is a setting there. Most of the time, you know what type of collection you are traversing, so you know you are expecting items from within that collection. LINQ queries . The results of LINQ queries are often very complex generic types. Displaying this type does more harm than good. Long typenames which are simply initialized with their constructor. You can already tell what the type is by looking at the constructor. As an example for the last statement: ThisIsSomeSpecializedTypeRightHere duplication =
new ThisIsSomeSpecializedTypeRightHere();
var justAsReadable =
new ThisIsSomeSpecializedTypeRightHere(); // Less duplication.
// But I still prefer the following ...
int number = 9;
SomeCreatedType foo = Factory.CreateSomeType();
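To make the foreach and LINQ cases concrete, here is a small added sketch (the Customer type, its Name and City properties, and the variable names are invented for this example, and it assumes using System, System.Collections.Generic and System.Linq):
class Customer { public string Name; public string City; }

var customers = new List<Customer>();          // the element type is already spelled out on the right

foreach ( var customer in customers )          // clearly a Customer; repeating the type adds nothing
{
    Console.WriteLine( customer.Name );
}

// The query result is an IEnumerable of an anonymous type,
// so there is no way to write the type out explicitly anyway.
var customersByCity = customers
    .GroupBy( c => c.City )
    .Select( g => new { City = g.Key, Count = g.Count() } );
In all three uses the compiler still knows the exact type; var only removes the redundant spelling of it. | {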
"source": [
"https://softwareengineering.stackexchange.com/questions/58630",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/14772/"
]
} |
58,688 | Which of these designs is better? What are the pros and cons of each? Which one would you use? Any other suggestions of how to deal with methods like this are appreciated. It is reasonable to assume that Draw() is the only place that the other draw methods are called from. This needs to expand to many more Draw* methods and Show* properties, not just the three shown here. public void Draw()
{
if (ShowAxis)
{
DrawAxis();
}
if (ShowLegend)
{
DrawLegend();
}
if (ShowPoints && Points.Count > 0)
{
DrawPoints();
}
}
private void DrawAxis()
{
// Draw things.
}
private void DrawLegend()
{
// Draw things.
}
private void DrawPoints()
{
// Draw things.
} Or public void Draw()
{
DrawAxis();
DrawLegend();
DrawPoints();
}
private void DrawAxis()
{
if (!ShowAxis)
{
return;
}
// Draw things.
}
private void DrawLegend()
{
if (!ShowLegend)
{
return;
}
// Draw things.
}
private void DrawPoints()
{
if (!ShowPoints || Points.Count <= 0)
{
return;
}
// Draw things.
} | I don't think you can have a blanket rule for this sort of thing; it depends on the situation. In this case, I would suggest having the if clauses outside the methods, because the names of the Draw methods imply that they just draw the things without any special conditions. If you find that you have to make the checks before calling the methods in many places, then you might want to put the check inside the methods and rename them to clarify that this is happening.
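As a small added sketch of that second option (the If-suffixed names are just one possible naming convention, not something from the question; only the axis method is shown, the other two would follow the same pattern):
public void Draw()
{
    DrawAxisIfVisible();
    DrawLegendIfVisible();
    DrawPointsIfVisible();
}

private void DrawAxisIfVisible()
{
    if (!ShowAxis)
    {
        return;
    }
    // Draw things.
}
With a name like DrawAxisIfVisible, the early return no longer surprises the caller, so the guard clause stops hurting readability. | {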
"source": [
"https://softwareengineering.stackexchange.com/questions/58688",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8675/"
]
} |
58,726 | Playing with icescrum , I realised that I do not understand the difference between user stories and user features. Can someone explain the difference? | A feature is a distinct element of functionality which can provide capabilities to the business. A story is a small aspect of a feature which you can use to get feedback from your stakeholders and find out if you're doing anything wrong. For instance, a feature might be "allow users to comment on articles". The stories associated with that feature might then be:
save comments
filter comments for rude words
limit comments to 400 characters and feed back to users
add captchas to stop bots spamming the site
allow users to log in via Google id
etc.
At each stage we can then get feedback as to whether the direction we're taking is useful. Some teams don't bother splitting features into stories. That's OK. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/58726",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/20065/"
]
} |
59,091 | I have a degree in computer science. It has been great for opening doors, getting a job. As far as helping me in the professional field of C# .NET programming (the most popular platform and language in the area I work if not the entire united states on hands down the most popular OS in the world) its hardly useful. Why do you think it helps you as a programmer in your professional career (outside spouting off to prims algorithm to impress some interviewer)? In today's world adaptation, a quick mind, strong communication, OO and fundamental design skills enable a developer to write software that a customer will accept. These skills are only skimmed over in the cs program. In my mind, reading a 500 page C# book by Wrox offers far more useable a skillset than 4 years of the comp sci math blaster courses. Many disagree. So, why does a computer science degree matter? | Why a computer science degree?: I worked with a developer who stored thousands of items in a HashTable and then only iterated through the values. He never accessed through a hash. He obviously didn't know how a HashTable worked or why you would use one - a CS degree might help with that. When working with regular expressions, it seems easier for people with exposure to basic automata theory and formal languages to reason about what's going on and troubleshoot their expressions - a CS degree might help with that. A developer fresh from school may be able to decompose problems in various paradigm mindsets (OO, functional, logical) immediately, while a new non-degree developer needs experience before they can do the same. Schools teach computational complexity. Non-degree developers may feel what's best but a formal understanding is sometimes nice, especially when explaining results to a colleague. A degree offers an introduction to many models of the machine - hardware, OS, common data structures, networking, VMs. With these models in the back of your mind, it's easier to develop a hunch where a problem lives when something goes wrong. Again, non-degree developers build the same models but it takes time. Expert guidance through any discipline may help the learner avoid dead-ends and missed topics. Reading is great but it's no substitute for a great teacher. This is not to say that a CS degree is necessary to be a great developer. Hardly. Some of the best developers I've worked with have no degree. A degree gives you a running start. By the time you graduate, you've (hopefully) written a fair amount of code in various languages and environments to solve many types of problems. This puts you well on your way to the 10,000 hours required to be an expert. A second benefit is that it shows employers you're able to commit to a long-term goal and succeed. In many companies, I believe that's more important than what you learned. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/59091",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/11107/"
]
} |
59,173 | Who wants to work in a fast-paced environment? Not me! I want a civilized environment where people have a sense of balance. Higher quality work gets done that way and work life isn't full of stress and anguish. | It's code for "We change our minds a lot about what we want from the software, and if we hire you we don't want you to complain about it. In fact, we expect you to put in a lot of overtime to implement our latest whim decision because we're fast paced . You've been warned." In programmer-speak, it means "we have no specs, unit tests, or for that matter, anyone still around who remembers why our software is the way it is." | {
"source": [
"https://softwareengineering.stackexchange.com/questions/59173",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/18957/"
]
} |
59,186 | We are currently hiring a junior developer to help me out, as I have more projects than I can currently manage. I have never hired anyone who wasn't a friend or at least an acquaintance. I have a phone interview with the only applicant that actually stood out to me (on paper), but I have never done this before. Our projects are all high scalability, data intensive web applications that process millions of transactions an hour, across multiple servers and clients. To be language/stack specific, we use ASP.Net MVC2, WebForms and C# 4, MSSQL 2008 R2, all running atop Windows Server 2008 R2 What should I ask him? How should I structure the phone call? | Ask about what tech blogs they read, ask what the applicant finds interesting in current tech and why. Essentially, for a phone interview you want to figure out if this is someone who is enthusiastic about technology and programming and is interested in learning and knowing more. Since this is a junior, you can't expect that they know many advanced topics, but you want to be sure they can think like a programmer - give them a simple problem and have them walk you through how they would solve it. It will give you insight into how they think and solve problems. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/59186",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/20402/"
]
} |
59,344 | I want to understand the correct use and implementation of hash tables in php (sorry). I read somewhere that an in-experienced programmer created a hash table and then iterated through it. Now, I understand why that is wrong but I haven't quite got the full knowledge to know if my understanding is correct (if you know what I mean). So could someone explain to me how to implement a hash table in php (presumably an associative array) and perhaps more importantly, how to access the values 'with a hash' and what that actually means? | Simple Hash Table Overview As a refresher, a hash table is a way to store a value under a specific key in a data structure. For instance, I could store value "a" under the key 1 , and then later retrieve it by looking up the key 1 in the hash table. The simplest example of a hash table that I can think of off the top of my head is a hash table that can only store integers, where the key for the hash table entry is also the value being stored. Let's say your table is of size 8, and it's basically an array in memory: ---------------------------------
| | | | | | | | |
---------------------------------
0 1 2 3 4 5 6 7 Hash Function Hash functions give you an index on where to store your value. A pretty simple hash function for this table would be to add 1 to the value you want to store, and then mod it by 8 (the table size). In other words, your hash function is (n+1)%8 , where n is the integer you want to store. Inserts If you want to insert a value into this hash table, you call your hash function (in this case (n+1)%8 ) on the value you want to insert to give you an index. For instance, if we want to insert 14, we would call (14 + 1) % 8 and get index 7 , so we'd insert it the value in index 7 . ---------------------------------
| | | | | | | |14 |
---------------------------------
0 1 2 3 4 5 6 7 Similarly, we can insert 33, 82, and 191 like so: ---------------------------------
|191| |33 |82 | | | |14 |
---------------------------------
0 1 2 3 4 5 6 7 Collisions But what happens if we try to insert something that would collide with an entry? 2 should go in index 3 , but it is taken by 82. There are multiple ways to solve this issue, the simplest is to call our hash function again and again repeatedly until we find an empty space. So the logic is as follows: (2+1)%8 = 3 Index 3 is full Plug 3 back into our hash function. ( 3 + 1) % 8 = 4 , which is empty. Place our value into index 4 . Now the hash table looks like this, with the value 2 stored at index 4 . ---------------------------------
|191| |33 |82 |2 | | |14 |
---------------------------------
0 1 2 3 4 5 6 7 The downside with this solution is that pretty soon, our table will get full! If you know that your data size is limited, this shouldn't be an issue as long as your table is large enough to hold all possible values. If you want to be able to hold more, you can handle collisions differently. Let's move back to where we were before inserting 2. ---------------------------------
|191| |33 |82 | | | |14 |
---------------------------------
0 1 2 3 4 5 6 7 If you recall, (2+1)%8 gives us index 3 , which is taken. If you don't want your hash table to fill up, you can use each table index as a linked-list, and append to the list at that index. So instead of calling the hash function again, we'll simply append to the list at index 3 : -----
| 2 |
---------------------------------
|191| |33 |82 | | | |14 |
---------------------------------
0 1 2 3 4 5 6 7 This list can then grow as much as memory will allow. I can insert 18, and it will just be appended to 2: -----
|18 |
-----
| 2 |
---------------------------------
|191| |33 |82 | | | |14 |
---------------------------------
    0   1   2   3   4   5   6   7 Lookups Looking up values in your hash table is quick, given that your hash table is of a pretty large size. You simply call your hash function, and get the index. Let's say you want to see if 82 is in your table. The lookup function would call (82+1)%8 = 3 , and look at the item in index 3 , and return it for you. If you looked up 16, the lookup function would look in index 1 , and see that it does not exist. Lookups Need to Handle Collisions, too! If you try to look up the value 2, your hash table would have to use the same collision logic it used for storing the data as for retrieving the data. Depending on the way your hash table works, you would either hash the key over and over until you find the entry you are looking for (or find a blank space), or you would iterate through your linked list until you found the item (or got to the end of the list). Summary So, hash tables are a good way to store and access key-value pairs quickly. In this example we used the same key as the value, but in real-world hash tables the keys aren't so limited. Hash functions will work on the keys to generate an index, and then the key/value can be stored at that index. Hash tables aren't really meant to be iterated through, although it's possible to do so. As you can see, hash tables can have lots of blank spaces, and iterating through them would be a waste of time. Even if the hash table has logic for skipping blank space lookups in its iterator, you would be better suited using a data structure that is designed for iterators, like linked lists.
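To tie the diagrams together, here is a minimal added sketch of the same scheme in code. It is written in C# rather than PHP purely for concreteness (the class and member names are invented); a PHP version would use an ordinary array for the slots and the same (n + 1) % 8 arithmetic.
// Toy integer-only hash table: key == value, fixed size 8,
// collisions handled by feeding the index back into the hash
// until a free slot turns up, as in the walkthrough above.
class ToyHashTable
{
    private readonly int?[] slots = new int?[8];   // null means "empty"

    private static int Hash(int n) { return (n + 1) % 8; }

    public void Insert(int value)
    {
        int index = Hash(value);
        for (int probes = 0; probes < slots.Length; probes++)
        {
            if (slots[index] == null) { slots[index] = value; return; }
            index = Hash(index);                   // collision: re-hash the index
        }
        throw new System.InvalidOperationException("Table is full.");
    }

    public bool Contains(int value)
    {
        int index = Hash(value);
        for (int probes = 0; probes < slots.Length; probes++)
        {
            if (slots[index] == null) return false;    // hit an empty slot: value was never stored
            if (slots[index] == value) return true;
            index = Hash(index);                       // keep probing, same path as Insert
        }
        return false;
    }
}
Inserting 14, 33, 82, 191 and then 2 into this table reproduces the diagrams above, and Contains(2) walks exactly the same probe sequence that Insert did. | {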
"source": [
"https://softwareengineering.stackexchange.com/questions/59344",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/18519/"
]
} |
59,387 | One can often hear that OOP naturally corresponds to the way people think about the world. But I would strongly disagree with this statement: We (or at least I) conceptualize the world in terms of relationships between things we encounter, but the focus of OOP is designing individual classes and their hierarchies. Note that, in everyday life, relationships and actions exist mostly between objects that would have been instances of unrelated classes in OOP. Examples of such relationships are: "my screen is on top of the table"; "I (a human being) am sitting on a chair"; "a car is on the road"; "I am typing on the keyboard"; "the coffee machine boils water", "the text is shown in the terminal window." We think in terms of bivalent (sometimes trivalent, as, for example in, "I gave you flowers") verbs where the verb is the action (relation) that operates on two objects to produce some result/action. The focus is on action, and the two (or three) [grammatical] objects have equal importance. Contrast that with OOP where you first have to find one object (noun) and tell it to perform some action on another object. The way of thinking is shifted from actions/verbs operating on nouns to nouns operating on nouns -- it is as if everything is being said in passive or reflexive voice, e.g., "the text is being shown by the terminal window". Or maybe "the text draws itself on the terminal window". Not only is the focus shifted to nouns, but one of the nouns (let's call it grammatical subject) is given higher "importance" than the other (grammatical object). Thus one must decide whether one will say terminalWindow.show(someText) or someText.show(terminalWindow). But why burden people with such trivial decisions with no operational consequences when one really means show(terminalWindow, someText)? [Consequences are operationally insignificant -- in both cases the text is shown on the terminal window -- but can be very serious in the design of class hierarchies and a "wrong" choice can lead to convoluted and hard to maintain code.] I would therefore argue that the mainstream way of doing OOP (class-based, single-dispatch) is hard because it IS UNNATURAL and does not correspond to how humans think about the world. Generic methods from CLOS are closer to my way of thinking, but, alas, this is not widespread approach. Given these problems, how/why did it happen that the currently mainstream way of doing OOP became so popular? And what, if anything, can be done to dethrone it? | OOP is unnatural for some problems. So's procedural. So's functional. I think OOP has two problems that really make it seem hard. Some people act like it's the One True Way to program and all other paradigms are wrong. IMHO everyone should use a multiparadigm language and chose the best paradigm for the subproblem they're currently working on. Some parts of your code will have an OO style. Some will be functional. Some will have a straight procedural style. With experience it becomes obvious what paradigm is best for what: a. OO is generally best for when you have behaviors that are strongly coupled to the state they operate on, and the exact nature of the state is an implementation detail, but the existence of it cannot easily be abstracted away. Example: Collections classes. b. Procedural is best for when you have a bunch of behaviors that are not strongly coupled to any particular data. For example, maybe they operate on primitive data types. It's easiest to think of the behavior and the data as separate entities here. 
Example: Numerics code. c. Functional is best when you have something that's fairly easy to write declaratively such that the existence of any state at all is an implementation detail that can be easily abstracted away. Example: Map/Reduce parallelism, sketched below. OOP generally shows its usefulness on large projects where having well-encapsulated pieces of code is really necessary. This doesn't happen too much in beginner projects.
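As a rough added illustration of that third case (this snippet is not from the original answer; it is C# with using System.Linq assumed, and the data is made up): a word count written declaratively, with no visible loop state, which is what leaves the runtime free to parallelise the work, for example via .AsParallel().
string[] lines = { "the quick brown fox", "the lazy dog" };

var wordCounts = lines
    .SelectMany( line => line.Split(' ') )              // map each line to its words
    .GroupBy( word => word )                            // group (shuffle) by word
    .ToDictionary( g => g.Key, g => g.Count() );        // reduce each group to a count
The imperative version of the same computation would expose a mutable dictionary and an explicit loop, exactly the kind of state the declarative form abstracts away. | {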
"source": [
"https://softwareengineering.stackexchange.com/questions/59387",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6564/"
]
} |
59,520 | I'm looking for some small programming projects that I can give potential employees to gauge their programming abilities. These will be programmers straight out of college. I'm looking for projects that would take someone a couple of hours and they would email back their answers post-interview. One example would be to take this paragraph of text and return a list of alphabetized unique words. After each word, tell me how many times the word appeared and in which sentence(s) the word appeared. Anyone have any good suggestions? | I've long-since concluded that nothing someone can do in a short time can tell me anything useful about that person. But every good candidate has personal projects already written which can tell you a lot. So I've replaced specific challenges with "give me a piece of code which you're proud of and happy to stamp your name to." Their choice of project tells you more than any hour-long task. And then you can spend an hour discussing it to learn even more. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/59520",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4759/"
]
} |
59,606 | I began coding in in Python primarily where there is no type safety, then moved to C# and Java where there is. I found that I could work a bit more quickly and with less headaches in Python, but then again, my C# and Java apps are at much higher level of complexity so I have never given Python a true stress test I suppose. The Java and C# camps make it sound like without the type safety in place, most people would be running into all sorts of horrible bugs left an right and it would be more trouble than its worth. This is not a language comparison, so please do not address issues like compiled vs interpreted. Is type safety worth the hit to speed of development and flexibilty? WHY? to the people who wanted an example of the opinion that dynamic typing is faster: "Use a dynamically typed language during development. It gives you faster feedback, turn-around time, and development speed." - http://blog.jayway.com/2010/04/14/static-typing-is-the-root-of-all-evil/ | It's sort of a myth that programmers don't have to worry about types in dynamically typed languages. In dynamically typed languages: You still have to know if you're working with an array, an integer, a string, a hash table, a function reference, a dictionary, an object, or whatever. If it's an object, you have to know what class it belongs to. Assigning one of these types to a variable or function parameter expected to be another type is almost always an error. At a lower level, things like number of bits or signed versus unsigned frequently still must be accounted for if you are populating a TCP packet, for example. You can run into problems where you get a zero where you really wanted an empty string. In other words, you're still debugging type mismatch bugs. The only real difference is the compiler isn't catching the errors. I'd argue that you aren't even saving much typing - , because you tend to want to document in comments what type your function parameters are instead of documenting it in your code. This is why doxygen-style comment blocks are much more popular in practice throughout dynamically typed code, where in statically typed languages you mostly only see them for libraries. That's not to say that programming in dynamically typed languages doesn't feel more pleasant because the compiler isn't always on your back, and experienced programmers don't tend to have difficulty finding and correcting the kind of bugs that static typing would catch anyway, but that's a completely separate issue from an alleged increase in efficiency or reduction in bug rate, for which dynamic typing is at best even with static typing. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/59606",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3792/"
]
} |
59,810 | While writing code, or during design, do you try to generalize the problem at the first instance itself, or do you try to solve that very specific problem? I am asking this because trying to generalize the problem tends to complicate things (which may not be necessary), and on the other hand it will be very difficult to extend the specific solution if there is a change in the requirements. I guess the solution is to find the middle path, which is easier said than done. How do you tackle this type of problem? If you start generalizing, at what point do you know that this much generalization is sufficient? | Too often when you try to design for the future, your predictions about future needs turn out to be wrong. It's usually better to refactor when you actually know how the needs have changed than to overdesign your system on day one. At the same time, don't shoot yourself in the foot, either. There's certainly a middle ground, and knowing where that is is more art than science. To boil it down to one rule of thumb: less is more. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/59810",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/123/"
]
} |
59,880 | I've read that I should avoid the postfix increment operator because of performance reasons (in certain cases). But doesn't this affect code readability? In my opinion: for(int i = 0; i < 42; i++);
/* i will never equal 42! */ Looks better than: for(int i = 0; i < 42; ++i);
/* i will never equal 42! */ But this is probably just out of habit. Admittedly, I haven't seen many use ++i . Is the performance that bad to sacrifice readability, in this case? Or am I just blind, and ++i is more readable than i++ ? | The facts: i++ and ++i are equally easy to read. You don't like one because you're not used to it, but there's essentially nothing you can misinterpret it as, so it's no more work to read or write. In at least some cases, the postfix operator will be less efficient. However, in 99.99% cases, it won't matter because (a) it'll be acting on a simple or primitive type anyway and it's only a problem if it's copying a big object (b) it won't be in a performance critical part of code (c) you don't know if the compiler will optimise it or not, it may do. Thus, I suggest using prefix unless you specifically need postfix is a good habit to get into, just because (a) it's a good habit to be precise with other things and (b) once in a blue moon you'll intend to use postfix and get it the wrong way round: if you always write what you mean, that's less likely. There is always a trade-off between performance and optimisation. You should use you common sense and not micro-optimise until you need to, but neither be flagrantly inefficient for the sake of it. Typically this means: first, rule out any code construction which is unacceptably inefficient even in non-time-critical code (normally something representing a fundamental conceptual error, like passing 500MB objects by value for no reason); and second, of every other way of writing the code, choose the clearest. However, here, I believe the answer is simple: I believe writing prefix unless you specifically need postfix is (a) very marginally clearer and (b) very marginally more likely to be more efficient, so you should always write that by default, but not worry about it if you forget. Six months ago, I thought the same as you, that i++ was more natural, but it's purely what you're used to. EDIT 1: Scott Meyers, in "More Effective C++" who I generally trust on this thing, says you should in general avoid using the postfix operator on user-defined types (because the only sane implementation of the postfix increment function is to make a copy of the object, call the prefix increment function to perform the increment, and return the copy, but copy operations can be expensive). So, we don't know whether there are any general rules about (a) whether that is true today, (b) whether it also applies (less so) to intrinsic types (c) whether you should be using "++" on anything more than a lightweight iterator class ever. But for all the reasons I described above, it doesn't matter, do what I said before. EDIT 2: This refers to general practice. If you think it DOES matter in some specific instance, then you should profile it and see. Profiling is easy and cheap and works. Deducing from first principles what needs to be optimized is hard and expensive and doesn't work. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/59880",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6440/"
]
} |
59,928 | From reading the descriptions, I understand that in TDD tests are done prior to writing the function and in Unit Testing, it's done afterwards. Is this the main difference, or can the two terms not even be compared as such? Perhaps Unit Testing is an integral part of TDD. | Unit Testing refers to what you are testing, TDD to when you are testing. The two are orthogonal. Unit Testing means, well, testing individual units of behavior. An individual unit of behavior is the smallest possible unit of behavior that can be individually tested in isolation. (I know that those two definitions are circular, but they seem to work out quite well in practice.) You can write unit tests before you write your code, after you write your code or while you write your code. TDD means (again, kind of obvious) letting your tests drive your development (and your design). You can do that with unit tests, functional tests and acceptance tests. Usually, you use all three. The most important part of TDD is the middle D . You let the tests drive you. The tests tell you what to do, what to do next, when you are done. They tell you what the API is going to be, what the design is. (This is important: TDD is not about writing tests first. There are plenty of projects that write tests first but don't practice TDD. Writing tests first is simply a prerequisite for being able to let the tests drive the development.)
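For concreteness, here is roughly what "the tests tell you what the API is going to be" can look like. This added sketch is not from the original answer; it uses xUnit-style C#, and BankAccount, Deposit and Balance are invented names.
public class BankAccountTests
{
    [Xunit.Fact]
    public void Deposit_IncreasesBalance()
    {
        var account = new BankAccount();   // written before BankAccount existed: this line is a design decision
        account.Deposit(100m);             // so are the method name and its parameter
        Xunit.Assert.Equal(100m, account.Balance);
    }
}

// The simplest implementation the test demanded into existence:
public class BankAccount
{
    public decimal Balance { get; private set; }
    public void Deposit(decimal amount) { Balance += amount; }
}
Whether this counts as TDD depends on what drove what: if the failing test is what told you to create BankAccount, add Deposit and expose Balance, the tests are driving the development and the design; if the test was only written down after the class had already been designed, it is plain test-first unit testing. | {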
"source": [
"https://softwareengineering.stackexchange.com/questions/59928",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1560/"
]
} |
60,012 | I am starting to learn Scheme by the SICP videos, and I would like to move to Common Lisp next. The language seems very interesting, and most of the people writings books on it advocate that it has unequaled expressive power. CL seems to have a decent standard library. Why is not Lisp more widespread? If it is really that powerful, people should be using it all over, but instead it is nearly impossible to find, say, Lisp job advertisements. I hope it is not just the parenthesis, as they are not a great problem after a little while. | Expressiveness isn't always a positive language trait in a corporate environment. Java is extremely popular partly because its easy to learn, easy to write, and easy to read. Mediocre programmers can still be very productive in Java, even if their code is wordy and inelegant. Furthermore, it's easy to abuse expressive languages. An expert java programmer can refactor poorly written code quickly. The more expressive the language, the more difficult understanding and refactoring horrible code becomes. LISP macros are a good example. Macros are powerful tools in the right hands. In the wrong hands, they can result in confusing and hard to debug code. LISP is a risky choice for senior management. If things go wrong, no one is going to blame management for picking a popular object oriented language backed by a major corporation like Oracle or Microsoft. Its much easier to hire programmers with experience in popular, easy to learn languages. Even progressive companies willing to use a more powerful language usually don't choose LISP. This is because many of the newer languages try and compromise by borrowing powerful features from LISP, while staying easy to learn for the masses. Scala and Ruby follow this model. Bad programmers can pick them up quickly and keep writing the same mediocre code that they did in Java. Good programmers can take advantage of the more advanced features to write beautiful code. Parentheses are not the problem. Haskell is an incredibly powerful and expressive language with a syntax similar to Python or Ruby and it hasn't been widely adopted for many of the same reasons as LISP. Despite all this, I am hoping... Clojure has a chance of becoming popular. It runs on the JVM, has great interop with Java, and makes concurrent programming much simpler. These are all
important things to many companies. This is my perspective as a professional JVM programmer with experience in Java, Clojure, JRuby, and Scala. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/60012",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/15072/"
]
} |
60,028 | There seems to be a long-held belief (mainly by non-lispers) that lisp is better than most languages at AI. Where did this belief originate? And is there any basis in fact to it? | One of the key differences between LISP-like languages and other languages is that in LISPs, code and data are the same thing. This makes it possible to do things such as have a program modify some of its algorithms during runtime as it "learns" new things, as a native part of the language. Another aspect that goes into this, though not as much, is LISP's ability to easily add new language semantics through macros. This makes it possible to actually go in and define a DSL that your AI works with and can evolve in, with the potential for that language to grow, self-correct, and evolve while the AI is running. Agreeing with Quadrescence, LISP's history of use goes a long way towards LISP's image that it's good for AI. Why is LISP used for AI covers the history in much more detail. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/60028",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2448/"
]
} |
60,097 | I'm a student working on my B.E. (CS) and my question is the following: Is testing in the software field needed? If we create software with great care, then why should we test? After testing, can we be sure that we have achieved this goal (the product/software works as intended) because we have done the testing for it? Is it possible? My question: Is testing of software needed? | Yes. Because no matter how good you are, you can't think of everything. You will also be asked to make your software do things that you never intended for it to do. You will also never have requirements that are so clear that you will be able to think of every possibility to make sure the code doesn't break. You will also work with other people's software and APIs that will not always work as intended, but you will assume that they do, or should, leading to defects in your "perfect" case. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/60097",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/19216/"
]
} |
60,324 | Does anyone have any tips, thoughts, warnings, or general wisdom for an application/database developer who is moving specifically from a start-up sized company to a large organization? Example thoughts would include things such as: How might I interact differently with
the management chain? Do you see trends in quality or speed of development that differ
between large and small? Thoughts on team development. Social Aspects. Anything else. Addition: Does anyone have any personal stories and experiences to share with a similar move? Please let me know if I can clarify in any way. I appreciate any thoughts! | A few personal experiences to share: Before the move: Don't trust all the great promises. As they are looking for talent, they will show you all the good sides and hide the bad facts. If the position is that good, why was it not filled before me? :-) A business is a business; the only objective is to make a profit. Think about whether bringing you onboard adds value to that objective. You are invited because they think you bring added value. Will you? Assuming you are a programmer, big companies usually come with complexity other than technical challenges, e.g. politics, communication skills, regulations, ... Are you ready? After the Move: Try to identify the KPI of your functional group (department) as early as possible. To put it simply: why is this big company willing to pay money for this group of people to do this stuff? Position yourself as a contributing factor in that answer (if you found one). Don't fight the borg. You are not going to win. You are paid to comply. Making good stuff and doing a good job is usually not the most difficult part. When things go well: Improve stuff bit by bit, don't sit and complain. Don't be afraid to take the hard tasks. You are less likely to be removed if you are in a key role. Use resources as if they were the last drop of water on earth. Think again and again about whether a managerial role is good for you and your future career path. Not too many engineers are good managers. When things go wrong: Remember you have at least a month (of time or money ;-) so don't panic. Again, don't fight. If they could change their mind, they already would have. No matter what, sh_ts happen. It's not about right or wrong, it's about whether it's a match or not. The world is bigger than one company. Opportunities are for those who are ready to take them. Cheers! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/60324",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/20908/"
]
} |
60,372 | When I was studying in the university I often heard the idea that Fortran compilers produced faster code than C compilers for an equivalent program. The key reasoning went like this: a Fortran compiler emits on average 1,1 processor instruction per line of code, while a C compiler emits on average 1,6 processor instruction per line of code - I don't remember the exact numbers but the idea was that C compilers emitted noticeably more machine code and therefore produced slower programs. How valid is such comparison? Can we say that Fortran compilers produce faster programs than C compilers or vice versa and why does this difference exist? | IIRC one of the main reasons why Fortran is said to be faster is the absence of pointer aliasing , so they can use optimizations that C compilers can't use: In FORTRAN, function arguments may not alias each other, and the compiler assumes they do not. This enables excellent optimization, and is one major reason for FORTRAN's reputation as a fast language. (Note that aliasing may still occur within a FORTRAN function. For instance, if A is an array and i and j are indices which happen to have the same value, then A[i] and A[j] are two different names for the same memory location. Fortunately, since the base array must have the same name, index analysis can be done to determine cases where A[i] and A[j] cannot alias.) But I agree with others here: Comparing the average number of assembler instructions generated for a line of code is complete nonsense. For instance a modern x86 core can execute two instructions in parallel if they don't access the same registers. So you can (in theory) gain a performance increase of 100% for the same set on instructions just by reordering them . Good compilers will also often generate more assembly instructions to get faster code (think loop unrolling, inlining). The total number of assembler instructions says very little about the performance of a piece of code. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/60372",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/587/"
]
} |
60,374 | Is functional programming so related to mathematics because much of the functional programming is depicted with mathematical notions? Is it a MUST to have a strong base of
maths to learn & understand functional programming for a programmer with a imperative background? | All programming is related to mathematics. Indeed many universities still place their computer science programs under the purview of the mathematics department. As for learning functional programming, you do not need to have a strong base in mathematics to learn it. I've learnt three different functional languages now to reasonable proficiency (Haskell, Erlang, Clojure) and my own maths skills are extremely weak. Haskell's community can, indeed, get a little bit annoying in its maths-focused way of speaking about things, but Erlang and Clojure both are very pragmatic functional programming languages that are not that hard to pick up because the tutorial information is written, seemingly, for programmers, not hard-core maths geeks. That being said, despite my handicap in maths I did pick up Haskell, so it's not impossible. The real difficulty I've found in picking up declarative programming languages in general (of which functional is a subset) is giving up that urge to be in control; to tell the computer what to do. It takes some getting used to. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/60374",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8538/"
]
} |
60,544 | Is it that DirectX is easier or better than OpenGL, even if OpenGL is cross-platform? Why do we not see real powerful games for Linux like there are for Windows? | Many of the answers here are really, really good. But the OpenGL and Direct3D (D3D) issue should probably be addressed. And that requires... a history lesson. And before we begin, I know far more about OpenGL than I do about Direct3D . I've never written a line of D3D code in my life, and I've written tutorials on OpenGL. So what I'm about to say isn't a question of bias. It is simply a matter of history. Birth of Conflict One day, sometime in the early 90's, Microsoft looked around. They saw the SNES and Sega Genesis being awesome, running lots of action games and such. And they saw DOS . Developers coded DOS games like console games: direct to the metal. Unlike consoles however, where a developer who made an SNES game knew what hardware the user would have, DOS developers had to write for multiple possible configurations. And this is rather harder than it sounds. And Microsoft had a bigger problem: Windows. See, Windows wanted to own the hardware, unlike DOS which pretty much let developers do whatever. Owning the hardware is necessary in order to have cooperation between applications. Cooperation is exactly what game developers hate because it takes up precious hardware resources they could be using to be awesome. In order to promote game development on Windows, Microsoft needed a uniform API that was low-level, ran on Windows without being slowed down by it, and most of all cross-hardware . A single API for all graphics, sound, and input hardware. Thus, DirectX was born. 3D accelerators were born a few months later. And Microsoft ran into a spot of trouble. See, DirectDraw, the graphics component of DirectX, only dealt with 2D graphics: allocating graphics memory and doing bit-blits between different allocated sections of memory. So Microsoft purchased a bit of middleware and fashioned it into Direct3D Version 3. It was universally reviled. And with good reason; looking at D3D v3 code is like staring into the Ark of the Covenant. Old John Carmack at Id Software took one look at that trash and said, "Screw that!" and decided to write towards another API: OpenGL. See, another part of the many-headed-beast that is Microsoft had been busy working with SGI on an OpenGL implementation for Windows. The idea here was to court developers of typical GL applications: workstation apps. CAD tools, modelling, that sort of thing. Games were the farthest thing on their mind. This was primarily a Windows NT thing, but Microsoft decided to add it to Win95 too. As a way to entice workstation developers to Windows, Microsoft decided to try to bribe them with access to these newfangled 3D graphics cards. Microsoft implemented the Installable Client Driver protocol: a graphics card maker could override Microsoft's software OpenGL implementation with a hardware-based one. Code could automatically just use a hardware OpenGL implementation if one was available. In the early days, consumer-level videocards did not have support for OpenGL though. That didn't stop Carmack from just porting Quake to OpenGL (GLQuake) on his SGI workstation. As we can read from the GLQuake readme: Theoretically, glquake will run on any compliant OpenGL that supports the
texture objects extensions, but unless it is very powerfull hardware that
accelerates everything needed, the game play will not be acceptable. If it
has to go through any software emulation paths, the performance will likely
be well under one frame per second. At this time (march '97), the only standard opengl hardware that can play
glquake reasonably is an intergraph realizm, which is a VERY expensive card.
3dlabs has been improving their performance significantly, but with the
available drivers it still isn't good enough to play. Some of the current
3dlabs drivers for glint and permedia boards can also crash NT when exiting
from a full screen run, so I don't recommend running glquake on 3dlabs
hardware. 3dfx has provided an opengl32.dll that implements everything glquake needs,
but it is not a full opengl implementation. Other opengl applications are
very unlikely to work with it, so consider it basically a 'glquake driver'. This was the birth of the miniGL drivers. These evolved into full OpenGL implementations eventually, as hardware became powerful enough to implement most OpenGL functionality in hardware. nVidia was the first to offer a full OpenGL implementation. Many other vendors struggled, which is one reason why developers preferred Direct3D: they were compatible on a wider range of hardware.
Eventually only nVidia and ATI (now AMD) remained, and both had a good OpenGL implementation. OpenGL Ascendant Thus the stage is set: Direct3D vs. OpenGL. It's really an amazing story, considering how bad D3D v3 was. The OpenGL Architectural Review Board (ARB) is the organization responsible for maintaining OpenGL. They issue a number of extensions, maintain the extension repository, and create new versions of the API. The ARB is a committee made of many of the graphics industry players, as well as some OS makers. Apple and Microsoft have at various times been a member of the ARB. 3Dfx comes out with the Voodoo2. This is the first hardware that can do multitexturing, which is something that OpenGL couldn't do before. While 3Dfx was strongly against OpenGL, NVIDIA, makers of the next multitexturing graphics chip (the TNT1), loved it. So the ARB issued an extension: GL_ARB_multitexture, which would allow access to multitexturing. Meanwhile, Direct3D v5 comes out. Now, D3D has become an actual API , rather than something a cat might vomit up. The problem? No multitexturing. Oops. Now, that one wouldn't hurt nearly as much as it should have, because people didn't use multitexturing much. Not directly. Multitexturing hurt performance quite a bit, and in many cases it wasn't worth it compared to multi-passing. And of course, game developers love to ensure that their games works on older hardware, which didn't have multitexturing, so many games shipped without it. D3D was thus given a reprieve. Time passes and NVIDIA deploys the GeForce 256 (not GeForce GT-250; the very first GeForce), pretty much ending competition in graphics cards for the next two years. The main selling point is the ability to do vertex transform and lighting (T&L) in hardware. Not only that, NVIDIA loved OpenGL so much that their T&L engine effectively was OpenGL. Almost literally; as I understand, some of their registers actually took OpenGL enumerators directly as values. Direct3D v6 comes out. Multitexture at last but... no hardware T&L. OpenGL had always had a T&L pipeline, even though before the 256 it was implemented in software. So it was very easy for NVIDIA to just convert their software implementation to a hardware solution. It wouldn't be until D3D v7 until D3D finally had hardware T&L support. Dawn of Shaders, Twilight of OpenGL Then, GeForce 3 came out. And a lot of things happened at the same time. Microsoft had decided that they weren't going to be late again. So instead of looking at what NVIDIA was doing and then copying it after the fact, they took the astonishing position of going to them and talking to them. And then they fell in love and had a little console together. A messy divorce ensued later. But that's for another time. What this meant for the PC was that GeForce 3 came out simultaneously with D3D v8. And it's not hard to see how GeForce 3 influenced D3D 8's shaders. The pixel shaders of Shader Model 1.0 were extremely specific to NVIDIA's hardware. There was no attempt made whatsoever at abstracting NVIDIA's hardware; SM 1.0 was just whatever the GeForce 3 did. When ATI started to jump into the performance graphics card race with the Radeon 8500, there was a problem. The 8500's pixel processing pipeline was more powerful than NVIDIA's stuff. So Microsoft issued Shader Model 1.1, which basically was "Whatever the 8500 does." That may sound like a failure on D3D's part. But failure and success are matters of degrees. And epic failure was happening in OpenGL-land. 
NVIDIA loved OpenGL, so when GeForce 3 hit, they released a slew of OpenGL extensions. Proprietary OpenGL extensions: NVIDIA-only. Naturally, when the 8500 showed up, it couldn't use any of them. See, at least in D3D 8 land, you could run your SM 1.0 shaders on ATI hardware. Sure, you had to write new shaders to take advantage of the 8500's coolness, but at least your code worked . In order to have shaders of any kind on Radeon 8500 in OpenGL, ATI had to write a number of OpenGL extensions. Proprietary OpenGL extensions: ATI-only. So you needed an NVIDIA codepath and an ATI codepath, just to have shaders at all. Now, you might ask, "Where was the OpenGL ARB, whose job it was to keep OpenGL current?" Where many committees often end up: off being stupid. See, I mentioned ARB_multitexture above because it factors deeply into all of this. The ARB seemed (from an outsider's perspective) to want to avoid the idea of shaders altogether. They figured that if they slapped enough configurability onto the fixed-function pipeline, they could equal the ability of a shader pipeline. So the ARB released extension after extension. Every extension with the words "texture_env" in it was yet another attempt to patch this aging design. Check the registry: between ARB and EXT extensions, there were eight of these extensions made. Many were promoted to OpenGL core versions. Microsoft was a part of the ARB at this time; they left around the time D3D 9 hit. So it is entirely possible that they were working to sabotage OpenGL in some way. I personally doubt this theory for two reasons. One, they would have had to get help from other ARB members to do that, since each member only gets one vote. And most importantly two, the ARB didn't need Microsoft's help to screw things up. We'll see further evidence of that. Eventually the ARB, likely under threat from both ATI and NVIDIA (both active members) eventually pulled their head out long enough to provide actual assembly-style shaders. Want something even stupider? Hardware T&L. Something OpenGL had first . Well, it's interesting. To get the maximum possible performance from hardware T&L, you need to store your vertex data on the GPU. After all, it's the GPU that actually wants to use your vertex data. In D3D v7, Microsoft introduced the concept of Vertex Buffers. These are allocated swaths of GPU memory for storing vertex data. Want to know when OpenGL got their equivalent of this? Oh, NVIDIA, being a lover of all things OpenGL (so long as they are proprietary NVIDIA extensions), released the vertex array range extension when the GeForce 256 first hit. But when did the ARB decide to provide similar functionality? Two years later . This was after they approved vertex and fragment shaders (pixel in D3D language). That's how long it took the ARB to develop a cross-platform solution for storing vertex data in GPU memory. Again, something that hardware T&L needs to achieve maximum performance. One Language to Ruin Them All So, the OpenGL development environment was fractured for a time. No cross-hardware shaders, no cross-hardware GPU vertex storage, while D3D users enjoyed both. Could it get worse? You... you could say that. Enter 3D Labs . Who are they, you might ask? They are a defunct company whom I consider to be the true killers of OpenGL. Sure, the ARB's general ineptness made OpenGL vulnerable when it should have been owning D3D. But 3D Labs is perhaps the single biggest reason to my mind for OpenGL's current market state. 
What could they have possibly done to cause that? They designed the OpenGL Shading Language. See, 3D Labs was a dying company. Their expensive GPUs were being marginalized by NVIDIA's increasing pressure on the workstation market. And unlike NVIDIA, 3D Labs did not have any presence in the mainstream market; if NVIDIA won, they died. Which they did. So, in a bid to remain relevant in a world that didn't want their products, 3D Labs showed up to a Game Developer Conference wielding presentations for something they called "OpenGL 2.0". This would be a complete, from-scratch rewrite of the OpenGL API. And that makes sense; there was a lot of cruft in OpenGL's API at the time (note: that cruft still exists). Just look at how texture loading and binding work; it's semi-arcane. Part of their proposal was a shading language. Naturally. However, unlike the current cross-platform ARB extensions, their shading language was "high-level" (C is high-level for a shading language. Yes, really). Now, Microsoft was working on their own high-level shading language. Which they, in all of Microsoft's collective imagination, called... the High Level Shading Language (HLSL). But their was a fundamentally different approach to the languages. The biggest issue with 3D Labs's shader language was that it was built-in. See, HLSL was a language Microsoft defined. They released a compiler for it, and it generated Shader Model 2.0 (or later shader models) assembly code, which you would feed into D3D. In the D3D v9 days, HLSL was never touched by D3D directly. It was a nice abstraction, but it was purely optional. And a developer always had the opportunity to go behind the compiler and tweak the output for maximum performance. The 3D Labs language had none of that. You gave the driver the C-like language, and it produced a shader. End of story. Not an assembly shader, not something you feed something else. The actual OpenGL object representing a shader. What this meant is that OpenGL users were open to the vagaries of developers who were just getting the hang of compiling assembly-like languages. Compiler bugs ran rampant in the newly christened OpenGL Shading Language (GLSL). What's worse, if you managed to get a shader to compile on multiple platforms correctly (no mean feat), you were still subjected to the optimizers of the day. Which were not as optimal as they could be. While that was the biggest flaw in GLSL, it wasn't the only flaw. By far . In D3D, and in the older assembly languages in OpenGL, you could mix and match vertex and fragment (pixel) shaders. So long as they communicated with the same interface, you could use any vertex shader with any compatible fragment shader. And there were even levels of incompatibility they could accept; a vertex shader could write an output that the fragment shader didn't read. And so forth. GLSL didn't have any of that. Vertex and fragment shaders were fused together into what 3D Labs called a "program object". So if you wanted to share vertex and fragment programs, you had to build multiple program objects. And this caused the second biggest problem. See, 3D Labs thought they were being clever. They based GLSL's compilation model on C/C++. You take a .c or .cpp and compile it into an object file. Then you take one or more object files and link them into a program. So that's how GLSL compiles: you compile your shader (vertex or fragment) into a shader object. Then you put those shader objects in a program object, and link them together to form your actual program. 
While this did allow potential cool ideas like having "library" shaders that contained extra code that the main shaders could call, what it meant in practice was that shaders were compiled twice . Once in the compilation stage and once in the linking stage. NVIDIA's compiler in particular was known for basically running the compile twice. It didn't generate some kind of object code intermediary; it just compiled it once and threw away the answer, then compiled it again at link time. So even if you want to link your vertex shader to two different fragment shaders, you have to do a lot more compiling than in D3D. Especially since the compiling of a C-like language was all done offline , not at the beginning of the program's execution. There were other issues with GLSL. Perhaps it seems wrong to lay the blame on 3D Labs, since the ARB did eventually approve and incorporate the language (but nothing else of their "OpenGL 2.0" initiative). But it was their idea. And here's the really sad part: 3D Labs was right (mostly). GLSL is not a vector-based shading language the way HLSL was at the time. This was because 3D Labs's hardware was scalar hardware (similar to modern NVIDIA hardware), but they were ultimately right in the direction many hardware makers went with their hardware. They were right to go with a compile-online model for a "high-level" language. D3D even switched to that eventually. The problem was that 3D Labs were right at the wrong time . And in trying to summon the future too early, in trying to be future-proof, they cast aside the present . It sounds similar to how OpenGL always had the possibility for T&L functionality. Except that OpenGL's T&L pipeline was still useful before hardware T&L, while GLSL was a liability before the world caught up to it. GLSL is a good language now . But for the time? It was horrible. And OpenGL suffered for it. Falling Towards Apotheosis While I maintain that 3D Labs struck the fatal blow, it was the ARB itself who would drive the last nail in the coffin. This is a story you may have heard of. By the time of OpenGL 2.1, OpenGL was running into a problem. It had a lot of legacy cruft. The API wasn't easy to use anymore. There were 5 ways to do things, and no idea which was the fastest. You could "learn" OpenGL with simple tutorials, but you didn't really learn the OpenGL API that gave you real performance and graphical power. So the ARB decided to attempt another re-invention of OpenGL. This was similar to 3D Labs's "OpenGL 2.0", but better because the ARB was behind it. They called it "Longs Peak." What is so bad about taking some time to improve the API? This was bad because Microsoft had left themselves vulnerable. See, this was at the time of the Vista switchover. With Vista, Microsoft decided to institute some much-needed changes in display drivers. They forced drivers to submit to the OS for graphics memory virtualization and various other things. While one can debate the merits of this or whether it was actually possible, the fact remains this: Microsoft deemed D3D 10 to be Vista (and above) only. Even if you had hardware that was capable of D3D 10, you couldn't run D3D 10 applications without also running Vista. You might also remember that Vista... um, let's just say that it didn't work out well. So you had an underperforming OS, a new API that only ran on that OS, and a fresh generation of hardware that needed that API and OS to do anything more than be faster than the previous generation. 
However, developers could access D3D 10-class features via OpenGL. Well, they could if the ARB hadn't been busy working on Longs Peak. Basically, the ARB spent a good year and a half to two years worth of work to make the API better. By the time OpenGL 3.0 actually came out, Vista adoption was up, Win7 was around the corner to put Vista behind them, and most game developers didn't care about D3D-10 class features anyway. After all, D3D 10 hardware ran D3D 9 applications just fine. And with the rise of PC-to-console ports (or PC developers jumping ship to console development. Take your pick), developers didn't need D3D 10 class features. Now, if developers had access to those features earlier via OpenGL on WinXP machines, then OpenGL development might have received a much-needed shot in the arm. But the ARB missed their opportunity. And do you want to know the worst part? Despite spending two precious years attempting to rebuild the API from scratch... they still failed and just reverted back to the status quo (except for a deprecation mechanism). So not only did the ARB miss a crucial window of opportunity, they didn't even get done the task that made them miss that chance. Pretty much epic fail all around. And that's the tale of OpenGL vs. Direct3D. A tale of missed opportunities, gross stupidity, willful blindness, and simple foolishness. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/60544",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/20093/"
]
} |
60,545 | This is probably going to sound messed up, but here it goes. I've been working on a project for a client for a while now. I wasn't given any details except for "It has to be an XYZ plugin and interface with ABC product". Which was fine, but now we're towards the end (I think) and it's just dragging out. I don't have any time to spend on it and I'm already over schedule by 3 months. Trying to get the client to describe to me how he would like to be able to navigate the data (a UI issue) is just difficult. I've submitted mock ups on what I think he wants but his latest response is "you should look at XXX product", it has similar functionality. Of course, I looked at it and it looks similar to what I submitted, but I don't think that the way I've built the framework is going to support what he is now describing to me. We've had good communication throguh out the process but he doesn't know what he wants. I explained how I was going to build the framework and he agreed, so it isn't a bad choice on my part about design. When I go over what I think are finalized modules, he says, "You should have done it this way" which requires me to go back and rework code and UI. Some smaller items could have been better thought out by me, but the big things are how I interpreted his requirements and I've gone over this module several times during development. I've already received final funds last month so i'm working for free at this point. I no longer want to deal with this project. I've already received payment. I've done other successful projects with this client before and he has a lot of other projects he wants to do. What the heck should I do? I don't want to work on this project anymore. I don't want to ask for any more money (money isn't really the issue). I don't want to make him mad either. I know it looks like I want to have my cake and eat it too. If you think I should call it quits, how should I do it given the circumstances? | First, you need to get out of the mindset that you are now working for free, just because you've gotten what you believe is the final payment. You agreed to a price and were paid. If you had received all of the funds up front before even starting, would you have been doing the entire project for free? (BTW this is why I never work on fixed-price projects; I always insist on working by the hour.) If you can show that what the client has requested goes way beyond what you originally signed up for, then you could ask for more money, but as you indicated that doesn't seem to be the issue. It sounds like you are just tired of the project. Unfortunately that's not a good reason to quit. If you had a defined specification at the beginning, and have met that spec, then you could ethically walk away from the project but you most certainly will never get any more work from this client again. It would be better to finish up what the client wants, spending as little of your time as possible, and hope to do better next time. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/60545",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6127/"
]
} |
I've been interning at a place where my manager believes that if you are in a product company, then you generally spend time tweaking the product and sometimes adding some features, whereas if you are in a service company, then you keep doing repetitive things. This makes me feel the industry is no place for someone who likes to create new things and solve difficult problems. So is the industry not a place for a passionate programmer? Does this change from country to country? Update, to clear some things that can be understood differently than what they were meant: Tweaking here is making sure your product has tables with the number of rows and columns the client wants, etc. Customize it for the customer. New "feature" isn't new functionality here, just aesthetic-level changes. And it's only sometimes. I'm not sure what he meant by repetitive though. He was like, you have to make the UI again and again every time. (I see no repetition there though. If a different UI is needed then a different UI needs to be designed. If you can use the old one then you don't need to do much anyway.) | Your manager needs a shrink ;) Or you need to be aware of tiny frogs.
There once was a bunch of tiny frogs... who arranged a running competition. The goal was to reach the top of a very high tower. A big crowd had gathered around the tower to see the race and cheer on the contestants...
The race began...
Honestly: No one in the crowd really believed that the tiny frogs would reach the top of the tower. You heard statements such as: "Oh, WAY too difficult!! They will NEVER make it to the top." or: "Not a chance that they will succeed. The tower is too high!"
The tiny frogs began collapsing. One by one... except for those who in a fresh tempo were climbing higher and higher...
The crowd continued to yell "It is too difficult!!! No one will make it!"
More tiny frogs got tired and gave up... But ONE continued higher and higher and higher... This one wouldn't give up!
At the end everyone else had given up climbing the tower. Except for the one tiny frog who after a big effort was the only one who reached the top!
THEN all of the other tiny frogs naturally wanted to know how this one frog managed to do it. A contestant asked the tiny frog how the one who succeeded had found the strength to reach the goal.
It turned out... that the winner was DEAF!!!!
The wisdom of this story is: Never listen to other people's tendencies to be negative or pessimistic... 'cause they take your most wonderful dreams and wishes away from you. The ones you have in your heart! Always think of the power words have, because everything you hear and read will affect your actions! Therefore: ALWAYS be... POSITIVE!
And above all: Be DEAF when people tell YOU that YOU can not fulfil YOUR dreams! Always think: I can do this!
That version of this well-known story can be found here in its context. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/60679",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5940/"
]
} |
60,699 | I want to ask you whether adding some "easter eggs" in the source documentation is unprofessional or not. Probably you have read the StackOverflow poll for funny comments in a source documentation, and I have personally stumbled at many such things during my work, including funny (or not) stuff in public API documentation (for example this weak BZZZTT!!1! thing in Android public documentation, I can give at least a dozen more examples). I can't come to a final opinion for myself, because I have contradicting arguments by myself. Pro argument: It can cheer up somebody, and make his/her day funnier/more productive. Major portion of the source code doesn't need to be commented anyway (if the project is done properly), because the specific method (for example) is self explanatory, or if it is a pile of strange crappy code, it can't be explained in a meaningful way, so a funny joke doesn't harm the possible info that you can obtain from the doc. Cons argument: If you are very concentrated/frustrated, the last thing you need is somebody's stupid joke, instead of giving you information you need about the documented code portion, it can just make you even more frustrated. And the idea of what the documentation would look like if everybody starts doing so is horrible. Plus the guy who writes the joke may be the only one who thinks that it is funny/interesting/worth wasting time to read it. What do you think? | I'm a big fan of funny commenting . You should always be professional in your commenting, but some humor won't kill the reader. Especially if the reader is a member of your team. What I dislike the most, is developers that take themselves too seriously. I think we should have fun at work, or work is not worth it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/60699",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/20833/"
]
} |
I'm trying to get a good grasp of how to implement good decoupling between a UI and the model, but I'm having trouble figuring out exactly where to divide the lines. I've been looking at Model-View-Presenter, but I'm not sure exactly how to go about implementing it. For example, my View has multiple dialogs. Should there be a View class with instances of each of the dialogs? Then in that case, how should the dialogs interact with the Presenter? I.e. if an individual dialog needs to request data from the Model via the Presenter, how should the dialog get a reference to the Presenter? Via a reference to the View given to it during construction? I was thinking maybe the view should be a static class? Then the dialogs GetView and get the Presenter from there... I'd been thinking about setting up the Presenter with ownership of the View and Model (as opposed to the View having the Presenter and the Presenter having the Model) and the Presenter registering callbacks for events in the View, but that makes it seem a lot more coupled (or language dependent, at least). I'm trying to:
make this as decoupled as possible
ideally make it possible to couple the Presenter/Model with Views of other languages (I've not done a ton of inter-language stuff, but I know it's possible, particularly the more void(void) I can stick to, at least a C# app with a C++ library...)
keep the code clean and simple
So... any suggestions on how the interactions should be handled? | Welcome to a slippery slope. You've by this point realized that there is an endless variation of all the model-view interactions. MVC, MVP (Taligent, Dolphin, Passive View), MVVM, just to name a few. The Model View Presenter (MVP) pattern, like most architectural patterns, is open to a lot of variety and experimentation. The one thing all the variations have in common is the role of the presenter as a "middleman" between the view and the model. The two most common are the Passive View and the Supervising Presenter/Controller - [Fowler]. Passive View treats the UI as a very shallow interface between the user and the presenter. It contains very little if any logic, delegating as much responsibility to a presenter. Supervising Presenter/Controller tries to take advantage of the data binding built into many UI frameworks. The UI handles data synchronization but the presenter/controller steps in for more complex logic. In either case the model, view and presenter form a triad. There are many ways to do this. It's very common to see this handled by treating each dialog/form as a different view. Many times there's a 1:1 relationship between views and presenters. This isn't a hard-and-fast rule. It's quite common to have one presenter handle multiple related views or vice versa. It all depends on the complexity of the view and the complexity of the business logic. As for how views and presenters obtain a reference to each other, this is sometimes called wiring. You have three choices:
View holds a reference to presenter
A form or dialog implements a view. The form has event handlers that delegate to a presenter using direct function calls:
MyForm.SomeEvent(Sender)
{
Presenter.DoSomething(Sender.Data);
} Since the presenter doesn't have a reference to the view, the view has to send it data as arguments. The presenter can communicate back to the view by using events/callback functions, which the view must listen for.
Presenter holds a reference to view
In this scenario the view exposes properties for the data it displays to the user. The presenter listens for events and manipulates the properties on the view:
Presenter.SomeEvent(Sender)
{
DomainObject.DoSomething(View.SomeProperty);
View.SomeOtherProperty = DomainObject.SomeData;
} Both hold a reference to each other, forming a circular dependency
This scenario is actually easier to work with than the others. The view responds to events by calling methods in the presenter. The presenter reads/modifies data from the view through exposed properties:
View.SomeEvent(Sender)
{
Presenter.DoSomething();
}
Presenter.DoSomething()
{
View.SomeProperty = DomainObject.Calc(View.SomeProperty);
} There are other issues to be considered with the MVP patterns. Creation order, object lifetime, where the wiring takes place, communication among MVP triads but this answer has grown long enough already. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/60774",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/20850/"
]
} |
60,949 | I just tried integrating my website with facebook. I got a lot of copy-paste code from the facebook developers site. I just put the code and it works fine. Do you call this kind of programmers "code monkeys"? If you say I am a code monkey, in the same case what would you expect me to do? | A Code monkey is a computer programmer or other person who writes computer code for a living. This term may be slightly derogatory, meaning that this developer can write some code but is unable to (or not supposed to) perform the more complex tasks of software architecture, analysis, and design. It is usually applied to junior programmers or programmers hired in the USA. source So, yes, but it is not as bad as you make it sound. It sounds like you are more scared of being a "script kiddie". Reusing code is not bad if you understand what it is doing. If there are good tutorials out there that satisfy what you need, you should not be spending your time re-inventing the wheel. I think the real question is if you like Fritos, Tab and Mountain Dew . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/60949",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6373/"
]
} |
60,994 | Possible Duplicate: Why are software schedules so hard to define? I often need to explain to senior management why software estimation is so hard, and why our preliminary estimates are often so far out. I suspect some want to know why it is not mathematically precise engineering, like building a bridge. You can help me by listing a few dot points relevant to this subject. Many thanks! | Software estimation isn't actually more difficult than estimating other types of work. It just seems so because the CONDITIONS under which it's estimated are more difficult. Say a software company was tasked with something similar to what a car company is tasked with. Build the same thing over and over again, which has existed for decades, and with only minor variations. Furthermore, you will work from a complete and detailed spec from the beginning of the project and it will be frozen once development begins. Under those circumstances software estimation would be cake. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/60994",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8400/"
]
} |
61,062 | I work for a large enterprise (30K employees) in the financial/insurance industry. While "IT" isn't our main focus, let's be honest, these are information driven industries and the companies with the better technological advantage seem to get ahead faster. There are many software development teams at my company. They are all over the map with version control, let alone languages/frameworks used. Some don't use any (I know), some use PVCS, some use VSS, and the most enlightened use SVN. I want to bring git to my enterprise. More specifically, I want to bring GitHub (private repositories). I know the right people to talk to about this, but let's be honest again, drastic moves like this usually get shot down in the large enterprise setting because of vague security concerns or the fact that none of our competitors are using it (and I can only cite jQuery, Ruby on Rails, Facebook, etc as references). So my question is this. What are the most compelling reasons of why a large enterprise should slowly and deliberately make the switch from PVCS/VSS/SVN to a hosted git solution like GitHub (private repo). Of course, part of my plan involves a POC for a non-essential development project. | There's a few things I might be concerned with, as a disinterested third party. So let me toss some questions at you that you'd better be prepared to answer (to your IT department): Any version control is better than none. We have plenty to choose from, what's wrong with those? Distributed version control? What's that? How do we control that? What does it cost? Not just the software, but the servers, licenses, maintenance, etc. I don't trust GitHub, or any outsourced hosting. We need to do everything in-house. Why can't we set up our own server? Can we run it on Windows? We have to keep it on our current baseline, you know. How do we secure the thing? SVN we get, but this scares me. These are the very first questions that will come up. As to VSS and PVCS you can probably come up with a bunch of reasonably good arguments (like VSS corrupting version history). SVN will be a bit more difficult. I highly recommend focusing on the merge capabilities of GIT, and also recommend keeping an open mind about Mercurial. Every argument for GIT is also an argument for Mercurial--and Mercurial has more mature Windows support. Security is of paramount importance to financial and government institutions. They will be extremely resistant to externally hosted resources. From a risk management perspective, consider what could happen if someone hacked GitHub and stole the source code, or discovered the security vulnerability documented in the issue tracker. That would be devastating to the company. From a pure management perspective, if the company is legally required to pay you for every hour you work, how can they monitor whether you are working from home when the resources are outside their VPN network? On another note, how can they prevent you from performing some corporate espionage when all the resources are available from outside the company? These are the IT and management arguments against outsourcing the hosting. A large company has to look at things this way. For a small company, you look at the bottom line and how much it would cost to put all those services in place. It's actually cheaper for the large company to do it in house. They already have the IT resources, they just need to shuffle the responsibilities a bit. 
And if the solution largely takes care of itself with only periodic maintenance needed (backups and user management), all the more reason to keep it inside corporate doors. As to Windows hosting, that's an organization by organization issue. Several companies have swallowed the Windows koolaid. Others have swallowed the Linux koolaid. Others consider it on a case by case basis. You'll have to play by the rules the IT department has set for your organization. As long as your solution can be hosted on either, you are golden. Finally, in such a large organization there are guaranteed to be fiefs all wanting to do things their way. They all have convincing arguments why they chose VSS, PVCS, SVN, or what have you. To IT they are all the same. The only way to consolidate within an organization that large is to have the order come by fiat from above. Such orders are always met with resistance, and it is probably not something your company wants to do unless there are obvious Total Cost of Ownership (TCO) benefits to having a standardized version control system. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/61062",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7822/"
]
} |
61,198 | I understand the purpose of XML, but I always hear people complain about how BAD it is? I don't really understand whats so bad about it? I usually hear the terms "bloated" and "slow" tossed around. But I guess as programmers, what do you mainly use it for? And do you really consider it "bad"....because if it is, an awful lot of people use it for transporting of data... | Xml is great for what it was designed to be -- a platform neutral, human readable data transfer protocol with some capabilities to enforce data validation at low levels. I doubt anyone who uses Xml in this manner has a real complaint. Is it the most succint wire format? No. But there are worse options. Is it as fast as reading your custom binary format? No. But your business partners can read it in whatever stack they are using. The problem, however, is that humans -- especially the breed known as enterprise architects -- are evil and take good things and make them bad. In the case of Xml, the early part of this century saw Xml as the universal hammer for every IT problem. Sprinkle in a little design by committee and you end up with some horrible monstrosities like SOAP and oXML . Neither of which should be wished on enemies, nevermind friends or colleagues. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/61198",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
61,248 | Currently I'm an IT student and I'm wondering what is still important in C++ today, what for is it used? I completed basic C++ course in my university but I can't imagine where can I use my knowledge and in which direction should I go learning C++. In other words what should I learn to become a successful C++ programmer? Currently I'm learning Java just because I don't see clearly in which area C++ could be useful today, but I clearly know which kind of work I'll be doing as a Java programmer. But I still hope that C++ isn't dead. | The killer feature of C++ is scope-bound resource management , SBRM (more commonly known as " RAII "). It is the only industrial programming language that is built around this concept. In C++, life times of all objects are exactly known, and (well-written) C++ programs guarantee that resources are acquired and released in fully deterministic manner. In comparison, garbage-collected or otherwise managed languages do not provide any such guarantees; in fact objects in those languages may persist after the end of their lifetime. That is the reason why C++ is used in finance, video games, high-performance embedded and real-time systems, transportation, manufacture, and other industries where determinism and precision are important. There are no alternatives. Granted, it was used for a lot more tasks than this, and those tasks are being lost to C# and Python and other more suitable languages, but that is not affecting its core niche. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/61248",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/49263/"
]
} |
61,376 | I understand what composition is in OOP, but I am not able to get a clear idea of what Aggregation is. Can someone explain? | Simple rules: A "owns" B = Composition : B has no meaning or purpose in the system without A A "uses" B = Aggregation : B exists independently (conceptually) from A Example 1: A Company is an aggregation of People. A Company is a composition of Accounts. When a Company ceases to do business its Accounts cease to exist but its People continue to exist. Example 2: (very simplified) A Text Editor owns a Buffer (composition). A Text Editor uses a File (aggregation). When the Text Editor is closed, the Buffer is destroyed but the File itself is not destroyed. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/61376",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17887/"
]
} |
61,577 | So I've forked someone else's repository, made a few changes, submitted a pull request, and my changes made it into the product. Great! But...what should I do with my forked repository? Is there a compelling reason for me to keep my repository around, or should I go ahead and delete it? I don't plan on making any additional contributions, but if I change my mind I assume I can always just re-fork it. I'm not really concerned about keeping a backup. I'm more worried about breaking links, losing commit messages, etc. | If your pull request got accepted and you haven't made any other changes that you might use personally, you should delete it. Deleting doesn't harm anything. You can always refork if you need to It cuts down on useless repos in search results when people are searching for something If you use your GitHub as a sort of resume for potential jobs/contracts, it looks better if you don't have dozens of forked repos that you're not currently working on. You'll appear more efficient. It helps your own sanity when you don't have to page through hundreds of useless repos. Its better for GitHub. :) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/61577",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/79/"
]
} |
61,655 | I love programming. I've been messing around with code since I was a kid. I never went the professional route, but I have coded several in-house applications for various employers, including a project I got roped into where I built an internal transaction management/reporting system for a bank. I pick stuff up quickly, understand a lot of the concepts, and feel at ease with the entire process of coding. That all being said, I feel like I never know if my programs are any good. Sure, they work - but is the code clean, tight, well-written stuff, or would another coder look at it and slap me in the head? I see some stuff on StackΒ Overflow that just blows my mind and makes my attempts at coding seem completely feeble. My employers have been happy with what I've done, but without peer review, I'm in the dark otherwise. I've looked into peer code review, but a lot of stuff can't be posted because of NDAs or confidentiality issues. A lot of you professionals might have teammates to look at stuff over your shoulder or bounce ideas around with, but what about the independent and solo guys out there like me? How do you learn best practices and make sure your code is up to snuff? Or does it not matter if it's "the best" as long as it runs as expected and provides a good user experience? | The biggest clue for me is: When you have to go back and add/modify a feature, is it difficult? Do you constantly break existing functionality when making changes? If the answer to the above is "yes" then you probably have a poor overall design. It is (for me at least) a bit difficult to judge a design until it is required to respond to change (within reason of course; some code is just bad and you can tell right away, but even that comes with experience.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/61655",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/21083/"
]
} |
61,683 | I searched for a standard format for using a date/time as part of a file name and was unable to come up with anything. My question is two parts: Is using time stamps to enforce unique in file names a poor practice? I could get the time from the creation date and serialize the file names (file0001.bak, file0002.bak, etc) but just including the time stamp lets perform file operations such as mv 2011-01* somewhere/ . Is there a downside to using this type of naming system? The format I am using is YYYY-mm-dd_HH-MM-SS . Is there a better format I should be using? With this format should i be concerned with file system compatibility, str_to_date_parsing concerns, etc? Thanks! edit: I might have wanted to leave out the enforce uniqueness bit since it's a single user generating backup using a cronjob (there shouldn't be any concurrency problems). | You should consider ISO 8601 format (2013-04-01T13:01:02). Yes there are standards for these things. The colons and hyphens may be omitted. The format string I usually use is %Y%m%dT%H%M%S yielding 20130401T130102. Depending on requirements I omit values from the left. In a bash script I get the date with a line like: LOGDATE=$(date +%Y%m%dT%H%M%S) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/61683",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/19454/"
]
} |
I have been curious about this for a while. What exactly is meant by "production-ready" or its variants? Most recently I was looking for information about sqlite and found this thread, where many people suggest sqlite isn't ready for production. I know the difference between development/testing and production; my definition of production is anything that is provided to the customer or will be used by non-programmers. However, there seem to be many items that aren't defined as production-ready. But in reality, they may be perfectly suited and people just have a prejudice against them, e.g. sqlite, python, non-MS products, etc. Small office vs. enterprise? Single user vs. multi-user? Client vs. server? Where do you draw the line? | Depends on who you are.
Programmer's definition of "production-ready":
  it runs
  it satisfies the project requirements
  its design was well thought out
  it's stable
  it's maintainable
  it's scalable
  it's documented
Management's definition of "production-ready":
  it runs
  it'll turn a profit
Sorry to rehash this old question, but I happened across it and just couldn't resist. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/61726",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
61,738 | Every time you are looking for a text editor, no matter what language you are using, vi and Emacs are hall-of-famers. However they are ancient, and we have better alternatives (at least I hope we do). Why are developers stuck on these two editors? Shouldn't we drop them and try to invent or look for something new? (I have full respect for Emacs and vi fans). | The main reasons why I prefer a terminal-based editor over a full-fledged IDE: Remote access. I can ssh to whatever computer I need to be on, fire up Vim and start working away. In a day-to-day basis, using screen session and Vim allows for easy access from any location. Keystrokes . There are so many keystrokes saved once you can utilise Emacs or Vim to a decent extent. Moving my hand between the keyboard and mouse annoys me... IDEs are nice to throw classes around within your project, but for me, my productivity is orders of magnitudes higher using Vim. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/61738",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/10097/"
]
} |
61,814 | The UNIX Programming Environment (the classic text) states that the UNIX approach to programming is to build small, well-defined tools that can be combined to solve more complex problems. In learning C and the Bash shell, I've found this to be a powerful concept that can be used to deal with a wide range of programming problems. Just using a Linux platform, the concept is pretty clear and used all the time. Any expression formed on the command line that redirects I/O, linking system tools like ls, grep, more, and so on shows how powerful this concept is. The thing that confuses me is that many of these programs are written in C, using an imperative/procedural programming style, yet the way they are used and joined together on the command line seems much more like functional programming to me, where each program is an isolated function that doesn't depend on the state of any other program it might be joined to. Is this accurate, understanding the UNIX programming philosophy is basically functional programming using tools that may have been built using an imperative programming style? | I think you've got a point there, but cp , rm , cd and a lot of others change state, so they aren't really functions. The UNIX philosophy is more about doing only one thing but doing it well; often doing it well means allowing functional usage, but not always. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/61814",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/23320/"
]
} |
I often see code that includes intentional misspellings of common words that for better or worse have become reserved words:
klass or clazz for class: Class clazz = ThisClass.class
kount for count in SQL: count(*) AS kount
Personally I find this decreases readability. In my own practice I haven't found too many cases where a better name couldn't have been used - itemClass or recordTotal. An example from the JavaDocs for Class shows this in the parameters: public <U> Class<? extends U> asSubclass(Class<U> clazz) Does this show a reasonable use case? | IMHO, this is a very bad idea. Reserved words are reserved for a reason, and doing this does decrease readability. I also entirely agree with your second point. Naming a variable class, even if you could do it, would be just as bad as naming it tmp or a. What kind of class? A class of what? Names should be descriptive. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/62060",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2314/"
]
} |
62,352 | Introduction to my situation I work for a small web development company. We have a team of four ASP.NET developers, including me. Almost all of our projects (> 98%) are one-person projects that take about 1-4 weeks to complete. We don't use source or version control. The only thing we have is a shared folder on a local server that contains the latest source (== the source of the live application) of all projects. On the rare occasions that we do need to work on the same project with more than one person, we use... Beyond Compare. One or two times a day the developers ask each other if they have a version that compiles and then they synchronize their code using Beyond Compare. When only two people are working on a project, this works "rather well", but as soon as a third developers enters the process, it becomes an unmanageable piece of rubbish. Especially when everyone starts making changes to the database. I (and one or two of my fellow developers) have already told my boss several times that we should start using some form of source and version control like Git, Mercurial or TFS (our boss is very Microsoft minded). Unfortunately, my boss doesn't see the advantage of switching to a source and version control system, because in his eyes everything works just fine now and he doesn't want to invest time and money in setting up a new system and making sure everyone knows how to use it. Even after I explained the advantages (like simplified collaboration, different versions of an application, safer way to change code, ...) to him, he still doesn't think it is something we need. Of the four developers, only two (including me) have experience with source control (Git). And that experience is very limited. I know how to clone a Github repository to my computer, make some changes, commit them and push them back to Github. That's it. Explanation of my problem/concern In a few weeks we are going to start working on a rather big project for our standards. It will probably take 2-3 developers a few months to complete it. I will be the project lead (project manager and lead developer) and I will be responsible for everything. I've had my share of problems with our Beyond Compare approach and I don't want to take this road with this big project that will be my responsibility. Since I doubt that we will be able to set up our own Git server, teach everyone to work with Git and employ Git successfully in this big project, I am interested if any of you know some good ways to allow multiple people to collaborate on the same project without using source or version control. Update I would to thank everyone for their answers and comments. Here is the plan: Have a meeting with the developers to make sure all the technical people feel the same way about implementing source control. We'll make a stronger point when everyone is behind it. Present the idea to the boss and tell him that we really need source control. Implement it as soon as possible. | Please take a day to install version control and teach everybody on the project to use it. It's not that hard. Personally I've not used Git, but I have set up and used other version control systems and they are not that hard to get working. Make sure you choose one that integrates with your development environment. This will make using it virtually seamless. This will not be wasted time. The time you will lose when someone overwrites or deletes some code will cost you much more. 
If you don't have version control you will also spend an inordinate amount of time backing up your project and worrying about which version everyone has and which version is on the server etc. If you need to convince your boss make an estimate of the time that it will take you to set up and monitor any non-version control solution and add in the cost of rewriting a few days lost work. Don't forget to add in the cost of manually merging edits from 3 developers working on the same source file and the extra cost of getting that merge wrong. Good version control gives you this for free Then compare that to the cost of getting version control - nil (if you go open source) and setting it up - 3 man days. Don't forget that an error later in the project is going to cost more than one early on. If you have to redo the entire project because of a mistake anyone can make this will cost far more than just the rewrite time, it might cost your boss the reputation of his firm. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/62352",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7289/"
]
} |
62,383 | Here are some arguments for properties and my counter-arguments: Easier to use than writing getter and setter methods Getter and setter method pairs are a code smell. Making it easier to write these is like making it easier to fail a math test by using a Scantron form and filling in all 'C's. Objects that contain only state, for persistence, shouldn't be using getters/setters and should be creating immutable objects at the time of persistence. What is important to a consumer of an object is what it does, not how it does it. Its behavior is what it does; its state is how it does it. If you find yourself caring about an object's state (except for persistence, though this also breaks OO), you're simply not doing OOP and losing out on its advantages. They give a rough indication of performance to consumers This is something that could change in the future, for any given property. Suppose in release 1.0, accessing PropertyX simply returns a field. In release 1.5, if the field is null, PropertyX uses the Null Object pattern to create a new null object. In release 2.0, the field is getting further validated by the getter method in PropertyX. As the property gets more and more complex, the performance indication of using a property seems less and less truthful. They're better than public fields This is true. But so are methods. They represent a fundamentally different aspect of an object than a method, and all consumers of the object should care about this Are you sure that both of the above statements are true? They're easier to type, man Sure, typing myObject.Length is easier than typing myObject.Length() , but couldn't that be fixed with a little syntactic sugar? Why use methods instead of properties? No performance guarantees. The API will remain truthful even if the method gets more complex. The consumer will need to profile their code if they are running in to performance issues, and not rely on word-of-API. Less for the consumer to think about. Does this property have a setter? A method sure doesn't. The consumer is thinking from a proper OOP mindset. As a consumer of an API, I am interested in interacting with an object's behavior. When I see properties in the API, it looks a lot like state. In fact, if the properties do too much, they shouldn't even be properties, so really, properties in an API ARE state as they appear to consumers. The programmer of the API will think more deeply about methods with return values, and avoid modifying the object's state in such methods, if possible. Separation of commands from queries should be enforced wherever possible. So I ask you, why use properties instead of methods? Most of the points on MSDN are code smells in and of themselves, and don't belong in either properties or methods. (These thoughts came to me after thinking about CQS.) | Why is it "methods against properties"? A method does something. A property is, well, a member of an object. They're totally different things, although two kinds of methods commonly written - getters and setters - correspond to properties. Since comparing properties with methods in general is not meaningful, I'll assume you meant to talk about getters/setters. Getter and setter method pairs are a code smell. Not necessarily, I'd argue, but either way this isn't a matter of getters/setters vs. properties but a matter of whether either should be used. Also note that you can e.g. leave out the setX part just like you can make read-only properties. 
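To see why the two spellings are interchangeable, a small sketch helps (TypeScript accessors standing in for C# properties, and the Rope class is an invented example, but the mechanics are the same): a read-only property is just a getter with nicer call syntax, and omitting the setter is exactly the leave-out-the-setX case.
// Illustration only: TypeScript accessors standing in for C# properties;
// the Rope class and its members are invented for the example.
class Rope {
  private segments: number[] = [];

  // "Property" spelling: a read-only accessor -- there is simply no setter.
  get length(): number {
    return this.segments.reduce((sum, s) => sum + s, 0);
  }

  // "Getter method" spelling: the same thing as an explicit method call.
  getLength(): number {
    return this.length;
  }

  // State changes go through an intention-revealing method rather than a setter pair.
  addSegment(size: number): void {
    if (size <= 0) throw new Error("segment size must be positive");
    this.segments.push(size);
  }
}

const rope = new Rope();
rope.addSegment(3);
rope.addSegment(4);
console.log(rope.length);      // 7, property syntax
console.log(rope.getLength()); // 7, method syntax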
What is important to a consumer of an object is what it does, not how it does it. Its behavior is what it does; its state is how it does it. If you find yourself caring about an object's state (except for persistence, though this also breaks OO), you're simply not doing OOP and losing out on its advantages. A highly questionable attitude. If the GUI wants to output data held by a DataStuffManagerImpl , it needs to get that number from it somehow (and no, cramming half of the application into the widget class is not an option). As the property gets more and more complex, the performance indication of using a property seems less and less truthful. [Methods have] No performance guarantees. The API will remain truthful even if the method gets more complex. The consumer will need to profile their code if they are running in to performance issues, and not rely on word-of-API. In almost all cases, all the validation logic etc. is still effectively O(1), or otherwise negible in cost. If not, perhaps you've gone too far and it's time for a change. And a getX() method is usually treated as cheap as well! Sure, typing myObject.Length is easier than typing myObject.Length(), but couldn't that be fixed with a little syntactic sugar? Properties are that syntactic sugar. Less for the consumer to think about. Does this property have a setter? A method sure doesn't. "Is there a setter for this getter?" Same difference. The programmer of the API will think more deeply about methods with return values, and avoid modifying the object's state in such methods, if possible. Separation of commands from queries should be enforced wherever possible. I'm not sure what you're trying to tell us here. For properties, there's an even stronger unwritten law not to cause side effects. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/62383",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/214/"
]
} |
62,685 | I've always heard that C is the language of choice to use for embedded systems, or anything that needs to run at maximum speed. I never developed a fondness for C, mostly because I don't like pointer arithmetic and the language is barely a rung above assembler. On the other hand, ML languages are functional, garbage collected languages, and OCaml even has an object model, yet they have a reputation for being as fast as C. ML languages have the abstraction anyone could ask for to write high-level, concise code, yet it retains the speed necessary for writing high-performance applications. OCaml in particular can be used anywhere that C is traditionally used, such as for embedded devices, graphics drivers, operating systems, etc. By all rights, OCaml should have taken over the world by now, but hardly anyone heard of the language yet alone used it. This is a subjective question, but why have OCaml and ML other languages remained so obscure, while C and other languages became popular? | I think the problem with OCaml is that it isn't too useful "out of the box". The eventual reason why people use a language is because it has libraries they need. With nothing "out of the box", though, nobody gets far enough into a project to realize that they need to write a library. The result is a language with no libraries, which makes it hard to write "real apps". I think this is what OCaml suffers from -- nobody bothers to start "real projects" in it because all there is is a programming language. Yay, I can add two and two and print the result. The result is a collection of libraries that are mostly academic abandonware (the author got his PhD and moved on), which isn't too helpful for practicing programmers. (I know there is work under way to change this, with projects like "Batteries Included". Come back here in 5 years, and perhaps OCaml will be more popular.) There are some exceptions to this rule. Java started off with no libraries, but Sun paid people to write them all in house, and then they marketed the hell out of it. Java certification, Java-specific hardware, Java books, Java classes, etc. Then even convinced most universities to teach it exclusively, even though it isn't a very good language to use for learning programming. The result was popularity. Money can solve a lot of problems. Over in the functional language arena, we can see that Haskell is becoming quite popular. I think most of the popularity is due to people like dons that write useful libraries, and never stop marketing the langauge. Every day you see a few Haskell articles on Programming Reddit. This keeps it stuck in peoples' minds until they finally decide, "I am going to try Haskell." When they do, they see useful things like web frameworks, object databases, OpenGL libraries, and XML processing libraries. This means that they can actually do something useful "Right Now". So between the potential to be productive and hearing about it a lot, Haskell has gained a lot of popularity. CL has many of the same libraries as Haskell and is almost as fast, but nobody talks about it, so it "feels dead". Indeed #lisp is much quieter than #haskell, but Lisp is still a very productive language with a lot of libraries. No other language has SLIME. But marketing is very important, and Haskell does it better than Lisp or OCaml (and competes for the same userbase). 
Finally, some people will never "get" programming, so breaking their mental model (variables are boxes with values, code executes top-to-bottom) will ensure that they don't use your language. This type of programmer is a large percentage of the programming population, so this further limits the possible userbase of abstract languages like Lisp, Haskell, and OCaml. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/62685",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1467/"
]
} |
In the setting of an interview: What is the best way to reliably identify when somebody is an excellent programmer? By this I mean he is one of those that is 10-15 times more efficient/rapid/better than his peers towards the lower end of the spectrum. Many of us have heard of the FizzBuzz Problem as a way to weed out the weak ones. Certainly, taking 5-10 minutes to solve that problem is a serious indicator that an applicant is a weak candidate. I posit that a good indicator is being able to solve that as quickly as you can write. This doesn't seem sufficient, though. Is it maybe something like giving him a moderately complicated buggy program, and seeing how fast he can grok it and identify all the problems with it? | My apologies to anyone who doesn't care for lengthy answers, but I do think it's pretty important to qualify your candidates before you hire them. Anyone who's conducted a significant number of interviews in this industry knows that most candidates won't last through the first 15-30 minutes of an interview, so most of this list won't be necessary. Just keep in mind how expensive it is (both financially and emotionally) to fire someone before you dismiss my list as overkill. I've tried to list my interview topics here by order of importance.
General Intelligence (brain teasers/logic puzzles)
  Counting toggled lockers
  Crossing bridges
  Weighing marbles
  Burning fuses
Computer Science Knowledge
  Data structures (stacks, queues, lists, trees)
  Algorithms (sorting, search, shortest path, Euclidean algorithm, Hamiltonian cycle, data compression)
  Recursion/Tail Recursion versus Iterative solutions
  Models of computation / Finite state machines
  String matching / Regular expressions
  Backus-Naur Form
Programming exercises
  GCD, Factorial, Fibonacci, Towers of Hanoi
  String and list reversal
  Determine if a singly-linked list has a loop (can you do it with only two pointers?)
  Find the bug
Knowledge of object-oriented programming techniques and common design patterns
Algorithm analysis (run-time O(n) complexity and storage requirements)
Usage of tools and methodologies
  Debugging
  Version control
  Test-driven development
  Coverage analysis
  Continuous integration
Knowledge of common security vulnerabilities and attacks
  Buffer / heap overflow
  SQL injection
  Cross-site scripting attacks
Basic Mathematics
  Numeral systems (convert from one base to another)
  Probability Theory
  Distance between two points on Cartesian plane (Pythagorean theorem)
  Square root (Heron of Alexandria, successive approximation)
Cryptography
  Public key cryptography
  Symmetric-key cryptography
  Hash functions
  Cryptographic protocols (secret sharing, zero-knowledge proofs)
Discrete Mathematics
  Logic
  Set theory
  Graph theory
  Information theory
  Combinatorics
  Proofs (like existence of irrational numbers, infinite primes)
You might also want to take a look at the book Programming Interviews Exposed. It's a good reference on the topic. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/62705",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6482/"
]
} |
62,727 | Are there types of killer applications, classes of algorithmic problems, etc., where it is better, in the long run, to create my own language? PS: Just to be sure, I mean a new programming language and a compiler, not a new compiler for an existing language. EDIT : Thank you for the answers. Can you provide some examples, where it is absolutly unnecessary to create a DSL or cases in which a DSL might be a good idea? | It certainly is relevant for a person to write their own language for educational purposes. To learn about programming language design and about compiler design. But real-world uses are few and far between. In writing your own language you are: Adding a tremendous amount of complexity to your problem Adding a significant amount of work in writing and maintaining the new language and compiler So, if you plan to write your own language for your project then the features that it provides that other languages don't have need to offset the above costs. Take games development for example. They often need mini-languages within their games or scripting languages. They use these languages to script out a huge amount of the in-game events that happen. However, even in this case, they almost always choose existing scripting languages and tailor them to their needs. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/62727",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4839/"
]
} |
62,817 | In many companies there is a formal procedure for reviewing employees' work. For example, a salesperson can say she'll sell one million units at the beginning of the year. When she comes up for review a year later, she says she's sold two million units. Thus, her manager decides to promote her. But what should a developer say? I'll fix a million bugs, I'll write a hundred unit tests? I can't imagine many things that can be measured here, especially if you don't have a roadmap for the year and you're working on maintenance. What types of solid performance metrics work for programmers? Or are performance measurements not applicable to developers at all? | Does anybody know non-stupid objectives for programmers, and how you can explain progress on them? No. Or is a performance measurement procedure not applicable to developers at all? What does a programmer produce? Really. What do they produce? Anything you think a programmer produces can be "gamed". "Code"? Copy and paste will run up the numbers. "Bug Fixes"? Easy to introduce and fix bugs just to run up the numbers. "Meeting the schedule"? Easy to over-estimate and always be early. "Meeting the budget"? Hard for a programmer to control. But if you insist on it, they simply stop working when they run out of money and leave you with a product that might not work well, but will have the exact cost. "Few defects"? Easy. Write very little code. Do lots of analysis and design and planning. There are two kinds of metrics here: "Do More" and "Do Less". More code. Fewer defects. Any "more" metric is gamed by copy-and-paste techniques to simply make the numbers. A "less" metric is gamed by simply doing less of everything. If you think a programmer produces "intellectual property" or "value", you find that these are very, very hard to measure. For example, value should be measured by the dollar value to the business. Since every dollar the business makes is touched by software, then 100% of revenue is created by programmers. That doesn't work out well, because you can't easily separate software from the rest of the business processes. Intellectual Property (the knowledge embedded in software) is even harder to quantify. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/62817",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
62,885 | In a response to another question , a poster suggested that under the GPL: ...you need to provide the human readable [code], not a whitespace stripped version... Readability would seem to me to be subjective and unlikely to be explicitly required by the GPL. Is it? | The GPL requires that it be the preferred version for editing. If you normally write in obfuscated code, and make changes directly in it, then that's the source for GPL. If you work on a readable version, and then run it through any sort of obfuscator, the readable version is what the GPL considers the source. "Readability" is subjective and not defined. It is legal to release really bad, hard to understand, code under the GPL. It is not legal to take the version that you make changes in, remove the whitespace or otherwise make it less readable, and call that the source under the GPL. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/62885",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5402/"
]
} |
62,948 | The misunderstanding of floating point arithmetic and its short-comings is a major cause of surprise and confusion in programming (consider the number of questions on Stack Overflow pertaining to "numbers not adding correctly"). Considering many programmers have yet to understand its implications, it has the potential to introduce many subtle bugs (especially into financial software). What can programming languages do to avoid its pitfalls for those that are unfamiliar with the concepts, while still offering its speed when accuracy is not critical for those that do understand the concepts? | You say "especially for financial software", which brings up one of my pet peeves: money is not a float, it's an int . Sure, it looks like a float. It has a decimal point in there. But that's just because you're used to units that confuse the issue. Money always comes in integer quantities. In America, it's cents. (In certain contexts I think it can be mills , but ignore that for now.) So when you say $1.23, that's really 123 cents. Always, always, always do your math in those terms, and you will be fine. For more information, see: Martin Fowler's Quantity and Money patterns His books Patterns of Enterprise Application Architecture and Analysis Patterns Wikipedia on Banker's Rounding Answering the question directly, programming languages should just include a Money type as a reasonable primitive. update Ok, I should have only said "always" twice, rather than three times. Money is indeed always an int; those who think otherwise are welcome to try sending me 0.3 cents and showing me the result on your bank statement. But as commenters point out, there are rare exceptions when you need to do floating point math on money-like numbers. E.g., certain kinds of prices or interest calculations. Even then, those should be treated like exceptions. Money comes in and goes out as integer quantities, so the closer your system hews to that, the saner it will be. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/62948",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/241/"
]
} |
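To make the integer-cents advice in the answer above concrete, here is a minimal Python sketch; the function names and the formatting helper are illustrative assumptions, not part of any particular library.

```python
def add_money(cents_a, cents_b):
    # All arithmetic stays in integer cents, so no binary floating-point
    # rounding error can creep in.
    return cents_a + cents_b

def format_dollars(cents):
    # Convert to a decimal representation only at the display boundary.
    sign = "-" if cents < 0 else ""
    dollars, remainder = divmod(abs(cents), 100)
    return f"{sign}${dollars}.{remainder:02d}"

# $1.23 + $2.77 is 123 + 277 = 400 cents, i.e. exactly $4.00.
assert format_dollars(add_money(123, 277)) == "$4.00"
```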
62,994 | I started with ActionScript 2.0 and then went on with Java. I have learned, or at least used, a bunch of languages since then, including Python (probably my favorite). I'm afraid that my style of object oriented programming is very unpythonic though, and more like Java OOP with Python syntax. What makes Java-like and Pythonic OOP differ from each other? What things do Java programmers often do "unpythonically" when writing object oriented code in Python? | For a Java guy Python is an anarchic playground where anyone can grab a club and start mauling your head. For a Python guy Java is an Orwellian universe where you are constantly shackled to someone else's diminishing view of how the universe ticks. The truth is anything you can do in one language you can do in the other just as cleanly. However, as you have mentioned, there are important differences in both communities as to what clean means. Java way: A clean system is one that does what it is meant to and nothing else; it will not allow extensions or modifications that go against the nature of the intended purpose and will attempt to enforce these as much as possible through the compiler. Flexibility is obtained through careful crafting of simple interfaces within strict structures. In Java one's sandbox should always be clearly bounded, and overstepping it is met with swift feedback from the compiler. Java provides means to statically define object structures and create dynamic interactions from instances of them. When I work in Java I try to cleverly create basic building blocks towards a brain-dead solution. I mostly work bottom-up once I have a working theory on how to tackle the problem. Java will tend to produce large software that can span large teams and provides tools and means to keep the flock in check. If kept unchecked this will lead to very detached teams working independently towards an ever more unclear goal. Eventually each team becomes its own "raison d'être" and the system as a whole becomes diluted, driving the main project astray. This can lead to extreme cost overruns and huge software systems that perform and maintain poorly. There is almost never a small, quick and easy way to do things in Java, but the IDE and the tooling are there to make painful tasks a mere few clicks away. Python way: Clean means concise and easily readable. A good Python system is designed to let you get right to the heart of it and exposes its innermost secrets in a way that lets you understand its intended use and purpose from the code. It will also allow you to design your own solution around it by extending and/or encapsulating the original design so that it will go exactly in your direction. Python provides means to create object templates from which you can dynamically change the instance to fit the needs at hand. In Python I tend to tackle the problem right away and then spread the code in a logical structure such that the final solution remains as simple and readable as can be. In Python I tend to work top-down and manage the increasing complexity through a divide-and-conquer approach. Python teams will tend to produce light systems and be very fast in delivering a working solution. They will tend to be a close-knit bunch working interchangeably on any part of the system, validating each other's solution every chance they get. They feed on each other, creating a synergy that is quite exhilarating. However, this creates teams that are difficult to scale to larger systems and often hit a sort of glass ceiling.
Introducing new members to the team will help, but it will take some time for the knowledge to spread around enough for the extra productivity to be felt. The team then becomes divided and the constant overview over the whole system dilutes, as does the atmosphere of the early days. This can lead to overly convoluted code for what once was a simple problem, extreme cost overruns and systems that perform and maintain poorly. There is almost always a quick and easy way to do things with Python, but complexity can be harder to keep in check once the system reaches a certain threshold. In short, both have a dark side and both have clear strengths. However, when prodding along both communities you will find that the strength of one leads to the dark side of the other and vice versa. Hence the heated debates as to which is best. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/62994",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12750/"
]
} |
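As a small illustration of the stylistic gap described above (a generic sketch, not code taken from either community): a Java-trained developer will often wrap every attribute in getters and setters, whereas idiomatic Python exposes plain attributes and reaches for a property only when extra behaviour is actually needed.

```python
# Java-flavoured Python: boilerplate accessors around a "private" field.
class AccountJavaStyle:
    def __init__(self, balance):
        self._balance = balance

    def get_balance(self):
        return self._balance

    def set_balance(self, balance):
        self._balance = balance


# More Pythonic: expose the attribute directly and introduce a property
# only once validation or computed behaviour is genuinely needed.
class Account:
    def __init__(self, balance):
        self.balance = balance
```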
63,005 | I'm preparing questions for job interviews for a senior development position. The job would include object oriented design, and the existing software uses design patterns, so I'd like to ask the candidates to explain a few design patterns they know, they've used, how they've used them, why they've used them and so on. However, in previous interviews when I asked senior developers with at least 5-10 years experience about design patterns, almost none ever heard of them. I think two out of twenty developers could name a single design pattern (Singleton and MVC, respectively). So my question is: does it make sense to ask these questions? Or is this such an obscure subject that you cannot expect new hires to know them already? Should a senior developer have prior experience with design patterns, or would you say that design patterns are such a simple topic that every decent developer can pick them up during training? If so, what questions would you ask instead to gauge their design abilities? Add After reading the answers so far, I should give a few clarifications: The job is for a .NET developer with experience in OOP/OOD The existing code uses class names like IParameterGraphVisitor and IStorageFactory in many places How do you ask people about their past experiences with OO designs they created, if they don't have the vocabulary to explain their designs? That's what I want to do, and all I can come up with is "please you draw the design/object hierarchy of your last project on the whiteboard". | Chances are they do know them. They just may not know them as 'design patterns'; ie, they may not be familiar with the academic terminology for such things. What you view as a 'state machine' might simply be a common sense approach to a problem to an older, more experienced programmer. I never paid much attention to 'design patterns', for example, but when I learned what a State Machine was, I had to laugh because I'd been doing that for years. Who knew I was such an academic? I always just considered it basic coding skill, and not a 'design pattern'. The point is not assuming your experienced developers know the text-book terms for things; instead, ask them how they would structure classes, or how they would approach a task. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/63005",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/14237/"
]
} |
63,028 | Which is better for performance: writing a loop iteratively (e.g. with for or while) or writing it as recursion? | Depends on a lot of factors. For the vast majority of applications, whatever is easier for a human being to understand is the proper choice. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/63028",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/21461/"
]
} |
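A hypothetical Python example of the trade-off discussed above: both functions compute the same value, but CPython performs no tail-call elimination, so the recursive form pays call overhead and is bounded by the recursion limit; in most applications readability, not raw speed, should decide.

```python
def factorial_iterative(n):
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

def factorial_recursive(n):
    # Readable, but each call adds a stack frame; without tail-call
    # elimination, very large n will hit the interpreter's recursion limit.
    return 1 if n <= 1 else n * factorial_recursive(n - 1)

assert factorial_iterative(10) == factorial_recursive(10) == 3628800
```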
63,042 | I've been a freelance developer for many years and am setting up a system designed to attract clients through one of my developer websites. If successful, these would be clients I do not know, and some jobs may be very small. When working with clients I know on larger projects, I'm accustomed to doing the work before I get paid. But, here, that seems risky. I'm looking for suggestions on the payment schedule I should require for small projects. I suspect potential clients will not be willing to pay the full amount up front, and so the solution will probably be some sort of compromise. | Up-Front + Milestones Typically, you should have some payment up-front to begin work. Then, have milestones where you deliver some part of the project to them, or show an update on progress, at which point the next payment is due. This way you have the incentive to keep working on the project, because you don't get paid the full amount until you deliver, and they have the incentive to actually pay you along the way because they won't get the full product if you walk out because they stopped paying you mid-stream. As with anything, this is negotiable, and goodwill comes into play depending on clients/relationships, etc. Milestone and upfront payment percentages can vary widely, so use your own discretion as to both what's fair and to what degree you are willing to risk not getting paid. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/63042",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17425/"
]
} |
63,174 | Ever heard of a company awarding bounty points for bugfixes? Making team members work to do the most, as some bonus money will be dependent on it? Having them split, or pay bounties to others to test, so they don't 'lose' points when it reaches a UAT environment? I heard this from management for the first time today. To me, it sounds like a pretty ridiculous approach to software development. It sounds as if the team members will be competing with each other in a non-productive way, essentially slashing the productivity they've got going now. Any thoughts? Have you ever seen something like this actually work? Did it help or hurt the team? UPDATE 4/6/11: This approach was cancelled due to feedback from the whole team! I'm happy to learn that it wasn't just me thinking it wasn't a great idea. | Yes, I really have seen this. . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/63174",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/21525/"
]
} |
63,250 | I consider myself a high-level software developer. I enjoy reading a lot, and it's helped me over the course of my career. I think I am doing well. Right now, I spend a lot of time learning new things. I don't suck when it comes to writing code right now, but I'm about to start a family, and I regularly see many seniors with 14-15 years of experience who, because they cut back on learning new things, now suck at programming. They were inspiring figures at some point in time, but they are not anymore. You might argue that basics never change, but it does appear to make a difference when you have been coding in Delphi for 10 years and suddenly everyone is using the .NET framework. It's true that an experienced developer will take less time when learning a new framework, but it still does demand time and effort. How does a software developer manage the demands of the job while still being able to concentrate on things that necessarily take you out of the job, like starting a family? | Something you said stood out: "I regularly see many seniors with an experience of 14-15 years... they now suck at programming". That's a pretty broad brush stroke you are using to paint people with experience. I'd like to point out a couple of things to consider: Younger/less experienced practitioners love to point out how their seniors fail to do X or Y, when they fail to understand that experience has shown that those were bad ideas. Yet each new generation of practitioners seems to want to repeat those mistakes. That phenomenon is common to all professions, not just programming. Not all people who have been working a number of years are experienced, mature, or good. It takes effort to become better. A lot of effort put in when you are younger builds a good body of experience from which you can draw later. Perhaps the people you are referring to were never good. It's just as possible that they are looking at you thinking, why do you insist on doing things the hard way? It is true, however, that as you start a family you have much less time to keep up with new toys. You actually have less time as your kids get older than you do when they are younger. Toys don't make you a better programmer. Neither do tools. What makes you good is the ability to break down problems and build a working solution. What makes you great is the ability to teach others to be good. That's where experience starts to shine. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/63250",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4779/"
]
} |
63,316 | I'm looking for a good analogy or metaphor that could illustrate the problems of copy-paste programming to non-programmers. I occasionally do code/system reviews for potential clients, and one of the common problems I see are vast amounts of copy-paste code all over their code bases. It's something I routinely call out in the reviews, and each time I have to explain why this is a problem (this is especially difficult with clients who know just enough about programming to understand that reuse is a good thing, but not enough to understand why copy-paste isn't a good form of reuse).
Obviously, I can (and do) explain the problem in terms of code maintenance, but it would be nice to have a good, concise analogy for this problem that would hit home with non-programmers. Bonus if the analogy illustrates why search-and-replace is not an effective solution for this problem.
Any suggestions? Just to clarify (based on Jaroslav's answer below) - I'm not talking about using code snippets here; what I see (disturbingly often) is copying-and-pasting of vast swaths of code, or a ten-line piece of code to get some user data (complete with inline SQL query) pasted into dozens of PHP or ASP.NET pages. So, duplicate code from elsewhere in the same project. Update: There are several really good answers here; I've explained in the comments why I chose Scott Whitlock's answer, but I would also highly, highly recommend whatsisname's answer if you're dealing with customers who are familiar with manufacturing at all. | It's like this... you have one clock in your house. Great! You know what time it is, but you always have to go to that one room to look at it. But of course you want to know what time it is without going to that room all the time, so you buy some more clocks, and you distribute them around your house. Each of these clocks are independent. They all keep their own time. This means: When the time changes due to daylight savings time, you have to change all of them Even when they're all set, they're all a bit different and rarely agree perfectly. Over time they drift. Now imagine the same problem in a large facility with dozens or hundreds of clocks. That's why you need something like this networked clock that keeps itself in sync with a central time base. That way the time is defined once and only once . Copy-paste programming is like buying more independent clocks. It doesn't scale. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/63316",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2498/"
]
} |
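In code, the "many independent clocks" analogy above corresponds to the same rule pasted into several places; this hypothetical Python sketch (the names and the 8% rate are invented for illustration) shows the duplicated form next to the single-definition form:

```python
# Copy-paste version: the tax rule lives in every page that needs it,
# so a rate change has to be hunted down copy by copy.
def checkout_total(price):
    return price + price * 0.08

def invoice_total(price):
    return price + price * 0.08

# DRY version: one definition, one place to change it.
TAX_RATE = 0.08

def total_with_tax(price):
    return price + price * TAX_RATE
```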
63,416 | Projects that increase online anonymity, such as Tor and bitcoin, allow personal freedom from government oppression but also allow crimes to go unpunished. As a developer, should I contribute to such projects?
Is there a way to enforce that those technologies will not be used for money laundering and child pornography? | This sounds like the principle of double effect. This is when a person takes an action that has two consequences, one positive and one negative. There are four conditions that are generally needed for the action to be considered moral: The action itself must be good or neutral. Developing code for anonymity meets this condition. The bad effect must not be the means by which the good effect is achieved. Anonymity (the good effect) is not achieved by people committing crimes. So again, this is fine. The intention must be the good effect, not the bad effect. You clearly don't intend for people to abuse your code, and want them to use it for good. So this is okay, again. The good effect must be at least as important as the bad effect. This is the only one I can see being even questionable in your situation. In other words, the final question is: Do you think that, overall, more good will be done with software for anonymity than harm? If so, you are in the right to continue to develop it. Personally, I think software for anonymity probably does more good than ill, but I'm no expert. I don't think the "why not, if you don't, someone else will do it" argument holds water. If developers hold themselves to high standards of ethics, unethical software will be written more slowly and ethical software to defend against it will have a better chance of doing its job. Also, writing unethical code numbs us so we are less likely to recognize future ethical dilemmas, and slowly degrades our personal dignity. However, I don't think that this is a case where you need to be concerned; you will be working to make this software for good, with good reason to think it will do primarily good. You are in the right for the same reason that a person making a taser designed for self-defense is in the right. Sure, it could be misused - but in general, it is a tool designed for good. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/63416",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2328/"
]
} |
63,549 | In the real world, why do we need to implement method-level security? We either have a web application or a desktop application, where the user accesses the user interface (and therefore cannot access the method directly). So where does accessing methods directly come into the picture here? Edit: I ask this question because I am experimenting with Spring Security, and I see users being authorized to access methods, with something like: @Secured("ROLE_ADMIN")
public void update() {
    // update
} | In a properly designed application the backend and frontend are disconnected.
The backend security system can't assume any specific frontend will correctly handle security, so it has to handle it itself. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/63549",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17887/"
]
} |
63,859 | Python 3 was released in December 2008. A lot of time has passed since then but still today many developers hesitate to use Python 3. Even popular frameworks like Django are not compatible with Python 3 yet but still rely on Python 2. Sure, Python 3 has some incompatibilities with Python 2 and some people need to rely on backwards-compatibility. But hasn't Python 3 been around long enough now for most projects to switch or start with Python 3? Having two competing versions has so many drawbacks; two branches need to be maintained, confusion for learners and so on. So why is there so much hesitation throughout the Python community about switching to Python 3? | Note that I'm no longer updating this answer. I have a much longer Python 3 Q & A on my personal site at http://python-notes.curiousefficiency.org/en/latest/python3/questions_and_answers.html Previous answer: (Status update, September 2012) We (i.e. the Python core developers) predicted when Python 3.0 was released that it would take about 5 years for 3.x to become the "default" choice for new projects over the 2.x series. That prediction is why the planned maintenance period for the 2.7 release is so long. The original Python 3.0 release also turned out to have some critical issues with poor IO performance that made it effectively unusable for most practical purposes, so it makes more sense to start the timeline from the release of Python 3.1 in late June, 2009. (Those IO performance problems are also the reason why there are no 3.0.z maintenance releases: there's no good reason anyone would want to stick with 3.0 over upgrading to 3.1). At time of writing (September 2012), that means we're currently a bit over 3 years into the transition process, and that prediction still seems to be on track. While people typing Python 3 code are most regularly bitten by syntactic changes like print becoming a function, that actually isn't a hassle for library porting because the automated 2to3 conversion tool handles it quite happily. The biggest problem in practice is actually a semantic one: Python 3 doesn't let you play fast and loose with text encodings the way Python 2 does. This is both its greatest benefit over Python 2, but also the greatest barrier to porting: you have to fix your Unicode handling issues to get a port to work correctly (whereas in 2.x, a lot of that code silently produced incorrect data with non-ASCII inputs, giving the impression of working, especially in environments where non-ASCII data is uncommon). Even the standard library in Python 3.0 and 3.1 still had Unicode handling issues, making it difficult to port a lot of libraries (especially those related to web services). 3.2 addressed a lot of those problems, providing a much better target for libraries and frameworks like Django. 3.2 also brought the first working version of wsgiref (the main standard used for communication between web servers and web applications written in Python) for 3.x, which was a necessary prerequisite for adoption in the web space. Key dependencies like NumPy and SciPy have now been ported, installation and dependency management tools like zc.buildout , pip and virtualenv are available for 3.x, the Pyramid 1.3 release supports Python 3.2, the upcoming Django 1.5 release includes experimental Python 3 support, and the 12.0 release of the Twisted networking framework dropped support of Python 2.5 in order to pave the way for creating a Python 3 compatible version. 
In addition to progress on Python 3 compatibility libraries and frameworks, the popular JIT-compiled PyPy interpreter implementation is actively working on Python 3 support. Tools for managing the migration process have also improved markedly. In addition to the 2to3 tool provided as part of CPython (which is now considered best suited for one-time conversions of applications which don't need to maintain support for the 2.x series), there is also python-modernize , which uses the 2to3 infrastructure to target the (large) common subset of Python 2 and Python 3. This tool creates a single code base that will run on both Python 2.6+ and Python 3.2+ with the aid of the six compatibility library. The Python 3.3 release also eliminates one major cause of "noise" when migrating existing Unicode aware applications: Python 3.3 once again supports the 'u' prefix for string literals (it doesn't actually do anything in Python 3 - it's just been restored to avoid inadvertently making migrating to Python 3 harder for users that had already correctly distinguished their text and binary literals in Python 2). So we're actually pretty happy with how things are progressing - there are still nearly 2 years to go on our original time frame, and the changes are rippling out nicely through the whole Python ecosystem. Since a lot of projects don't curate their Python Package Index metadata properly, and some projects with less active maintainers have been forked to add Python 3 support, purely automated PyPI scanners still give an overly negative view of the state of the Python 3 library support. A preferred alternative for obtaining information on the level of Python 3 support on PyPI is http://py3ksupport.appspot.com/ This list is personally curated by Brett Cannon (a long-time Python core developer) to account for incorrect project metadata, Python 3 support which is in source control tools but not yet in an official release, and projects which have more up to date forks or alternatives which support Python 3. In many cases, the libraries that are not yet available on Python 3 are missing key dependencies and/or the lack of Python 3 support in other projects lessens user demand (e.g. once the core Django framework is available on Python 3, related tools like South and django-celery are more likely to add Python 3 support, and the availability of Python 3 support in both Pyramid and Django makes it more likely that Python 3 support will be implemented in other tools like gevent). The site at http://getpython3.com/ includes some excellent links to books and other resources for Python 3, identifies some key libraries and frameworks that already support Python 3, and also provides some information on how developers can seek financial assistance from the PSF in porting key projects to Python 3. Another good resource is the community wiki page on factors to consider when choosing a Python version for a new project: http://wiki.python.org/moin/Python2orPython3 | {
"source": [
"https://softwareengineering.stackexchange.com/questions/63859",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/20259/"
]
} |
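As a small, hedged illustration of the "single code base" porting approach mentioned above (python-modernize plus the third-party six library), code written against the common subset of Python 2.6+ and 3.x typically looks something like this; the function itself is an invented example.

```python
# Intended to run unchanged on Python 2.6+ and Python 3.x.
from __future__ import print_function, unicode_literals

import six  # third-party compatibility library

def greet(name):
    # String literals are unicode on both interpreters thanks to
    # unicode_literals, and six.text_type papers over str/unicode.
    message = "Hello, {0}!".format(name)
    assert isinstance(message, six.text_type)
    print(message)
    return message
```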
63,890 | I'm taking a software design class where I should choose a piece of open source software to analyze from the Software Design point of view. It has to be a big project: not less than 100,000 lines of code. I would really like to choose software that is very well designed and architected, to gain good insights into good software design. By good design I mean things like meaningful classes and architecture, good use of (design) patterns, good use of abstraction, good organization of components, high cohesion and low coupling between components, etc... Do you have any software to suggest to me? Note that the software just needs to have a good design; the design does not need to be documented! :) It does not need to be an application for the end user... It can also be a library, a tool, etc... | "By good design I mean things like meaningful classes and architecture, good use of design patterns, good use of abstraction, good organization of components, high cohesion and low coupling between components" - First, a piece of software, good or bad, doesn't live in solitude: it models a real world scenario which humans conceive of as a problem, and thus is always associated closely with something called an "application domain". So, whenever you talk about software, first know and study the domain, for only then can you attain the discretion of good and bad.
git - not just good, but an amazing design. It is not a version control system at its core, just a file system. A thin veneer of functionality on top of the core makes it a version control system. Get to know the internals of git, and your sense of software design will be enlightened. jQuery - not a very well (internally) documented library, but an inspiring source demonstrating how client-side JavaScript code can do wonders. NodeJS - if you're into making servers, this project has refreshingly new ideas and patterns to offer. v8 - very good C++ code, a fantastic library for learning/studying virtual machine implementations. NoSQL projects - Couch, Mongo, Redis, Cassandra - these projects demonstrate smart ways to solve persistence problems. They also embrace the idea of polyglot persistence. Boost libraries - a good dose of C++. OpenStack - very good projects on cloud computing and virtualization. The Apache Software Foundation - choose any of their projects and study it. HTTPd's modular structure is a great source if you want to see how components come together. APR (apache portable runtime) - a really good lib also. mod_wsgi - one of the best C programs I've come across.
"Good use of design patterns" - it is NOT important for the code to correspond to a well known design pattern; it is more important that it solves the problem "smartly" - that it is maintainable, reusable and readable. If code is crammed into a particular "shape" just to adhere to a design pattern, it can be bad code. "Not less than 100,000 lines of code" - since when did the number of lines become a metric of good quality? Getting a taste of "well designed/architectured software" doesn't require it to be BIG. Again, remember to study the nature and nuances of the problem domain first, and then delve into reading the code. UPDATE: Oct. 2015 InfluxDB -- https://influxdb.com/ This Go project is under active development, and is still NOT very complex. So you can get started with digging into the code relatively more easily than with something like OpenStack. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/63890",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/21729/"
]
} |
63,908 | The question is about default values in general - default function return values, default parameter values, default logic for when something is missing, default logic for handling exceptions, default logic for handling edge conditions, etc. For a long time I considered default values to be a "pure evil" thing, something that "cloaks the catastrophe" and results in bugs that are very hard to find. But recently I started to think about default values as some sort of technical debt... which is not a straight-up bad thing but something that could provide some "short term financing" to get us to survive the project (how many of us could afford to buy a house without taking out a mortgage?). When I say "short term" - I don't mean "do something quickly first and refactor it out later before it hits production". No - I am talking about relying on hardcoded default values in production software. Granted - it could cause some issues, but what if it is only going to cause a single problem in a whole year? Again - I am talking about the "average" mainstream software here (not software for a nuclear power station) - the average web site or a UI application for accounting software, meaning that people's lives are not at stake, nor millions of dollars. Again, from my experience, business users would rather live with software which "works somehow" than wait for a perfect one. And the use of default values helps a lot if you develop software in a RAD style. But again - the longest debug sessions I have spent were because of bugs introduced by a default value which either stopped being "a default" along the way, or because a small subsystem had recently been upgraded and as a result of this upgrade it did not handle the default correctly (e.g. empty list vs null, or null string vs empty string). So my question is - are default values good or evil? And if they are technical debt - how do you measure how much you can borrow so you can afford the repayments? | Take for example a library that implements the FTP protocol. By default FTP is expected to operate on port 21. Now I would be very irritated if I had to specify port 21 every time I construct an object of a random FTP class. If I need a different port, let me specify it. Defaults are perfectly fine when they are sane defaults. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/63908",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
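The FTP example in the answer above maps directly onto a keyword argument with a sane default; here is a minimal Python sketch in which the class and parameter names are illustrative, not a real library's API.

```python
class FtpClient:
    def __init__(self, host, port=21, timeout=30.0):
        # Port 21 is the conventional FTP control port, so it makes a
        # sane default; unusual setups can still override it explicitly.
        self.host = host
        self.port = port
        self.timeout = timeout

# The common case stays short and obvious...
client = FtpClient("ftp.example.com")
# ...while the uncommon case is explicit and visible at the call site.
debug_client = FtpClient("ftp.example.com", port=2121)
```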
64,018 | Recently I was asked to show "a page with code" for a job interview. Being mainly a back-end programmer, and that's the position I applied for, I first said to the person I was talking to exactly that: PHP is executed at the server and therefore not visible by just giving a "page". However, following their desire, I sent links to the pages I've worked on before. Obviously they couldn't see anything except for the HTML, CSS, JS... They said it was not enough, they could not see the PHP. Understanding that they probably just wanted to know my skills and/or interest I sent them my Stack Overflow profile. Among all my questions and answers, most of them with code, certainly the PHP is there. But it seems this is not what they wanted. Well, I don't have any code put together that I can simply publish for someone to see. And I would never do it for the code I have deployed, obviously. So my question is/are: What does "send us a page with code" mean? What should I send? Is this a typical interview requirement? | It means send some source code. It is a quite common interview candidacy request. You should do it. It doesn't have to make much sense to them. They just want to see some basic flow and good coding style. A long time ago during my job searching, I solved a bunch of old ACM programming contest questions in a variety of languages. I use those for code samples. Regardless how this job prospect turns out for you, I'd recommend putting together some samples for your next prospect. When we've considered people in the past, I always ask to see some code. I don't even bother compiling or executing it or anything, I am more interested to see structure, commenting, and that it doesn't look like this sort of stuff . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/64018",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/14342/"
]
} |
64,248 | How would you distinguish the man from the machine? | I'd just ask him, "If you could pose a question to a Turing test candidate, what would it be?" | {
"source": [
"https://softwareengineering.stackexchange.com/questions/64248",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/18476/"
]
} |
64,388 | ...One of the most important features a new employee should have is compatibility with the spirit of the people who already work there. {...} I am convinced that gaining insight into the developer's real personality is just as important as checking for professional competence, because one bad fit can destroy an entire team. from Hiring Developers - You're Doing It Wrong Is this true? And, if so, is it true for managers too? | More than you realize. Think about it, if you have to fight with someone every day, it gets exhausting. When that happens, only a few options are available to you: Give up and let the bad fit do what they want all the time, even if it destroys the project. Leave the team/company to avoid the bad fit. Fire the bad fit and hire someone you can work with. Continual arguing demoralizes the team, and having to justify every little pedantic point sucks up real productivity. That one person is like that with everyone, so you might see the attrition rate of your team increase while the irritant is still there. It gets tricky with managers, because you don't have the authority to fire them. In an ideal situation if someone is acting up, sit down with them and tell them nicely but firmly that their actions are destructive. Explain how you would like to see their behavior change to help the team get along better. If they receive it well, great. If not, resort to one of the three options above, because it'll get worse. One thing's for sure, unless you bring it to their attention, there's no chance of change. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/64388",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1996/"
]
} |
64,394 | Can having too many senior programmers in one team turn out to be a bad thing? Say, 4-5 senior programmers in a team of 6-7 people.
What is the optimal number/ratio in this kind of situation? Can this lead to too much philosophy and arguments about ideas? Has anyone had such an experience that they can share with me? | If I could choose I would have 6-7 seniors on a team (assuming the project needs that many). The only time I can see this being a problem is if the seniors are only senior in self-perception and not work ethic. There is nothing better than working with a group of people who appreciate that every piece of software development is important - the documentation, the planning, the code, the coffee - it all matters, and it takes mature (real senior) developers to be "above nothing" and get the job done properly. EDIT: Many other answers have said that too many leaders is a problem - but why is there a perception that a senior must lead? A senior should be mature enough to pick a leader and follow. It is the project that matters - pick/get a role and rock it silly! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/64394",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/18933/"
]
} |
64,449 | Whenever I find myself writing the same logic more than once, I usually stick it in a function so there is only one place in my application I have to maintain that logic. A side effect is that I sometimes end up with one or two line functions such as: function conditionMet(){
return x == condition;
} OR function runCallback(callback){
if($.isFunction(callback))
callback();
} Is this lazy or a bad practice? I only ask because this results in a greater number of function calls for very tiny pieces of logic. | Hehe, oh Mr Brown, if only I could persuade all the developers I meet to keep their functions as small as this, believe me, the software world would be a better place! 1) Your code readability increases tenfold. 2) It's so easy to figure out the process of your code because of the readability. 3) DRY - Don't Repeat Yourself - you're conforming to this very well! 4) Testable. Tiny functions are a million times easier to test than those 200-line methods we see way too often. Oh, and don't worry about "function hopping" in terms of performance. "Release" builds and compiler optimisations take care of this for us very nicely, and performance is 99% of the time somewhere else in the system's design. Is this lazy? - Very much the opposite! Is this bad practice? - Absolutely not. Better this way of making methods than the tar balls or "God Objects" that are all too common. Keep up the good work, my fellow craftsman ;) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/64449",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/21907/"
]
} |
64,685 | I think most web developers will agree that it is often easier getting something to work well in most browsers, but not as much in IE. I was wondering: When designing/developing a website, how important do you think IE-friendliness is? I mean, how worth it is it to bust yourself trying to make a website that works perfectly in major browsers work the same in IE? | If you are talking about anything, that isn't just your small, private programming-related site, then: Yes, you need to support the IE. If you develop a commercial site and it doesn't display in IE, you are going to lose many potential clients. Furthermore: Really, it isn't that much work to make your designs IE compatible (unless you want to support IE6, which I personally don't do). Your site doesn't have to look exactly the same. But the basic functionality should be there. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/64685",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/20127/"
]
} |
64,727 | From a developer point of view which platform would you consider for a large social web application? If you could provide some details on what you consider to be the strengths of which alternative it would be great. | I've written the same app on GAE (Python and now Java) and Azure. I'll probably continue to use both, for different things. Here are a few thoughts that I'll keep updating: Reasons to use GAE: You essentially get one free VM's worth of use per day. With Azure, you pay almost $100 each month, even if you don't have a single website visitor. If your db goes over 1GB, you pay an extra $90 ($9->$99) for storage. Update: Azure now has various VM and DB sizes at different price points. Details here . GAE's payment is reasonably fine-grained - most resources are charged per request/GB/MB, again with a free daily allocation for most resources. However, in November 2011 it joined Azure and AWS in charging for the web server per instance-hour. Details here . GAE has the lightest admin load. Once you're setup, deploying and re-deploying is quick and they'll auto-everything. For example, you don't worry about how many servers your app is using, how to shard the data, how to load-balance. Mail just works. At the time of writing, Azure doesn't offer SMTP out so you need a 3rd party server. Great integration with many of the Google offerings - calendars, mail, whatever. You can delegate user management to Google if you don't want control over your user base. With GAE you know any features they add to the store, you'll get. With Azure, you get the feeling Sql Azure Database will get most of the love but it'll be more expensive. Azure Storage is likely to have the most gotchas. No relational integrity, no order-by, you'll fiddle with the in-memory context more. GAE's store has far fewer restrictions and more features than Azure Tables. Good choice if you're using Python or JVM-based languages already. Many languages compile to Java bytecode nowadays. Updating the app is very fast. For Python, I had a shortcut key setup and it took no time at all. I now use the Eclipse Plugin for Java and it works very well. Azure is more fiddly. A locally tested app will probably run on the cloud without (much or any) changes. With Azure, the config is different and I spent some time stopping-deleting-building-uploading-starting before I got it right. GAE has a great UI that includes a log viewer a data editor. With Azure, you currently have to find external viewers/editors for this. GAE lets you have multiple versions of your application running on the same datastore. You can deploy, test a version and then set the current 'live' version when you're ready. You can change back if something goes wrong. Reasons to use Azure: The performance characteristics and cost implications of App Engine's datastore will surprise you. If you do anything other than simple CRUD you'll need to work harder than you would with a normal DB. No ad-hoc queries. Azure has two approaches to storage, offering more choice. They are SQL Azure Database (SAD) which is a relational DB, and Azure Storage, which consists of non-relational tables, blobs and queues. If you have an investment in SQL Server then SAD will be easy to move to, but is quite costly and might be less scalable. Update: App Engine has a MySQL API in limited beta. Azure seems to be better designed if you have a SOA-type approach. Their architectures seem to benefit from experience in the enterprise world. GAE seems more focused on simply serving web pages. 
You can run the app under debug, put in breakpoints, etc. Azure has a "staging" environment where you can deploy to the cloud, but not make it live until you're happy it works. I'm using .Net for other things, and integrating them with .Net on the backend is much easier than with GAE. (Update - using Java on GAE works fine, and the 10-second timeout is now 30 seconds). Integration with many MS "Live" offerings. So, no obvious answers. I'm defaulting to App Engine at the moment because of costs and ease of use. I might use Azure for very MS-oriented apps. I use Amazon S3 for downloads but likely won't use EC2 because I prefer leaving everything under the application level to the experts. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/64727",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
64,781 | This question relates to a one I asked earlier: https://stackoverflow.com/questions/5448574/under-what-conditions-could-we-justify-an-attempt-to-introduce-a-one-size-fits-al When re-reading "Domain-Driven Design: Tackling Complexity in the Heart of Software" by Eric Evans, I noticed that the possibility of achieving a common terminology (domain language) is taken for granted. Where does a domain language begin? Which code layers (or whatever the grouping) should reflect this domain language? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/64781",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17438/"
]
} |
64,818 | I'm working on a project that has a very tight schedule. I don't have much time to code and test (even though I work more than 12 hours every day, it's still delayed), and the result is very fragile. Its code is also in bad shape. This program is used by all offices in our customer's company, which is located in many countries. I regularly get phone calls at midnight about errors from our user/tester, or about them not knowing how to use some features. After three years on this project, I feel very stressed and I can't sleep well because I'm very worried about errors and phone calls. I have a few questions: For three years, all the code I've written is just the perfect-usage-scenario code (so it breaks easily). It's poorly designed and doesn't have any unit tests. I have lots of problems because of this fact. Therefore, I want to know whether it's feasible to write code that works when the project has a very tight schedule. How can I write better code in the same amount of time? How can I clear my mind and not worry about work when I go to sleep? | Ban phone calls If your users are across the globe they surely can't expect you to pick up a phone when it's 4 AM and you're in bed. I would ban phone calls and switch to other means of communication that can serve this scenario better (email or some issue tracking DB). But even at the office, set a schedule for when you can be reached by phone. Otherwise you can't do anything during the time you're at the office. This will get you valuable sleep and rest. Tight schedule If this project has been tightly scheduled for three years, somebody must have suspected that things aren't really working. Maybe it's about time somebody tells the planners, and especially your users/client and your managers, that this is a death-march project. It's been in development for three years, it's delayed and it's full of bugs. The plan should be completely re-evaluated, existing code should be refactored, and new features shouldn't be developed until numerous issues are resolved. Order from chaos Establish a development methodology that will make things predictable and bearable for you. If you're a developer then serving phone calls as they come in doesn't allow you to do any work. Every interruption takes you 15 minutes to get back to where you left off. Phone calls should be off. At least at your desk, because you're a developer. If you can redirect phone calls to someone else so they won't bother you after every call, then do that. Establish some sort of incident/bug database. Take some time every morning when you get to work and prioritise new incidents (yourself, your team or with your client/manager). Try to solve them in this priority order afterwards and don't even try to think of phone calls. What if What if you can't turn off your phone and you can't tell your users they can't call you whenever they want? If you have your user's phone number I suggest you do the opposite: when they call you, make a note and inform them you'll call them back when it's solved. Then call them back when they're sleeping. If they tell you that they're sleeping, remember their reply and use it when they call you in the middle of the night the next time. People usually understand their own language better. If they use an office phone and you use a mobile phone, so you can't call them outside working hours but they can call you, then start switching off your mobile phone after you leave the office. You've been there for 12 hours and you deserve to be off work.
If the mobile phone is your personal one, then your company should get you a new one and you should inform your users/clients about it. If they start calling you on your personal one afterwards (because they can't reach you on your business one), you either: don't pick up, or have it answered by a friend of yours informing them that it's a wrong number or that the original user of this number isn't using it any more. The most important thing Don't develop any new functionality until you resolve existing issues. At least the high- and medium-priority ones. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/64818",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12791/"
]
} |
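The "Order from chaos" advice above boils down to a simple routine: collect incidents in one place, rank them once a day, and work strictly in that order. A minimal Python sketch of that routine follows; the Incident fields and priority levels are invented for the example and do not come from the answer's tooling.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import IntEnum


class Priority(IntEnum):
    HIGH = 1      # fix before anything else
    MEDIUM = 2
    LOW = 3


@dataclass(order=True)
class Incident:
    priority: Priority               # compared first when sorting
    reported: date                   # older incidents win ties
    summary: str = field(compare=False)


def morning_triage(backlog: list[Incident]) -> list[Incident]:
    """Rank the backlog once per day: highest priority first, oldest first."""
    return sorted(backlog)


backlog = [
    Incident(Priority.LOW, date(2011, 3, 1), "Typo on the settings page"),
    Incident(Priority.HIGH, date(2011, 3, 3), "Nightly export crashes"),
    Incident(Priority.MEDIUM, date(2011, 3, 2), "Slow report generation"),
]

for incident in morning_triage(backlog):
    print(incident.priority.name, incident.reported, incident.summary)
```

The point of the sketch is only the ordering discipline: new interruptions go into the backlog instead of onto your phone, and the ranking happens once per morning rather than on every call.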
64,822 | I feel like a reasonably qualified programmer, but a lot of job postings I run into make me feel otherwise. Almost all of them separate qualifications into requirements and desirables, but even the requirements part can be daunting. I've seen a lot of postings that say they require several years (2 or more) of experience in a relatively small technology or library, something specific to their company. Other times I see 5 or even 7+ years of experience required for a language. On their own some of these would be OK, but it gets ridiculous when a small-town company says you need 3 years in 2 languages, proficiency in network programming, scripting, databases, and stuff like "experience with large, highly redundant, business-critical systems" all at the same time. Do they really expect to find someone who has extensive experience working with exactly the same technology set they use? I have a hard time finding a single posting where I don't have at least 1 or 2 holes in my skill set. I've heard over and over that most places value your ability to learn quickly and will teach you on the job, but then why say it's required? Are they just trying to discourage the bottom of the barrel (FizzBuzz failures) from applying? | Yeah, they definitely do. However, I usually go by the 75% rule: if I feel I know at least 75% of the requirements, then I'll go ahead and apply. Everything else they can just train me on. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/64822",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7974/"
]
} |
64,926 | Say you are designing a square-root method, sqrt. Do you prefer to validate that the parameter passed is not a negative number, or do you leave it up to the caller to make sure the parameter passed is valid? How does your answer vary if the method/API is for third-party consumption, or if it is only going to be used in the particular application you are working on? I have been of the opinion that a method should validate its parameters; however, The Pragmatic Programmer, in its Design by Contract section (chapter 4), says it's the caller's responsibility to pass good data (pp. 111 and 115) and suggests using assertions in the method to verify the same. I want to know what others feel about this. | In general, I design my APIs as follows: 1. Document the methods well, and encourage the callers to pass good/valid data. 2. Validate the parameters anyway! - throwing exceptions when the preconditions are not met. I would say that parameter validation is necessary on most public-facing APIs.
Parameter validation on non-public methods is not as important - it is often desirable to have validation occur only once, at the public 'entry point' - but if you can live with the potential performance hit, I like to validate parameters everywhere, as it makes code maintenance and refactoring a bit easier. (A minimal sketch of this approach follows this entry.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/64926",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3755/"
]
} |
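The "document well, but validate anyway" approach is easy to see in code. Below is a minimal Python sketch: the public sqrt validates its argument and throws, while a private helper trusts its already-validated input and only asserts the precondition. The helper name and the Newton iteration are illustrative choices, not something prescribed by the answer.

```python
def sqrt(value: float) -> float:
    """Public entry point: the docstring states the precondition, but we enforce it too."""
    if isinstance(value, bool) or not isinstance(value, (int, float)):
        raise TypeError(f"value must be a real number, got {type(value).__name__}")
    if value < 0:
        raise ValueError(f"value must be non-negative, got {value}")
    return _newton_sqrt(float(value))


def _newton_sqrt(value: float) -> float:
    """Internal helper: callers in this module are trusted, so an assert is enough."""
    assert value >= 0, "caller violated the precondition"
    if value == 0.0:
        return 0.0
    guess = value if value >= 1.0 else 1.0
    for _ in range(50):                      # Newton's method converges quickly
        guess = (guess + value / guess) / 2.0
    return guess


print(sqrt(2))       # roughly 1.4142135623730951
# sqrt(-1) raises ValueError instead of silently returning garbage,
# and sqrt("9") raises TypeError at the API boundary.
```

The assert in the helper corresponds to the Design by Contract suggestion from the question, while the exceptions at the public boundary correspond to the answer's "validate anyway" rule: one check at the entry point, cheap assertions everywhere else.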
65,081 | How did you convince your manager to let you unit test? By "unit test", I mean being allowed to develop the tests, check them in to source control, and maintain them over time, etc. Typical management objections are: "The customer didn't pay for unit tests"; "The project does not allow time for unit testing"; "Technical debt? What technical debt?" Do you know other objections? What were your answers? Thanks in advance! | I ran into this problem recently when a customer was on board with our methodology, but higher management got wind that the developers were spending their time testing rather than developing and were concerned about this - after all, they had QA people to do the testing! This is how I dealt with the situation (read my old blog post for more details). I compared our estimated hours against actual hours for the project and then compared our defect rate against other teams' defect rate. In our case these numbers compared favourably and there were no more concerns. My conclusion based on this experience is: ...the best way to convince anyone that your approach to doing something is practical and pragmatic, is to do it and measure it against other approaches. People don't care about dogma, or why you think something should be the best way. Only by showing people the numbers and measuring the effectiveness of your approach can you truly show that your practices are effective. On other projects, we've worked alongside customer developers who didn't create unit tests or do TDD and we've had to maintain tests that they break. However, it becomes very easy to sell the TDD approach to those customer developers when you can tell them what they've broken in the code before they know! So in your case, I would do it by stealth if necessary (perhaps there is a small area of the code that you can start to test that changes often or that you are responsible for), but keep track of your numbers - what is the effort for creating your tests? What is the defect rate? How does this compare with other projects / team members? In my opinion, no-one should need to ask permission or apologize for wanting to do their job properly, and any professional developer should be attempting to test their code with automated tests wherever it's possible and practical. Hopefully it's both of these things in your case. Good luck! (A small sketch of this kind of metric tracking follows this entry.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/65081",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/21183/"
]
} |
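The persuasion technique in the answer above is mostly bookkeeping: record estimated versus actual hours and defect counts, then compare teams. A small Python sketch of that tracking follows, with made-up numbers and field names chosen purely for illustration.

```python
from dataclasses import dataclass


@dataclass
class ProjectStats:
    team: str
    estimated_hours: float
    actual_hours: float
    defects_found: int
    features_delivered: int

    @property
    def schedule_overrun(self) -> float:
        """Fraction by which actual effort exceeded the estimate (0.10 means 10% over)."""
        return (self.actual_hours - self.estimated_hours) / self.estimated_hours

    @property
    def defects_per_feature(self) -> float:
        return self.defects_found / self.features_delivered


projects = [
    ProjectStats("Team with unit tests", 800, 860, 12, 40),
    ProjectStats("Team without tests", 800, 1100, 45, 38),
]

for p in projects:
    print(f"{p.team}: {p.schedule_overrun:+.0%} schedule, "
          f"{p.defects_per_feature:.2f} defects per feature")
```

Whether the numbers come out this favourably is exactly what the answer says you have to measure; the sketch only shows how little tracking is needed to have the conversation with management in numbers rather than dogma.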
65,179 | In the last few months, the mantra "favor composition over inheritance" seems to have sprung up out of nowhere and become almost some sort of meme within the programming community. And every time I see it, I'm a little bit mystified. It's like someone said "favor drills over hammers." In my experience, composition and inheritance are two different tools with different use cases, and treating them as if they were interchangeable and one was inherently superior to the other makes no sense. Also, I never see a real explanation for why inheritance is bad and composition is good, which just makes me more suspicious. Is it supposed to just be accepted on faith? Liskov substitution and polymorphism have well-known, clear-cut benefits, and IMO comprise the entire point of using object-oriented programming, and no one ever explains why they should be discarded in favor of composition. Does anyone know where this concept comes from, and what the rationale behind it is? | Though I think I've heard composition-vs-inheritance discussions long before GoF, I can't put my finger on a specific source. Might have been Booch anyway. <rant> Ah, but like many mantras, this one has degenerated along typical lines: it is introduced with a detailed explanation and argument by a well-respected source who coins the catch-phrase as a reminder of the original complex discussion; it is shared with a knowing part-of-the-club wink by a few in-the-know for a while, generally when commenting on n00b mistakes; soon it is repeated mindlessly by thousands upon thousands who never read the explanation, but love using it as an excuse not to think, and as a cheap and easy way to feel superior to others; eventually, no amount of reasonable debunking can stem the "meme" tide, and the paradigm degenerates into religion and dogma. The meme, originally intended to lead n00bs to enlightenment, is now used as a club to bludgeon them unconscious. Composition and inheritance are very different things, and should not be confused with each other. While it is true that composition can be used to simulate inheritance with a lot of extra work, this does not make inheritance a second-class citizen, nor does it make composition the favorite son. The fact that many n00bs try to use inheritance as a shortcut does not invalidate the mechanism, and almost all n00bs learn from the mistake and thereby improve. Please THINK about your designs, and stop spouting slogans. </rant> (A short sketch contrasting the two tools follows this entry.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/65179",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/935/"
]
} |
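For readers who want the two tools from the question side by side, here is a small Python sketch (an illustration only, not code from the answer). The inheritance version is substitutable wherever the base class is expected, which is the Liskov/polymorphism benefit the question mentions; the composition version wraps an instance and forwards only what it chooses to expose.

```python
class Counter:
    def __init__(self) -> None:
        self.count = 0

    def increment(self) -> int:
        self.count += 1
        return self.count


# Inheritance: a LoggingCounter IS-A Counter and can stand in for one anywhere.
class LoggingCounter(Counter):
    def increment(self) -> int:
        value = super().increment()
        print(f"count is now {value}")
        return value


# Composition: a CappedCounter HAS-A Counter; it forwards what it wants and hides the rest.
class CappedCounter:
    def __init__(self, limit: int) -> None:
        self._inner = Counter()
        self._limit = limit

    def increment(self) -> int:
        if self._inner.count >= self._limit:
            raise RuntimeError("limit reached")
        return self._inner.increment()


def bump(counter: Counter) -> None:
    counter.increment()


bump(LoggingCounter())   # substitution works: it really is a Counter
capped = CappedCounter(limit=2)
capped.increment()
capped.increment()
# capped.increment()     # would raise RuntimeError; passing it to bump() relies on duck typing
```

Neither version is better in the abstract; as the answer argues, they are different tools, and the sketch only shows what each choice buys you: substitutability on one side, a narrower and more controlled interface on the other.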