source_id | question | response | metadata
---|---|---|---|
81,905 | Some companies selling software or libraries simply put their licensing model on their web page and are done with it. (Many also contain a disclaimer that there are volume discounts and special arrangements possible.) Other companies (most notably those placing themselves in the "enterprise" market) don't disclose their fees, some don't even disclose their licensing model! When you are interested in their product and contact sales, you then discover that their price calculation is simply a per site / per developer / per core / per whatever thing and they could just as well have put that info on their webpage. Can fellow programmers give me an insight as to why I have to exchange 3 emails and 2 phone calls with a sales representative to find out if a product even remotely fits into our development price tag? I have found that I (and some of my co-workers) are extremely reluctant to even evaluate a product that doesn't disclose its costs up front, so I cannot understand why anyone would do that for "simple" products - that is, for libraries and software that really doesn't have negotiated license fees. Note again: I'm talking about single-installation software or libraries that are licensed per-site or per-developer and not about stuff that requires complicated licensing agreements. | It's quite simple. It initiates contact between a potential customer and sales. They can then vary their price as they please based on any criteria you care to mention, without affecting the price expectations of other potential future clients. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/81905",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6559/"
]
} |
81,929 | After graduating from college I was hired as a junior programmer a little over a year ago. I quickly noticed that I was orders of magnitude faster than all the other programmers; this seems to be because I simply don't waste time "in general". The majority of other people, however, seem to enjoy staring at the ceiling, browsing YouTube, Facebook, and random websites, and in general doing in a day the work I usually do in an hour. I'm 100% sure they would be able to do that work in an hour too if they focused. I've been quickly promoted to senior developer and more recently to team leader, and now I have replaced a lot of those people with new hires (still a couple to go). The situation is now more acceptable, but still I think it could be much better. I can't help but notice, though, that everyone seems to behave like this is "normal". All my bosses aren't concerned about this and they too seem to work little to nothing. I always have a very hard time finding them, they arrive much later than they are supposed to and leave early. Obviously there is nothing I can do in this case since they're above me, but is this the "norm" in all companies, or did I simply end up in a very bad one (this is my first work experience)? Also, will I "become like them" in a few years? | How do you deduce they are not working? As a junior I typed all day, hacking away at my code, with just 20 minutes for lunch.
The more "senior" I got, the less time I spent typing and the more time I spent thinking. If I "stare at the ceiling" and my producer walks into the room,
she starts to smile, because she knows in half an hour I will have solved a problem that the "juniors" have been trying and failing at for the last few weeks. As a developer, I don't get paid to type. I don't get paid to write code. I do get paid to solve problems. And solving problems works far better if I think before I do. Over the last few years I have seen this tendency to just hack down the first thing that comes to mind and then tweak and debug it until it seems to be doing what you want. (Usually ignoring all the corner cases until they hit you later.) I still remember the mainframe days, where you wrote your code, submitted it and waited for an hour or two until you got the first output. Guess what, you just didn't forget a semicolon or a bracket back then. Do not judge until you have the experience to do so. Please come back in five years and add a comment about what you learned. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/81929",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/27112/"
]
} |
Why is jQuery released under MIT and not LGPL? What are the benefits of using MIT compared to LGPL for a framework? | LGPL is "infectious", which means if you use it, you risk having to (L)GPL your own work too. GPL (and, depending on the circumstances, LGPL as well) practically excludes usage in a closed-source project. The question should really be worded the other way around: Why is product X licensed under (L)GPL rather than MIT / Apache / BSD / Mozilla? The latter are, IMO, much closer to the spirit of "free" software than the GPL family (after all, you can't force freedom on people). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/81947",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/19513/"
]
} |
82,114 | I was going through some tutorials on TortoiseHg. Despite TortoiseHg having a rich GUI, the first examples are given using command-line options.
Does ideal usage involve the command line, or was it presented that way so one has an idea of what is happening under the hood, and does the GUI internally use these commands anyway? | Use what you want. Use what makes sense for what you are doing now. While I'm mostly a command line guy (I use the GUI only to get a graphical representation of the revision graph), I have trouble understanding why you think using the command line gives you a better understanding of what happens under the hood (well, except for Git :)); usually there is a clear mapping between the two, even if the CL may give you access to some little-used features. Now in my opinion, automating what can be automated in the process is part of the job, and for that, the CL is mandatory. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/82114",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1560/"
]
} |
82,159 | Possible Duplicates: Prototyping vs. Clean Code at the early stages Frankly, do you prefer Cowboy coding? After working in a number of companies, I am starting to realize that my commitment to writing high-quality, well-tested software does not necessarily lead to career advancement, especially when managers with non-technical backgrounds do not see the benefits of maintainable code, and prefer developers who have a semblance of high productivity, while racking up mountains of technical debt. That said, I tend to stick to well-written code because I find it intrinsically rewarding - that feeling you get when you know you have done a job well, even though you know you won't get credited for it. In part I can close the productivity gap by adopting different technologies and approaches while still writing clean code - solutions include dynamic languages, polyglot programming, and convention over configuration. But my dilemma remains - does craftsmanship pay off? | Here are some numbers to think about: In 2000, research found that up to 90% of the total cost of software systems was spent on maintenance and evolution. Overall, research has found that at least 50% of the cost of a software system is in maintenance. In the US alone, annual maintenance costs are approaching $70 billion. By spending the time to follow good engineering principles from the inception of the project through the time the system reaches end-of-life, these numbers can be cut down, which is good for your organization's bottom line. There are many facets here, ranging from producing effective technical documentation for future developers to shipping well-written code and tests. You might spend a little extra time making it better now, but it will pay off in maintenance phases down the road. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/82159",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/108352/"
]
} |
82,321 | According to this question and its replies What is the purpose of the "non-endorsement clause" in the New BSD license? it seems smarter to pick the BSD license over the MIT license, to prevent people using your name in an unwanted way. If that is the case, why do people still pick the MIT license over the BSD license? | A friend of mine once pointed out that licenses tell you what the license authors were scared of. If you're scared of having your name dragged through the mud, then the BSD license will seem better. If you're scared of having your software put into a proprietary piece of software, then the GPL will seem better. Whatever the license, the author chooses it because it protects them against what they are afraid of. Different people have different concerns and so use different licenses. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/82321",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/19513/"
]
} |
82,323 | A company that I work with has asked me to do candidate phone screenings to make sure they aren't completely embarrassed when sending over someone to a potential client. It turns out that a fair number of people were being placed for a C++ developer role. I don't spend much time in C++, but I have done a few trivial and non-trivial projects in the language. I figured that "Explain the RAII idiom" would be a nice softball question that most serious C++ developers could answer while half asleep, and would allow me to move on to more interesting questions about experience. But it turns out people who have 10+ years of C++ experience don't recognize the term, even if I expand the acronym to "Resource Acquisition Is Initialization." One candidate went so far as to say he felt it wasn't always practical to apply the technique in Windows development, which I thought was an odd sentiment, but I could see a couple of examples that arguably support that line of thought. (But really only arguably.) Even a couple of C++ developers I know well enough to judge their competence said they didn't recognize the term, but upon reading a summary of the technique, said "Oh, yeah, I didn't know that had a name. I just thought of those things as something you just have to do." I remember the term from the second edition of Stroustrup's book, even though the full impact didn't sink in at the time. So, is "Can you explain the RAII idiom to me?" a fair screening question? Is it reasonable to expect all competent C++ developers to understand it? Is the term more esoteric than I would think? Assuming a candidate doesn't know the term, are there follow-up questions that could help me tease out whether they have at least internalized the practices that make RAII work? Are there better alternative "weeder" questions that give the candidate some latitude in answering, and help the candidate demonstrate their understanding of C++ development? Editing to add: To clarify, I'm not the sort of interviewer that disqualifies people because they don't know buzzwords and acronyms. However, I do think it's reasonable to expect that an experienced C++ programmer has internalized good practices for resource management. I also think that it's important to verify that a candidate understands some "basics" about the technology they claim expertise in before moving on to more interesting questions about design, problem solving, etc. I think what I'm looking for is a good way to ask an open-ended question, suitable for use in a short telephone screening, that I can use to judge a candidate's basic understanding of good resource management practices in C++, before I ask "hard" questions. | You seem to have found that C++ developers whom you know from experience to be competent are unfamiliar with that acronym or even the full expression. That alone would seem to indicate that the question is not suitable as a screening question during a phone call. On the other hand, you could get to the same point in a more roundabout way by posing a scenario. Something like: "You are implementing a Log class that will write logging information to a file. Obviously, you will need to have a member variable that is a file handle (std::FILE*). Where do you allocate and free this file handle?"
If the candidate starts talking about creating an open() and a close() method rather than allocating the file handle in the constructor and deallocating the handle in the destructor, you can follow up by asking about things like how their class would behave if the calling code raised exceptions, etc. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/82323",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7080/"
]
} |
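For reference, the interview scenario described in the answer above maps directly to a small RAII class. The following is a minimal illustrative sketch only (the class shape, member names, and file path are invented here, not taken from the question): the constructor acquires the file handle and the destructor releases it, so cleanup happens on every exit path, including exceptions.

```cpp
// Illustrative sketch of RAII: acquire in the constructor, release in the destructor.
#include <cstdio>
#include <stdexcept>
#include <string>

class Log {
public:
    explicit Log(const std::string& path)
        : file_(std::fopen(path.c_str(), "a")) {
        if (!file_) {
            throw std::runtime_error("could not open log file: " + path);
        }
    }

    ~Log() {
        // Released automatically when the Log goes out of scope, even if an
        // exception is propagating through the calling code.
        std::fclose(file_);
    }

    // Exactly one Log owns the handle, so forbid copying.
    Log(const Log&) = delete;
    Log& operator=(const Log&) = delete;

    void write(const std::string& line) {
        std::fputs(line.c_str(), file_);
        std::fputc('\n', file_);
    }

private:
    std::FILE* file_;
};

int main() {
    Log log("app.log");   // acquisition is initialization
    log.write("started");
    // No explicit close(): even if write() threw, ~Log() would close the handle.
}
```

A candidate proposing separate open()/close() methods, as the answer notes, would leave the handle leaked on any early return or exception between those two calls; the RAII shape above removes that failure mode by construction.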
82,377 | Should properties in C# have side effects besides notifying of a change in its state? I have seen properties used in several different ways, from properties that will load the value the first time they are accessed to properties that have massive side effects like causing a redirect to a different page. | I assume you're talking about read-only properties, or at least property getters, since a property setter is, in almost every instance, going to have side-effects. Otherwise it's not very useful. In general, a good design follows the principle of least surprise. Don't do things that callers aren't expecting you to do, especially if those things might change future outcomes. In general, that means that property getters should not have side effects. However, let's be careful about what we mean by "side effect". A side effect is, technically, any modification of state. That might be publicly-accessible state, or... it might be totally private state. Lazy/deferred loaders are one example of state that is almost exclusively private. As long as it's not the caller's responsibility to free that resource, then you are actually reducing the surprise and the complexity in general by using deferred initialization. A caller does not normally expect to have to explicitly signal the initialization of an internal structure. So, lazy initialization does not violate the above principle. Another example is a synchronized property. In order for a method to be thread-safe, it will often have to be protected by a critical section or mutex. Entering a critical section or acquiring a mutex is a side effect; you are modifying state, usually global state. However, this side effect is necessary in order to prevent a much worse kind of surprise - the surprise of data being modified (or worse, partially modified) by another thread. So I would loosen the restriction a bit to the following: Property reads should not have visible side effects or side effects which change their semantics. If there is no possible way for a caller to ever be affected by, or even aware of, a side effect, then that side effect is not doing any harm. If it would be impossible for you to write a test to verify the existence of a particular side-effect, then it's localized enough to label as a private implementation detail, and thus of no legitimate concern to the outside world. But do be careful; as a rule of thumb, you should try to avoid side effects, because often what you may think to be a private implementation detail can unexpectedly leak and become public. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/82377",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/14392/"
]
} |
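The lazy-initialization and synchronization cases that the answer above permits can be made concrete. The answer discusses C# properties, but the distinction between visible and private side effects is language-agnostic; the sketch below uses C++ and invented names purely for illustration. The only side effects of the read accessor are filling a private cache and taking a lock, neither of which a caller can observe.

```cpp
// Hypothetical example: a read accessor whose side effects are private
// (lazy computation plus locking), so callers observe what looks like a pure getter.
#include <mutex>
#include <optional>
#include <string>

class Report {
public:
    explicit Report(std::string raw) : raw_(std::move(raw)) {}

    // Lazily builds and caches the summary the first time it is requested.
    // The cache and mutex are mutable private state; no caller-visible data changes.
    const std::string& summary() const {
        std::lock_guard<std::mutex> lock(mutex_);   // synchronization side effect
        if (!summary_) {                            // lazy-initialization side effect
            summary_ = "summary of " + std::to_string(raw_.size()) + " bytes";
        }
        return *summary_;
    }

private:
    std::string raw_;
    mutable std::mutex mutex_;
    mutable std::optional<std::string> summary_;
};
```

The `mutable` members are what keep the accessor logically const: the cached summary and the mutex are implementation details, which is exactly the "totally private state" carve-out the answer describes.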
82,432 | I am completely clueless about the inner workings of an operating system, but I can more or less guess the approximate behaviour of many functions. One thing that I am not able to figure out, though, is multitasking. In theory, the operating system manages time, allotting the CPU to the various running programs for small intervals. But it is not clear how this really works. Say the operating system wants to start my program. The machine code is loaded somewhere in RAM, starting at a certain address. I guess then a jump should be performed to that address, allowing my code to execute. But in this way, the OS cannot regain control until I jump back. Basically, I can imagine just two ways of making this work, but neither seems really suitable: The operating system could read the machine instructions I want to perform and emulate them instead of executing them directly. I am intentionally vague, since I do not know how this would work, but it seems like it would slow down the program considerably. Alternatively, the operating system could wait until I make a system call. In that moment it regains control and can check how long I have been running and do its timesharing stuff. This may work, but it seems unreliable, as I could make a long calculation which does not involve system calls and hang everything for a while. So, it seems neither mechanism would work very well. How is multitasking actually performed? | The OS programs a timer to kick in every few microseconds (or milliseconds, depending on system speed). This timer raises a hardware interrupt, which causes the CPU to stop whatever it is currently doing, dump its registers onto the stack and execute the interrupt routine indicated by the address provided by the interrupt controller. This routine can inspect the stack and various other variables to decide which running process should next be put back into action. If it's the same process, the interrupt routine simply returns. If it's a different one, the relevant parts of the stack are saved and then replaced with the contents of a previously interrupted process, so when the interrupt routine returns, that process continues. Other than the fact that some time has elapsed, that process is unaware of having been interrupted and/or paused for some time. This is (for modern CPUs) a VERY VERY simplified version of what happens, but it explains the principle. In addition to these OS-controlled interrupts, there are also interrupts caused by external events (mouse, keyboard, serial ports, network ports, etc.) which are processed by separate interrupt routines, which are usually connected to event handlers. Very often process/task/context switching is also based on the availability of external resources. Typically a process that requires data from storage (i.e. not in RAM) will place the request on a queue, set an event handler for the hardware interrupt indicating that the request has been served and then relinquish control to the task scheduler (since there is no point in waiting). Again, a very simplified description of what actually goes on, but it should serve the purposes of this answer. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/82432",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/15072/"
]
} |
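As a rough, hedged illustration of the answer above: the user-space sketch below (assuming a POSIX system; the 10 ms interval and names are arbitrary) uses an interval timer and a signal to show how a periodic tick interrupts a long computation that makes no system calls. A real kernel uses a hardware timer interrupt and switches stacks inside the handler, but the shape of the mechanism is the same.

```cpp
// User-space analogy only: a periodic timer "interrupt" preempts a busy loop
// that never calls into the OS, which is the question's worst-case scenario.
#include <csignal>
#include <cstdio>
#include <signal.h>
#include <sys/time.h>

static volatile sig_atomic_t ticks = 0;

static void on_tick(int) {
    ticks = ticks + 1;   // a real scheduler would pick the next task to run here
}

int main() {
    struct sigaction sa = {};
    sa.sa_handler = on_tick;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGALRM, &sa, nullptr);

    struct itimerval timer = {};
    timer.it_interval.tv_usec = 10000;   // fire every 10 ms...
    timer.it_value.tv_usec = 10000;      // ...starting 10 ms from now
    setitimer(ITIMER_REAL, &timer, nullptr);

    // A long computation with no system calls: it still gets interrupted.
    unsigned long long busy = 0;
    while (ticks < 200) {
        ++busy;
    }
    std::printf("did %llu loop iterations across %d timer ticks\n",
                busy, static_cast<int>(ticks));
}
```

The busy loop never yields voluntarily, yet the handler keeps running every 10 ms; in a kernel, that handler is where the context switch described in the answer happens.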
82,435 | There are so many programmers out there who are also experts at query writing and database design. Should this be a core requirement to be an expert programmer or software engineer? Though there are lots of similarities in the way queries and code are developed, my personal opinion is that queries seem to have a different structure than code, and it can be tough to master both simultaneously due to the different approaches. | Whether or not database query writing should be a core requirement depends on the job, but relational databases are ubiquitous in current technology. So, if I met a programmer that didn't know how to write database queries, I would expect one of two things: They are generally inexperienced. They are highly specialized in another field (e.g. embedded systems) and have never needed to learn it. Database queries are fundamentally different from more standard programming languages. They are algebraic and intended to operate on relational data, while C# or Java are imperative and operate on disks, memory, user input, etc. Even functional languages like LISP or Haskell that are more algebraic in form are less oriented to relational data. EDIT: As has been pointed out in the comments by me and others, there are some valid reasons why an experienced developer may not know database queries well: Their team used ORM/NoSQL Their team had DB programmers The complexity of the application was in the business logic, and the DB queries were trivial Their team apportioned the work such that some programmers didn't write queries Though valid, these caveats are not convincing reasons why an experienced developer would not know database queries. Unless highly specialized, a programmer should be familiar with relational databases. In summary, most experienced developers should know database queries. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/82435",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1560/"
]
} |
82,632 | One of my roles in my team is the build person. I am responsible for maintaining/updating our build scripts and making sure we are building 'smoothly' on the continuous integration server. I usually do not mind this job, though often it feels like I am constantly babysitting the CI server. This job can be annoying at times because if the build breaks I have to drop the story I am working on and investigate the build failure. Build failures happen daily on our team. Sometimes developers simply do not build locally before committing, so tests fail on the CI server. In this situation I like to quickly get to the person who had the 'bad commit' so the build does not stay broken too long. Sometimes (a lot less frequently) a strange condition exists on the CI server that needs to be debugged. I know that many mature teams use continuous integration, but there is not a lot of material out there about good practices. Do my problems point out that our continuous integration is not very mature, or is this simply part of the job? What are some good practices to follow? What are the characteristics of mature continuous integration? Update: Instead of answering some comments I am going to make an update instead. We have a single, simple command that does exactly what the build server will do when building the app. It will compile, run all unit/integration tests and some quick UI-based tests. Reading everyone's answers, it feels like we might have two major problems. The CI server not complaining loudly enough when a build fails. Developers not feeling like it's everyone's responsibility to make sure their commit goes through successfully. What makes things harder in my team is that we have a large team (10+ developers) AND we have a couple of off-shore team members committing when we are not even at work. Because the team is large and we established that frequent, small commits are preferred, we sometimes really have a lot of activity in a day. | First and foremost: each person is responsible for the build process. It sounds like members in your team are not mature... No one gets away with writing code and fobbing it off to the CI server hoping that it works. Before committing code, developers should test it on their local machine. You should be sure that the code you're checking in isn't going to break the build. Of course, there are cases when the build breaks unintentionally (e.g. if a config file has been changed or a sloppy commit was inadvertently made). Most CI servers (I've only used Hudson) will send an automated email detailing the commits made that caused the build to break. The only part of your role is to stand behind them looking tough until the suspect fixes whatever they broke. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/82632",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/16992/"
]
} |
82,648 | I am very eager to study best practices when it comes to space hardening. For instance, I've read (though I can't find the article any longer) that some core parts of the Mars rovers did not use dynamic memory allocation, in fact it was forbidden. I've also read that old fashioned core memory may be preferable in space. I was looking at some of the projects associated with the Google Lunar Challenge and wondering what it would feel like to get code on the moon, or even just into space. I know that space hardened boards offer some sanity in such a harsh environment, however I'm wondering (as a C programmer) how I would need to adjust my thinking and code if I was writing something that would run in space? I think the next few years might show more growth in private space companies, I'd really like to at least be somewhat knowledgeable regarding best practices. What happens to a program if radiation, cold or heat bombards a board that sustained damage to its insulation? I think the goal is keeping humans inside of a space craft (as far as fixing or swapping stuff) and avoiding missions to fix things. Furthermore, if the board maintains some critical system, early warnings seem paramount. How does one gain experience in this through testing and trial & error (barring the launch of your own personal satellite?) | Space software is not arcane magic. You are still using 0's and 1's, not 1's and 3's. So there’s probably no wow factor involved in describing what goes into developing software. Some slight differences that come to mind at the moment are: Extremely process oriented. Space software will always have both software and hardware watchdog timers. Every space system I’ve worked on was a hard real-time system. You simulate (to great accuracy) every external actor to the system. This usually involves building (sometimes really expensive) custom hardware that is used solely for testing. You spend enormous effort and expense doing formal testing. The customer (usually JPL) is extremely involved in the test process. You generally are using old and known compilers and development environments, rather than the new ones. Code reviews, code reviews and code reviews. You better be very comfortable switching between the hardware and software worlds. You don’t have to know how to design the hardware but you have to know how it works. Extensive use of test equipment, like oscilloscopes, logic analyzers, synthesizers and spectrum analyzers. At least 3 locations for storing the application program. The default is burned in ROM. This will never change. The other 2 are for the current version and the next/last version. Failure analysis (MTBF) is really important. Redundant systems and failover plans for the critical components. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/82648",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/131/"
]
} |
82,682 | Note on the question: this is not a duplicate, Efficient try / catch block usage? was asked after this one. The other question is the duplicate. I was wondering what was the best way to use try/catch. Is it better to limit the content of the try block to the minimum or to put everything in it? Let me explain myself with an example: Code 1: try {
thisThrowsAnException();
}
catch (Exception e) {
e.printStackTrace();
}
thisDoesnt(); Code 2: try {
thisThrowsAnException();
thisDoesnt();
}
catch (Exception e) {
e.printStackTrace();
} Assuming that thisThrowsAnException() ... well... can throw an exception and thisDoesnt() ... I'm sure you got it. I know the difference between the two examples: in case the exception is caught, thisDoesnt() will be called in the first case, not in the second. As it doesn't matter to me, because the throwing of that exception would mean the application should stop, why would one version be better than the other? | In my opinion you should put everything in the block that is dependent on the part that throws the exception. So if in your second example: try {
thisThrowsAnException();
thisDoesnt();
}
catch (Exception e) {
e.printStackTrace();
} If thisDoesnt(); is dependent on the successful execution of thisThrowsAnException(), it should be included. Does it make sense to run it if thisThrowsAnException() fails? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/82682",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/26656/"
]
} |
82,732 | Some Scrum software management tools give you the option to explicitly name your sprints. Do you have a preferred way of naming your sprints or do you just use a simple scheme like 1, 2, 3, ...? | Ask the team. If they think it's fun or useful to name the sprint, choose one together. Since every sprint should have a goal, it shouldn't be a problem to find a suitable name. Naming the sprint could actually help the team focus on the main objective. I would personally love that kind of thing. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/82732",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6866/"
]
} |
82,779 | I'm just getting started in Android development and am working on a few small "practice" apps. As an example, one is a live wallpaper. The others are similar in terms of development effort. When these apps are done I would like to list them in the market. I may list them for free (or maybe not), and they're simple apps, but if things go smoothly I may try to build and sell larger and more ambitious projects in the future. My question is: is it wise to sell apps in the Android Market under my own name? Looking around in the market, it seems almost everyone is selling under a company name, even if they're only a 1-2 man shop. I can definitely see the advantage of creating a company to sell my apps if they were big apps involving a team of people. But it seems like overkill to create a company, even if just in name and identity, just to list something like a live wallpaper. On the other hand, I don't want to expose myself to legal liability or other potential problems that I haven't foreseen. What's the best course of action here? For purposes of legal jurisdiction, I am in the United States. I understand that you are not my lawyer and answers are not legal advice. Edit for clarification: Just to be clear, what I want to ask is less about the advantages of forming a company, and more about the potential dangers of not forming one. In other words, are there likely to be situations where I'm beating myself up and saying "Oh, if only I had formed a company instead of listing this as a personal project!". | Selling as a company gives you protection for your private assets from disgruntled customers, competitors and anyone else who wants to try you out. Selling under a company name helps develop a brand. A real name is boring. Come up with something creative and make the name known. Depending on your particular real name and its origin, some people may have certain negative associations or ideas and decide they just don't like your name enough to engage in any sort of dealing with you (and might not give you as little as $0.99 or even take your app for free). A company name protects you from phobic people. Having a brand will help you in gaining recognition for your new apps. If you throw in another "Advanced Notes" app it has good chances of going unnoticed. If you name it "CoolComp Notes" it will ignite interest if your name is associated with great apps. A brand will help you in naming your apps easily and consistently. Instead of struggling to invent yet another name for your next app like "Super Notes", "Personal Finances Plus" and so on, you could simply make them "CoolComp Notes", "CoolComp Finances". Much easier, and it creates a portfolio. A brand is easier to sell later and at a higher price than a collection of random apps from some guy named "Joshua Carmody". | {
"source": [
"https://softwareengineering.stackexchange.com/questions/82779",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/19700/"
]
} |
82,792 | I read the following question: Tips for a first year CS student looking for a summer internship to gain experience? But rather than how to get and/or look for internships, my question concerns how to filter companies looking for free labor to do their website vs. companies that might not offer a paycheck, but will offer you mentorship and skills. I have a cousin that is a college freshman and she is looking for software development internships, but at this point, she is desperately looking for any internship that is IT related. Besides the big software names and elite small software shops, how do you recognize good internships in non-IT companies (large or small) doing IT maintenance vs. the bad ones? If you don't have work experience, what questions should a college student ask to prevent a ripoff? Are there any red flags that you can spot before accepting an offer? | Fair warning - in my opinion, her biggest detriment is her years of college experience. While it's quite true that you can be a great programmer and be a college freshman - big companies with established internship programs that offer the kind of things you are speaking of typically have a set of hardened criteria for reviewing applicants. One of them is years of college experience and most companies don't readily accept interns that are freshmen - the earliest is usually after Sophomore year. This is partially because interns - even if they are free - are an investment. They need computers, space to work, and the time of a senior person to supervise them. The cost of this has to be weighed against the value the intern will bring to the team. IMO - most interns are a good deal, but you can't take everyone so a company has to find some set of criteria. So, knowing your cousin is a freshman tells me that if she wants a technical internship, she's already going to have to get creative - it's time to start using every under-the-wire resource and personal connection. You'll want to maximize your contact with people who will see working with her as a plus - and that's a heck of a lot easier with people who know your cousin. The other venue that is ALWAYS worth maximizing is her college's internship system. Most colleges have internal internship job boards where the college and the internship providers have worked together to build a set of opportunities for students. In these cases, the companies realize that they need to give interns a good opportunity. I'm assuming that at this point, though, your cousin has tried everything and she's really scrounging - going to just about every job board and trying just about every opportunity. Assuming she gets an interview, she'll want to remember that interviewing is as much about checking out the company as it is about showing potential. Here are some things I'd look for when going on an internship interview (now that I'm well past my intern years and I know from the other side exactly what a manager can do with an intern!). I'm keeping it broad as IT is a heck of a lot more than just software work! What will I be working on? - good signs are work that involves either making things work or verifying that they are working. Bad signs are doing paperwork while other people make things work. Having to service customer problems (front line tech support) isn't all bad, it may be all she can snag, but if that's the case, she should be getting paid. Who will I be working for?
- preferably it's someone on the interview list or someone on the interview list is "sample manager" - ie a manager who needs interns, but the company may assign the candidate to a similar but different manager in the end - it's just that managers have to handle pools of interviews and often matchmaking is done later on. Ask this manager what they expect from an intern and what they are willing to give. Good signs are "I expect you to take responsibility", "I expect you to ask lots of questions", "I expect you'll keep me appraised of your status", "I expect that by the end of the internship you'll be able to take on a 3-4 week chunk of work and work with minimal assistance". Bad signs are - "you'll follow me around and do exactly what I tell you", or "I won't have time to give much guidance, and I don't have anyone that can help, we'll have to figure it out as we go". How big is the technical team? - hopefully it's a team of 5-10 people with varying degrees of expertise, with at least a few senior people. Also - how much time will the senior people have to spare to answer questions? Good = 15 minutes a day, with a known rampup of longer than that early on. Bad = "what senior people?" or "it's just you". What are the major goals of the team? What are the key ingredients to success on this project? - this is my particular way (when I do volunteer work) of figuring out whether I've found people I can work with. If it takes 20 minutes to answer this question, we have a failure. Even if I am working alone, with a non-technical person, I may learn a lot of good stuff. But that requires that the people I'm working with have a real clue of what they want to get done. It's amazing how many people really don't know what the problem they are trying to solve is, or what qualifies as a solution. Without that core vision, no matter how technical the team is, it's likely that the internship will be frustrating. All that said - seriously - don't feel awful if an internship doesn't happen in the summer of Freshman year. I had pretty great grades, 3 languages under my belt and a programmer mother and I STILL didn't swing an internship my Freshman summer. I used my touch typing skills to qualify as a Temp and worked answering phones, typing and filing microfiche (boy am I old!) my Freshman summer... at least I got to hang out in A.C.! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/82792",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/24962/"
]
} |
82,896 | I've recently encountered this using TortoiseSVN, but I assume it will be the same for other VCS-based programs (correct me?). Out of pure curiosity, is there any reason why the VCS filesystem is case-sensitive? I.e. the following URLs are different: svn://repo/branches/PROJECT
svn://repo/branches/project Is there some legacy reason for this? It gets more intriguing on a file basis. If 2 files exist in a directory, let's say ProjectOne.vbp and projectone.vbp, one will overwrite the other in a normal Windows filesystem (or, as I have encountered, throw a cryptic TortoiseSVN database error), but they can co-exist peacefully in the repository. Now obviously it's up to the user to not use ridiculous naming such as the above, but are there any advantages that I'm missing to having case-sensitivity? | SVN works on both case-sensitive and case-insensitive filesystems, so it must assume the most compatible option, which is case sensitivity. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/82896",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/23244/"
]
} |
82,973 | Lots of people in the open source community say they strongly consider a candidate's Github profile when hiring. I'm active on Github, with a few projects of my own and some contributions to others. But looking at my own profile as if I were an employer, I see a lot of noise: projects I cloned but never contributed to, etc. The projects and patches I'm proud of don't stand out. If you assess people's Github profiles, how do you do it? And as a developer, should I do anything differently - for example, delete cloned repos I'm not actively working on? | I've used GitHub profiles, twitter streams, and blogs all as indicators of quality in programming interviews/candidate screening. They all generate different signals in their own way. 9 out of 10 applicants have never submitted a single patch to a single open source project. Even updating broken documentation puts you into an upper echelon of developer. It shows that you are familiar enough with some open source package to know what's wrong, you care enough to submit a patch, and the maintainers of that package think your work is good enough to be included. As a generalization, it shows that you take the initiative to leave dirty things better than you found them. It sounds really simple, but again 9 out of 10 developers never bother to take this all important step. So a single accepted patch looks great. A long history of 2-3 simple patches per quarter is even better. Even better than that would be to contribute something of note. Substantial contributions to important Open Source projects(upper 0.1%-1% of candidates) Extended history of small contributions to any projects (upper 5% of candidates) A single one-liner patch to a relatively unknown package (upper 10% of candidates) On the same note, developers that tweet about drinking and going to see movies all the time tend to make mediocre hires. A tweet stream where every 3rd message is about technology points towards the kind of rabid junkyard dog developer that cares about his craft and relentlessly pursues solutions. Blogging is also a great indicator of quality, but for communication style rather than technical prowess. How many programmers bother to write blog article #1? The same kind of 1%/5%/10% cutoffs apply here. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/82973",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7217/"
]
} |
83,009 | I have the impression that Delphi isn't very popular anymore. But now at work I had to make some changes to an old Delphi program that we are still using. I used Borland Developer Studio 2006 and it was very pleasant and intuitive to work with, even though I had practically no previous exposure to it. Is Delphi still widely-used and I am simply not aware of it or are there other reasons for its decline? | Delphi is still around and very much alive, but under new ownership. Borland really lost their way. They had a really large product line, and the main thing that people were interested in was Delphi, but what the PHBs thought was going to be big was not their development tools, but their Application Lifecycle Management tools. So they put a lot of resources into developing and promoting that instead of Delphi, and let the development tools branch languish. I even heard from some former Borland employees at the Delphi Live! conference a few years ago that their sales people were actively discouraged and dis-incentivized (is that a word?) from trying to sell Delphi at all, even to potential clients who expressed interest up-front. A few years back, things changed. Borland sold their entire development tools division to Embarcadero Technologies, which up to that point was mostly known for database-related software. Now their big claim to fame is that they're the guys who make Delphi. Within a few months of the sale, Borland stock fell below $1/share and they were bought out by a "corporate graveyard" company that basically does nothing but manage licensing fees on existing products. Borland no longer exists. Embarcadero, though, actually cares about Delphi. They've put a lot of work and effort into it, and the product quality has improved tremendously in the last few releases. Despite both the recession and Delphi being a commercial-only tool in a perceived "age of open-source development," sales have been really strong and the team's been able to make a lot of progress. TL;DR: Borland is dead; Delphi is not. It's "Embarcadero Delphi" now, and it's very much alive and kicking. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/83009",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/19060/"
]
} |
83,057 | I have come to use LINQ in my everyday programming a lot. In fact, I rarely, if ever, use an explicit loop. I have, however, found that I don't use the SQL-like syntax anymore. I just use the extension functions. So rather than saying: from x in y select datatransform where filter I use: x.Where(c => filter).Select(c => datatransform) Which style of LINQ do you prefer, and what are others on your team comfortable with? | I find it unfortunate that Microsoft's stance per MSDN documentation is that the query syntax is preferable, because I never use it, but I use LINQ method syntax all the time. I love being able to fire off one-liner queries to my heart's content. Compare: var products = from p in Products
where p.StockOnHand == 0
select p; To: var products = Products.Where(p => p.StockOnHand == 0); Quicker, fewer lines, and to my eyes looks cleaner. Query syntax doesn't support all of the standard LINQ operators either. An example query I recently did looked something like this: var itemInfo = InventoryItems
.Where(r => r.ItemInfo is GeneralMerchInfo)
.Select(r => r.ItemInfo)
.Cast<GeneralMerchInfo>()
.FirstOrDefault(r => r.Xref == xref); To my knowledge, to replicate this query using query syntax (to the extent possible) it would look like this: var itemInfo = (from r in InventoryItems
where r.ItemInfo is GeneralMerchInfo
select r.ItemInfo)
.Cast<GeneralMerchInfo>()
.FirstOrDefault(r => r.Xref == xref); Doesn't look more readable to me, and you'd need to know how to use method syntax anyway. Personally I'm really enamored of the declarative style LINQ makes possible and use it in any situation where it's at all possible - perhaps sometimes to my detriment. Case in point, with method syntax I can do something like this: // projects an InventoryItem collection with total stock on hand for each GSItem
inventoryItems = repository.GSItems
.Select(gsItem => new InventoryItem() {
GSItem = gsItem,
StockOnHand = repository.InventoryItems
.Where(inventoryItem => inventoryItem.GSItem.GSNumber == gsItem.GSNumber)
.Sum(r => r.StockOnHand)
}); I'd imagine the above code would be difficult to understand for someone coming into the project without good documentation, and if they don't have a solid background in LINQ they might not understand it anyway. Still, method syntax exposes some pretty powerful capability, to quickly (in terms of lines of code) project a query to get aggregate information about multiple collections that would otherwise take a lot of tedious foreach looping. In a case like this, method syntax is ultra compact for what you get out of it. Trying to do this with query syntax might get unwieldy rather quickly. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/83057",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/14392/"
]
} |
83,060 | I am trying to write a Joomla-type system in PHP to improve my coding/programming skills. I need guidelines/rules of thumb to do that. What I basically want to do is create an index.php file which will act as a front controller and will redirect the request transparently to "extensions/plugins" (no need for them to follow MVC, and hence it will be more flexible). What are your recommendations on that? EDIT: I meant I am trying to create it using core PHP (no frameworks), but existing libraries are acceptable. | | {
"source": [
"https://softwareengineering.stackexchange.com/questions/83060",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/27038/"
]
} |
83,117 | Very frequently, I am stuck when choosing the best design decision. Even for small details, such as function definitions, control flow, and variable names, I spend unusually long periods weighing the benefits and trade-offs of my choices. I feel like I am losing a lot of efficiency by spending my hours on insignificant details like these. Even though I know in the back of my mind that I can change these things if my current design doesn't work out, I have trouble deciding firmly on one choice. What should I do to combat this problem? | Two simple rules: Do the simplest thing that could possibly work. Refactor continuously. As you begin to do each of these things, you will gain confidence that you can make simple decisions now without compromising your ability to respond to change later. Remember that future-proofing means making code easy to change, not trying to anticipate every possible way your code might need to change. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/83117",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/27367/"
]
} |
83,225 | Many times I forget things about my application. I don't memorize the table names or what a query did, and I search to get what I want. My team leader told me I'm supposed to memorize the table names that I use. Is the developer required to memorize the table names in the database, the class names, etc.? And if the answer is "Yes, all the time," what should I do to remember those things? | You shouldn't need to explicitly memorise these things. By that I mean sitting down and learning them as you would a list of words for a spelling test. In the first instance the names should be memorable and discoverable so you can find them again without too much effort. You should also have access to tools that help you out here with auto-completion and the like. In a large system of over 100 tables there's no way you can really be expected to remember every table name and every column name; however, with memorable, discoverable names and regular use you should find yourself remembering the most important details and those you use every day. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/83225",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/26930/"
]
} |
83,399 | HTML, CSS, and JavaScript can be used to build beautiful (and useful) UIs (especially now that we have HTML5 and CSS3), and lots of people already know them. Though it's still way beyond my reach, how difficult can it be to bring the whole web app thing to desktop apps? We already test apps on our local servers before hosting them. In my opinion, it's a nice, simple idea which will create a boom in desktop apps. Plus, given that these apps will already be sharing so much code with web apps, they might be able to offer better connectivity. Why isn't it being done? | Adobe already did it with Adobe AIR, and Mozilla too with Prism.
Google also tried to bridge the gap between desktop and web with Google Gears. But in general, web technologies are not suited for many types of desktop applications; here are some reasons why: No immediately available full hardware access. No low-level system access. No easily available filesystem access (the technologies I mentioned above allow you to get fs access, but every one of them has its own different solution). Performance. A native, compiled application is generally faster than a JavaScript application. Easy for competitors to steal the source code. No libraries available for specialized tasks, e.g. image processing, sound encoding, database access, network programming, etc. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/83399",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/27609/"
]
} |
83,553 | Should we create a database structure with a minimum number of tables? Should it be designed in a way that everything stays in one place, or is it okay to have more tables? Will it in any way affect anything? I am asking this question because a friend of mine modified some database structure in MediaWiki. In the end, instead of 20 tables he was using only 8, and it took him 8 months to do that (it was his college assignment). EDIT: I am concluding the answer as: the number of tables does NOT matter, unless the case is exceptional, in which case denormalization may help. Thanks to everyone for the answers. | IGNORE the number of tables. Worry more about getting the design correct. If your major concern is the quantity of tables, you should probably not be designing database systems. If your friend only needed 8 tables, and the system works fine with that, then 8 is the correct number, and the remaining 12 might not have been necessary for whatever he was doing. Possible exceptions might be peculiar environments that have hard limits on table numbers, but I can't think of a concrete example of such a system off the top of my head. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/83553",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/27038/"
]
} |
83,780 | Go is one of the few languages that are supposed to run 'close to the metal', i. e. it's compiled, statically typed and executes code natively, without a VM. This should give it a speed advantage over Java, C# and the like. It seems, however, that it's behind Java (see the Programming Language Shootout ) I'm assuming that less mature compilers are hugely responsible for this, but are there any other reasons? Is there anything inherent in Go's design that would prevent it from running faster than, say, Java? I have a very unsophisticated view of runtime models, but it seems that at least in principle it should be able to run faster than Java, thanks to native code execution. | In terms of language design, there isn't really anything that should make Go slower than Java in general. In fact, it gives you more control of the memory layout of your data structures, so for a lot of common tasks it should be somewhat faster. However, the current primary Go compiler, scheduler, garbage collector, regexp library, and a lot of other things aren't particularly optimized. This is steadily improving, but the focus seems to be on being useful, simple, and fast enough over winning in microbenchmarks. In the linked benchmark, Go loses big to Java on the binary tree and the regexp test. Those are tests of the memory management system and regexp library respectively. Go's memory management could be faster and will certainly improve over time, and the current standard regexp library is a placeholder for a much better implementation that is soon to come. So, losing on those two isn't surprising, and in the near future the margin should be more narrow. For the k-nucleotide benchmark, it's somewhat hard to compare because the Java code looks to be using a different algorithm. The Go code will certainly benefit from compiler, scheduler, and allocator improvements to come, even as written, but someone would have to rewrite the Go code to do something more clever if we wanted to compare more accurately. Java wins in the mandelbrot benchmark because it's all floating point arithmetic and loops, and this is a great place for the JVM to generate really good machine code and hoist things at runtime. Go, in comparison, has a pretty simple compiler that doesn't hoist, unroll, or generate really tight machine code currently, so it's not surprising it loses. However, one should keep in mind that the Java timing doesn't count the JVM start-up time or the many times it needs to be run for the JVM to JIT it nicely. For long-running programs, this isn't relevant, but it matters in some cases. As for the rest of the benchmarks, Java and Go are basically neck-in-neck, with Go taking significantly less memory and in most cases less code.
So, while Go is slower than Java in a number of those tests, Java is pretty fast, Go does pretty well in comparison, and Go will probably get notably faster in the near future. I'm looking forward to when gccgo (a Go compiler that uses the gcc codegen) is mature; that should make Go pretty much in line with C for many types of code, which will be interesting. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/83780",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/27215/"
]
} |
83,797 | I was under the impression that a version control system eliminated the need to have "change logs" plastered everywhere in the code. I've often seen the continued use of change logs, including big long blocks at the start of stored procedures with a big section blocked out for changes to the file and littering the code with things like: // 2011-06-14 (John Smith) Change XYZ to ABC to fix Bug #999 and: // 2009-95-12 (Bob Jones) Extracted this code to Class Foo
// <commented-out code here> The reason for this, as it was explained to me, is that it takes too long to sift through our VCS logs trying to find who changed what and why, while having it in the code file itself, either at the top or near the relevant change, makes it easy to see who changed what and when. While I see the point of that, it seems redundant and just kind of smacks of "Eh we don't really understand how to use our VCS properly, so we won't bother with that stuff at all." What do you think? Do you use both comments and the log? Just the log? Do you find that it's easier to code when you can see above a block of code that John Smith changed the method to check for XYZ a week ago, instead of having to search through logs and compare code files in a Diff tool? EDIT: Using SVN, but basically only as a repository. No branches, no merges, nothing except log + storage. | I tend to delete comments in code. And by delete, I mean, with prejudice . Unless a comment explains why a particular function does something, it goes away. Bye bye. Do not pass go. So it shouldn't surprise you that I would also delete those changelogs, for the very same reason. The problem with commented out code and comments that read like books is that you don't really know how relevant it is and it gives you a false sense of understanding as to what the code actually does. It sounds like your team doesn't have good tooling around your Version control system. Since you said you're using Subversion, I'd like to point out that there's a lot of tooling that will help you manage your subversion repository. From the ability to navigate your source through the web, to linking your changesets to specific bugs, you can do a lot that mitigates the need for these 'changelogs'. I've had plenty of people comment and say that perhaps I'm in error for deleting comments. The vast majority of code I've seen that's been commented has been bad code, and the comments have only obsfucated the problem. In fact, if I ever comment code, you can be assured that I'm asking for forgiveness from the maintanence programmer because I'm relatively certain they'll want to kill me. But lest you think I say that comments should be deleted in jest, this Daily WTF submission (from a codebase I worked on) illustrates my point perfectly: /// The GaidenCommand is a specialized Command for use by the
/// CommandManager.
///
/// Note, the word "gaiden" is Japanese and means "side story",
/// see "http://en.wikipedia.org/wiki/Gaiden".
/// Why did I call this "GaidenCommand"? Because it's very similar to
/// a regular Command, but it serves the CommandManager in a different
/// way, and it is not the same as a regular "Command". Also
/// "CommandManagerCommand" is far too long to write. I also toyed with
/// calling this the "AlephCommand", Aleph being a silent Hebrew
/// letter, but Gaiden sounded better. Oh... The stories I could tell you about that codebase, and I would, except that it's still in use by one of the largest government organizations around. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/83797",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/22390/"
]
} |
83,814 | I have an interesting, fairly common I guess, issue with one of the developers in my team. The guy is a great developer, work fast and productive, produces fairly good quality code and all. Good engineer. But there is a problem with him - very often he fails to address edge cases in his code. We spoke with him about it many times and he is trying but I guess he just doesn't think this way. So what ends up happening is that QA would find plenty issues with his code and return it back for development again and again, ultimately resulting in missed deadlines and everyone in the team unhappy. I don't know what to do with him and how to help him overcome this problem. Perhaps someone with more experience could advise? | Require him to write automated unit tests for his code. Writing unit tests forces one to think through the edge cases. Some particulars: To ensure he doesn't feel singled out, this should be instituted for your entire team. Require everyone to write automated unit tests for new code or code they modify. Require the unit test names to be descriptive as to the case they are testing. Cover the automated unit tests in the code review at a high level. Have the reviewers look for missed test cases (i.e. those edge cases he perennially misses). After some amount of feedback from his team about missed edge cases, he will probably learn to consider those before the review. Enforce this rule for the entire team: If QA finds a bug, the developer responsible owes the automated test that confirms the failure and then proves they have fixed it. (before they do any other work) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/83814",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/27751/"
]
} |
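As a concrete illustration of the answer above: even without a test framework, plain assert() is enough to pin edge cases down before QA sees the code. This is only a sketch; the function under test (array_max) and its sentinel convention are invented for the example, not taken from any particular codebase.

/*
 * Minimal edge-case-focused unit tests using only assert(), no framework.
 */
#include <assert.h>
#include <limits.h>
#include <stddef.h>

/* Returns the largest element, or INT_MIN if the array is empty or NULL. */
static int array_max(const int *a, size_t n)
{
    int best = INT_MIN;
    if (a == NULL)
        return best;
    for (size_t i = 0; i < n; i++) {
        if (a[i] > best)
            best = a[i];
    }
    return best;
}

/* Descriptive test names document which edge case each one covers. */
static void test_null_input_returns_sentinel(void)
{
    assert(array_max(NULL, 5) == INT_MIN);
}

static void test_empty_array_returns_sentinel(void)
{
    int a[] = { 42 };
    assert(array_max(a, 0) == INT_MIN);
}

static void test_single_element(void)
{
    int a[] = { -7 };
    assert(array_max(a, 1) == -7);
}

static void test_all_negative_values(void)
{
    int a[] = { -3, -1, -9 };
    assert(array_max(a, 3) == -1);
}

int main(void)
{
    test_null_input_returns_sentinel();
    test_empty_array_returns_sentinel();
    test_single_element();
    test_all_negative_values();
    return 0;
}

The test names double as documentation of the edge cases considered, which is exactly what reviewers can check for in the code-review step the answer describes.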
83,815 | I went to a "job fair" recently and I was surprised to see how much emphasis workplaces seem to put on the programming languages candidates are familiar with. From my (admittedly limited) experience, while truly mastering a programming language may take years, learning it to a reasonable level is a fairly simple affair to someone who already has experience with other languages, and can definitely fit within the timeframe employers usually allocate for the initial ramp-up. I'd think an employer would care more about how many languages / paradigms I am familiar with, or what's my algorithmic / software design experience, as opposed to the specific technology I'm skilled with at the moment. Say I already know Java, C++, Smalltalk and Prolog... should a workplace that relies on Objective-C really consider me unqualified because I lack experience in that language? Is this a flaw in recruiting methodologies, and if it is, what can I do to convince that workplace that my lack of experience with Objective-C should not matter? I'm asking hypothetically, not specifically about the mentioned programming languages. Alternatively, my experience is limited and I admit I may be missing something. Is previous experience with a programming language more crucial than what I think it is? Does it make a difference if it's a junior or senior position? Should it make s difference? | Contrary to the press releases, it's an employer's market right now. That means they can simply be picky about what their requirements are. It means they can demand .NET 4.0 experience, and not just 3.5 experience... It means they can demand experience with Django, and not just Pylons, etc... Sure, you could learn all you need to know about Ruby in a couple of weeks, and Rails might take a couple of months (just guessing) to become proficient with... But the employer can pick through resumes of people already proficient with Ruby & Rails. TL;DR: Econ 101... Don't believe the hype about the shortage of programmers . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/83815",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8331/"
]
} |
83,837 | When working on a project, the code may be developed reasonably fast in a single day, or bit by bit over a prolonged period of a few weeks/months/years. Although code commits are coming to be treated as a measure of project development, a project with more commits hasn't necessarily had more code written than a project with fewer commits. So the question is: when should you really make a commit to the repository, so that the commit is justifiable? As an add-on: is it correct practice to measure a project's development based on its commits? | You commit when you have reached a codebase state you want to remember. There are a lot of reasons why you might want to remember a particular codebase state, so there can't be hard-and-fast rules on when to commit. However, number of commits is definitely not a measure of quality or progress. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/83837",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
83,986 | I'm currently facing a dilemma with an upcoming performance review. When I started with my company around 1 year ago I tried to be as honest as possible in my perception of my programming skill set and knowledge. I based my perceived skills on my coding ability relative to those I consider to be good programmers and developers. So the salary I was provided I was happy with at the time. I have no problem with that as I believe honesty is important when going for a job and while in it. Also, I believe that once I'm in the job if I'm better than what I have given the employer to believe then my performance will reflect that. At the same another developer was hired. We were both hired to work on the same project doing the same task set. A few months back I found out that this person was on substantially more than me wage wise. Now during the project I have come to realise that this person is well below me in programming skills and even business knowledge. I have had to help him out numerous times with programming tasks, review his code and offer better solutions, and explain concepts about the project we are working on. Even though we both started at the same time. How can I go about increasing my salary (during the review) to be at least comparable with this person without directly mentioning their name or that I know what they are on. If I justify it based on the work I have done, what if they don't offer me a comparative package. Is it fair that I want more or am I being greedy? | Some of the other answers have stated something that is right on — be bold, ask for a salary increase, and show why you deserve it. However, that really doesn't address your issue — you accepted a salary that you were happy with in the context of your own experience, and you have found out someone in a comparable role (and equal or worse performance) is on a substantially higher salary. Not to throw other answers under the bus (they have generally good advice) but the advice to ignore that issue completely is going to leave you perpetually underpaid. How do you address it then? Obviously the other answers are correct that you can't make it an issue of "office politics" or make it a comparison contest — both of those will be bad for you. The fact that your coworker, who is behind you in skill/experience/whatever, earns a significantly higher salary than you is only a symptom. To figure out how to address this to your employer, you have to figure out the real problem, so look into it deeper. How did this person get hired? When a company agrees to hire a person, they must believe that the hire will be equitable. In other words, they believe that the return (the person) must be worth the investment (the salary). If your coworker was hired, then the company believed it to be an equitable investment. Unless your company is completely clueless, then they must believe that the salary for your coworker was at least somewhere in the range for what they knew about his experience/skill. In other words, either he fooled them , or his salary in fact is in the acceptable range for your position. If you really want to be paid what you feel you deserve, you need to investigate the second reason. Going in with "I want a raise, here are my accomplishments" alone is not going to solve your problem — it's generally good advice, but it will only net you the type of raise they are accustomed to giving employees who are doing good work. Not what they should give someone who is being paid way under his market value. 
Be prepared to explain:
- how you have succeeded for the company
- that you enjoy your work and envision a long and mutually beneficial time working there
- that you have learned much so far and look forward to what else you can learn to continue to benefit the company
... and then lower the boom — your salary is not commensurate with what your value is on the market. In the United States at least, programmers are currently in a very good position to negotiate salary. In all likelihood, your coworker simply already was being paid at a higher level, asked for a salary slightly higher than that during the hiring process, and they granted it, knowing that programmers are hard to find. Be prepared to deliver at least some information about what you know the market value is for your position. Chances are good that they don't have a great idea about this, and this will net you a good increase in salary. All without mentioning your coworker or getting into office politics. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/83986",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/27796/"
]
} |
84,071 | In languages that distinguish between a "source" and "header" file (mainly C and C++), is it better to document functions in the header file: (pilfered from CCAN ) /**
* time_now - return the current time
*
* Example:
* printf("Now is %lu seconds since epoch\n", (long)time_now().tv_sec);
*/
struct timeval time_now(void); or in the source file? (pilfered from PostgreSQL) /*
* Convert a UTF-8 character to a Unicode code point.
* This is a one-character version of pg_utf2wchar_with_len.
*
* No error checks here, c must point to a long-enough string.
*/
pg_wchar
utf8_to_unicode(const unsigned char *c)
{
... Note that some things are defined in the header only, such as structs, macros, and static inline functions. I'm only talking about things that are declared in a header file and defined in a source file. Here are some arguments I can think of. I am leaning toward documenting in the source file, so my "Pro-header" arguments may be somewhat weak. Pro-header: The user doesn't need the source code to see the documentation. The source may be inconvenient, or even impossible, to acquire. This keeps interface and implementation further apart. Pro-source: It makes the header a lot shorter, giving the reader a birds-eye view of the module as a whole. It pairs the documentation of a function with its implementation, making it easier to see that a function does what it says it does. When answering, please be wary of arguments based on what tools and "modern IDEs" can do. Examples: Pro-header: Code folding can help make commented headers more navigable by hiding the comments. Pro-source: cscope 's Find this global definition feature takes you to the source file (where the definition is) rather than the header file (where the declaration is). I'm not saying don't make such arguments, but bear in mind that not everyone is as comfortable with the tools you use as you are. | My view... Document how to use the function in the header file, or more accurately close to the declaration. Document how the function works (if it's not obvious from the code) in the source file, or more accurately, close to the definition. For the birds-eye thing in the header, you don't necessarily need the documentation that close - you can document groups of declarations at once. Broadly speaking, the caller should be interested in errors and exceptions (if only so they can be translated as they propogate through the layers of abstraction) so these should be documented close to the relevant declarations. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/84071",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3650/"
]
} |
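A minimal sketch of the split recommended in the answer above, reusing the time_now() example from the question. The file names and the wording of the comments are invented for illustration; the point is only the convention: usage notes sit next to the declaration in the header, implementation notes sit next to the definition in the source file.

/* ---- timeutil.h (hypothetical): document how to USE the function ---- */

#include <sys/time.h>

/**
 * time_now - return the current wall-clock time
 *
 * Never fails; the returned value is always a valid struct timeval.
 *
 * Example:
 *     printf("Now is %lu seconds since epoch\n", (long)time_now().tv_sec);
 */
struct timeval time_now(void);

/* ---- timeutil.c (hypothetical): document how the function WORKS ---- */

#include <stddef.h>
#include "timeutil.h"

struct timeval time_now(void)
{
    /*
     * gettimeofday() can only fail for a bad pointer or an unsupported
     * timezone argument, neither of which applies here, so its return
     * value is deliberately ignored.
     */
    struct timeval now;
    gettimeofday(&now, NULL);
    return now;
}

Someone skimming the header gets the contract and an example call; someone debugging the source file gets the reasoning about the ignored return value, which is the split argued for above.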
84,128 | This is a bit "one megabyte should be enough for anyone", but... A 64-bit flat address space allows up to 4.3ish billion times more space than a 32-bit address space. That's 17,179,869,184 GiB. Obviously, the transition from 8 bits to 16 bits was pretty quick (I'm viewing things in a teen-during-the-80s way, ignoring all those mainframes and minis because they couldn't run a port of Elite). The transition from 16 bits to 32 bits took a bit longer, and 32 bits actually lasted quite a while. Now we have 64 bits, would it be stupid to say "17,179,869,184 GiB should be enough for anyone"? This is a programmers question because I really have the programming viewpoint in mind. Even if a computer some day has a seemingly infinite amount of working memory, that doesn't mean that applications will need to see a seemingly infinite flat address space. Basically, can we programmers breath a sigh of relief and say "well, whatever integers or floats may do, at least I don't have to worry about my pointers growing any more?". | I don't think we're going to have machines with more than 2^64 bytes of RAM in the foreseeable future, but that's not all that address space is useful for. For some purposes, it's useful to map other things into the address space, files being an important example. So, is it reasonable to have more than 2^64 bytes of any sort of storage attached to a computer in the foreseeable future? I'd have to say yes. There's got to be well over 2^64 bytes of storage out there, since that's only about 17 million people with terabyte hard disks. We've had multiple-petabyte databases around for a few years now, and 2^64 is only about 17 thousand petabytes. I think we're likely to have a use for a > 2^64 address space within the next few decades. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/84128",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
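For reference, the round numbers quoted in the question and answer above all fall out of the same power-of-two arithmetic:

\[
2^{64}\,\text{bytes} = 2^{34}\,\text{GiB} = 17{,}179{,}869{,}184\,\text{GiB}
                     = 2^{24}\,\text{TiB} \approx 1.7\times10^{7}\,\text{TiB}
                     = 2^{14}\,\text{PiB} = 16{,}384\,\text{PiB},
\qquad
\frac{2^{64}}{2^{32}} = 2^{32} \approx 4.3\times10^{9}.
\]

The middle terms are where the answer's "about 17 million people with terabyte hard disks" and "about 17 thousand petabytes" estimates come from, and the last ratio is the question's "4.3ish billion times more space".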
84,155 | I am thinking about a career in software engineering, but before I look for work I wanted to get an idea of what to expect particularly with pressure. This will be my first programming job (so I am looking for entry level), so I am not yet mature as a programmer yet. My question is what is the pressure like in these jobs? How high can the pressure get? If your boss gives you an assignment and it's due in two weeks but it takes you 3 will you get fired, because your unable to perform? Are you given time to learn more about the technology, develop your coding skills and grow, or are you expected to know it already and blaze through the project? If you have trouble with code are you expected to handle it yourself and work independently, or are you able to ask others for help if you are stuck? Are you expected to put in a lot of late nights to meet the deadlines? I know that this can depend on the company as well, but I just wanted some professional insight to the possible pressures of being a software developer/programmer etc. I also know that other jobs have pressure too! I just would like to know the pressure unique to software engineering. The reason I ask this question is because I had a bad experience programming once and I wanted to know if most of these jobs are the same. If software engineering/developing/programming is tough pressure that I don't want to handle are there other types of development like web development, system admin, etc that are less pressure that I can get into and still code? Thanks for reading and I look forward to hearing everyone's thoughts. | Pressure only exists if you allow it. And this statement is valid for any job or any situation. Pressure may be perceived as significant in programming profession because many of us share common characteristics such as being introverted or lacking in self-confidence. If your boss gives you an assignment and it's due in two weeks but it takes you 3 will you get fired, because your unable to perform? How come HE assign you a task and set how much time YOU must use to achieve it? Remove pressure by estimating your tasks yourself (if you are in the team, use Planning Poker ) Are you given time to learn more about the technology, develop your coding skills and grow, or are you expected to know it already and blaze through the project? Time to learn is a part of your daily job. You are expected to learn continously. Therefore, learning shouldn't be taken as a pressure. I always told to myself that learning a new technology is like adding a new tool in my belt . If you have trouble with code are you expected to handle it yourself and work independently, or are you able to ask others for help if you are stuck? Being able to ask for help is a skill every developer should have. People struggling (alone) trying to solve a bug are putting pressure on themselve. Are you expected to put in a lot of late nights to meet the deadlines? You mean the deadline set by your boss two question before? In short: learn to say NO . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/84155",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/20392/"
]
} |
84,278 | I am thorough with programming and have come across languages including BASIC, FORTRAN, COBOL, LISP, LOGO, Java, C++, C, MATLAB, Mathematica, Python, Ruby, Perl, JavaScript, Assembly and so on. I can't understand how people create programming languages and devise compilers for it. I also couldn't understand how people create OS like Windows, Mac, UNIX, DOS and so on. The other thing that is mysterious to me is how people create libraries like OpenGL, OpenCL, OpenCV, Cocoa, MFC and so on. The last thing I am unable to figure out is how scientists devise an assembly language and an assembler for a microprocessor. I would really like to learn all of these stuff and I am 15 years old. I always wanted to be a computer scientist someone like Babbage, Turing, Shannon, or Dennis Ritchie. I have already read Aho's Compiler Design and Tanenbaum's OS concepts book and they all only discuss concepts and code in a high level. They don't go into the details and nuances and how to devise a compiler or operating system. I want a concrete understanding so that I can create one myself and not just an understanding of what a thread, semaphore, process, or parsing is. I asked my brother about all this. He is a SB student in EECS at MIT and hasn't got a clue of how to actually create all these stuff in the real world. All he knows is just an understanding of Compiler Design and OS concepts like the ones that you guys have mentioned (i.e. like Thread, Synchronization, Concurrency, memory management, Lexical Analysis, Intermediate code generation and so on) | Basically, your question is "how are computer chips, instruction sets, operating systems, languages, libraries, and applications designed and implemented?" That's a multi-billion dollar worldwide industry employing millions of people, many of whom are specialists. You might want to focus your question a bit more. That said, I can take a crack at: I can't understand how people create programming languages and devise compilers for it. It is surprising to me, but lots of people do look at programming languages as magical. When I meet people at parties or whatever, if they ask me what I do I tell them that I design programming languages and implement the compilers and tools, and it is surprising the number of times people -- professional programmers, mind you -- say "wow, I never thought about it, but yeah, someone has to design those things". It's like they thought that languages just spring up wholly formed with tool infrastructures around them already. They don't just appear. Languages are designed like any other product: by carefully making a series of tradeoffs amongst competing possibilities. The compilers and tools are built like any other professional software product: by breaking the problem down, writing one line of code at a time, and then testing the heck out of the resulting program. Language design is a huge topic. If you're interested in designing a language, a good place to start is by thinking about what the deficiencies are in a language that you already know. Design decisions often arise from considering a design defect in another product. Alternatively, consider a domain that you are interested in, and then design a domain-specific language (DSL) that specifies solutions to problems in that domain. You mentioned LOGO; that's a great example of a DSL for the "line drawing" domain. Regular expressions are a DSL for the "find a pattern in a string" domain. LINQ in C#/VB is a DSL for the "filter, join, sort and project data" domain. 
HTML is a DSL for the "describe the layout of text on a page" domain, and so on. There are lots of domains that are amenable to language-based solutions. One of my favourites is Inform7, which is a DSL for the "text-based adventure game" domain; it is probably the highest-level serious programming language I've ever seen. Pick a domain you know something about and think about how to use language to describe problems and solutions in that domain. Once you have sketched out what you want your language to look like, try to write down precisely what the rules are for determining what is a legal and illegal program. Typically you'll want to do this at three levels: lexical : what are the rules for words in the language, what characters are legal, what do numbers look like, and so on. syntactic : how do words of the language combine into larger units? In C# larger units are things like expressions, statements, methods, classes, and so on. semantic : given a syntactically legal program, how do you figure out what the program does ? Write down these rules as precisely as you possibly can . If you do a good job of that then you can use that as the basis for writing a compiler or interpreter. Take a look at the C# specification or the ECMAScript specification to see what I mean; they are chock-full of very precise rules that describe what makes a legal program and how to figure out what one does. One of the best ways to get started writing a compiler is by writing a high-level-language-to-high-level-language compiler. Write a compiler that takes in strings in your language and spits out strings in C# or JavaScript or whatever language you happen to know; let the compiler for that language then take care of the heavy lifting of turning it into runnable code. I write a blog about the design of C#, VB, VBScript, JavaScript and other languages and tools; if this subject interests you, check it out. http://blogs.msdn.com/ericlippert (historical) and http://ericlippert.com (current) In particular you might find this post interesting; here I list most of the tasks that the C# compiler performs for you during its semantic analysis. As you can see, there are a lot of steps. We break the big analysis problem down into a series of problems that we can solve individually. http://blogs.msdn.com/b/ericlippert/archive/2010/02/04/how-many-passes.aspx Finally, if you're looking for a job doing this stuff when you're older then consider coming to Microsoft as a college intern and trying to get into the developer division. That's how I ended up with my job today! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/84278",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/27875/"
]
} |
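To make the lexical/syntactic/semantic layering in the answer above concrete, here is a toy recursive-descent evaluator in C for a tiny expression language with integers, '+' and '*'. The grammar and every name in it are invented for this sketch, and there is no error handling; as the answer stresses, a real language design also writes down precise rules for what an illegal program is.

/* Toy demonstration of the three layers:
 *
 *   expr   := term   { '+' term   }
 *   term   := factor { '*' factor }
 *   factor := NUMBER
 *
 * Lexical level:   isspace()/strtol() turn characters into tokens/numbers.
 * Syntactic level: expr()/term()/factor() mirror the grammar rules.
 * Semantic level:  each rule returns the value its phrase denotes.
 */
#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>

static const char *p;          /* cursor into the source text */

static void skip_spaces(void)
{
    while (isspace((unsigned char)*p))
        p++;
}

static long factor(void)       /* factor := NUMBER */
{
    char *end;
    long value;

    skip_spaces();
    value = strtol(p, &end, 10);
    p = end;
    return value;
}

static long term(void)         /* term := factor { '*' factor } */
{
    long value = factor();

    skip_spaces();
    while (*p == '*') {
        p++;
        value *= factor();
        skip_spaces();
    }
    return value;
}

static long expr(void)         /* expr := term { '+' term } */
{
    long value = term();

    skip_spaces();
    while (*p == '+') {
        p++;
        value += term();
        skip_spaces();
    }
    return value;
}

int main(void)
{
    const char *src = "2 + 3 * 4 + 5";

    p = src;
    printf("%s = %ld\n", src, expr());   /* prints: 2 + 3 * 4 + 5 = 19 */
    return 0;
}

A real compiler would build a syntax tree and generate code instead of evaluating on the fly, but the shape (grammar rules becoming functions) is the same, and it is also the shape of the "high-level-language-to-high-level-language" starter compiler the answer suggests.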
84,396 | So, I just started an internship, and I'm worried that I'm asking too many questions. My mentor assigns me projects and helps me learn all the company's technologies and methodologies. However, there's so much new material for me to learn while doing this project that I have a lot of questions. I generally ask questions over instant messages or E-mail (those are the primary modes of communication for my company). I'm trying to be careful not to ask too many questions: I don't want to come off as annoying or dumb. How many questions are appropriate to ask? Once an hour? More? Less? Keep in mind, my mentor is also a fellow programmer who has his own responsibilities. | Be respectful of your mentor's time by keeping a list of questions and asking them in batches, to the extent possible. Don't actually interrupt your mentor until you literally cannot make any forward progress without help. A lot of times you'll learn a lot by struggling to find the answer yourself, even in cases where your mentor can teach you something in 10 seconds. For example, if you want to know where something is in the code, you can ask them (10 seconds), or you can spend four hours studying the code and trying to figure it out yourself. The advantage of the "four hour" option is that you will actually be learning 200 new things about the code, all of which will help you later on. Struggling to find your own answers can be a waste of time, but it can also be a way to learn a big complicated code base. Needless to say if it's a programming question that doesn't concern your company's own proprietary code, you should try to figure it out yourself using the internet. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/84396",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/21497/"
]
} |
84,514 | Almost everyone will now say the blessing: performance! Okay, C does allow you to write athletic code. But there are other languages that can do so, after all! And the optimising power of modern compilers is awesome. Does C have some advantages that no other language has? Or is there simply no need for more flexible instruments in this domain? | Almost everyone will now say the blessing: performance! That's part of it; deterministic resource use is important on devices with limited resources to begin with, but there are other reasons:
- Direct access to low-level hardware APIs.
- You can find a C compiler for the vast majority of these devices. This is not true for any high-level language in my experience.
- C (the runtime and your generated executable) is "small". You don't have to load a bunch of stuff into the system to get the code running.
- The hardware API/driver(s) will likely be written in C or C++. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/84514",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/26603/"
]
} |
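A small sketch of what the "direct access to low-level hardware APIs" point looks like in practice: on a microcontroller, a peripheral register is just a fixed memory address. The address and bit position below are made up for the example; real values come from the specific chip's datasheet.

/* A memory-mapped GPIO register modelled as a pointer to a fixed address. */
#include <stdint.h>

#define GPIO_OUT_REG   ((volatile uint32_t *)0x40020014u)  /* hypothetical address */
#define LED_PIN_MASK   (1u << 5)                           /* hypothetical pin     */

static void led_on(void)
{
    *GPIO_OUT_REG |= LED_PIN_MASK;     /* set one bit, leave the rest untouched */
}

static void led_off(void)
{
    *GPIO_OUT_REG &= ~LED_PIN_MASK;
}

Nothing else has to be linked in or loaded for this to work, which is also the "small runtime" point above; the code compiles to a handful of instructions and only makes sense on the target hardware.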
84,542 | I'm a very ambitious university student who wishes to learn pretty much everything there is to know about computers (bash me if you want, I love learning). Recently I thought it would be a fun project (albeit a lengthy one) to design and build my own kernel. I got some basic info and I've gathered that I need to master Assembly and C/C++ to really make this work. While I'm working on those, I'd like to learn HOW a kernel actually works from a programming perspective. I've spent hours browsing the linux kernel's code but that can only take you so far. What are the basic steps in building a kernel? Things you need to address? Order or doing things? I know I'm biting off a lot but I'm determined enough to handle it. | What you need to do is design the operating system. Even if, for example, you decide it should be a UNIX-like system, there are still lots of decisions to make. How much like UNIX do you want it to be? Which parts of UNIX do you like and which do you think need improvement? If you aren't set on its being UNIX-like, you end up with even more questions to answer: should processes form a tree, or are they "flat"? What kinds of inter-process communication do you want to support? Do you want it to be multi-user, or just multi-tasking (or possibly single-tasking)? Do you want it to be a real-time system? What degree of isolation do you want to provide between tasks? Where do you want it to fall on the monolithic vs. micro-kernel scale? To what degree (if any) do you want it to support distributed operation? I'd generally advise against studying the Linux kernel for your inspiration. That's nothing against the Linux kernel itself, but a simple fact that Linux is intended primarily for production use, not education. It has lots of optimization, backward compatibility hacks, etc., that are extremely useful for production but more likely to distract than educate. If you can find it, a copy of Lion's book ( Lions' Commentary on UNIX 6th Edition , with Source Code , by John Lions) is a much easier starting point. 6th Edition UNIX was still small and simple enough to read and understand fairly quickly, without being an oversimplified toy system. If you're planning to target the x86 (at least primarily) you might also want to look at MMURTL V 1.0 by Richard Burgess. This presents a system for the x86 that uses the x86 hardware much more as the CPU designers originally intended -- something most real systems eschew in favor of portability to other CPUs. As you might guess, this tends to be oriented much more heavily toward the hardware end of things. Printed copies seem to be expensive and hard to find, but you can download the text and code for free. Fortunately, there are quite a few more possibilities as well -- Operating System Design and Implementation , by Andrew Tanenbaum and Albert Woodhull, for example. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/84542",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/22067/"
]
} |
84,778 | I want to ask the user of my bash script to pass a directory path as an argument. Which one of the following is good programming practice? (a) Require that the user enter a trailing / (forward slash), or (b) require that the user doesn't enter a trailing / (forward slash)? | Best practice is to assume neither. If you have access to path builder utilities/classes, use those; if not, write your code to accept either format and act accordingly. Nothing's more annoying for the user than having to remember whether to add a trailing slash or not. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/84778",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/28121/"
]
} |
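A sketch of the "accept either format" advice from the answer above, written in C since that is the language used for the other examples in this document; in the original bash setting the same effect can be had by stripping any trailing slash with "${dir%/}". The function name and the sample paths are invented.

/* Join a directory and a file name; works whether or not dir ends in '/'. */
#include <stdio.h>
#include <string.h>

static void join_path(char *out, size_t outsize,
                      const char *dir, const char *file)
{
    size_t len = strlen(dir);
    const char *sep = (len > 0 && dir[len - 1] == '/') ? "" : "/";

    snprintf(out, outsize, "%s%s%s", dir, sep, file);
}

int main(void)
{
    char buf[256];

    join_path(buf, sizeof buf, "/var/log", "syslog");
    puts(buf);                        /* /var/log/syslog */

    join_path(buf, sizeof buf, "/var/log/", "syslog");
    puts(buf);                        /* /var/log/syslog */
    return 0;
}

Normalizing once at the boundary like this is what lets the rest of the program stop caring which form the user typed.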
84,822 | My best friend just started his internship a month ago. The problem is he is discouraged. He was a good A+ student at school, and he is feeling that he doesn't know anything at all. The issues he is working on, although they are on languages he feels comfortable in, seem so alien to him, he said. He's getting really discouraged, like he does not know the code base at all. I keep trying to tell him that it will just take time and that he is expected to have lots of questions. What should I tell him? | Keep telling him that. He just started a month ago. Knowing the language does not mean he will automatically comprehend a project that is most likely much more complex than anything from school. It takes a while to get familiar with an existing project's code, even for us pros. He needs to relax. If he has questions he should consult whatever project documentation he's got, or ask a mentor or more senior developer. This is normal!! Everyone goes through it. He'll be fine if he stops panicking. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/84822",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/28143/"
]
} |
84,866 | I've been volunteered to sit down and talk about the life and work of a Developer with a 15 year old work experience student next week. The catches are that I've got just half an hour, and I'll be just one of the people talking to her - other people in different roles in the business will also be running through the elements of their jobs with her throughout the day. What should I cover, and what on earth can I hope to teach her in just half an hour? I assume that she probably has no experience with development or programming. | I interact with high schoolers a lot, so I answer this question quite often. Keep in mind that 15 year olds are much easier to explain programming to than 50 year olds -- so you need not dumb things down or use far-fetched analogies. I usually start off with examples of what programs are: Apps like iTunes, Photoshop, Chrome, and games including console games. OSes like Windows, Mac OS, iPhone's iOS, Android. (Trust me, they'll know what you're talking about.) Programs that crunch numbers or solve really complicated math problems -- weather simulations, biological simulations, calculating pi, AI, language processing etc. Most sophisticated websites involve programming too. Transit ticket vending machines and ATMs, microwave and fridge timers, car navigation. After that, I usually go on to explain that we code programs in a formal language that the computer can recognize, often typing them up in something as simple as Notepad. The languages look like a cross between math and English, describing concepts and giving formulas and instructions for the computer to follow. Then they usually ask if I'm on the computer all the time, if this is why I'm always on Facebook, and why my eyes haven't gone bad yet. Guys ask if I know how to "hack people", and girls ask if it's good money or how many girls there are in computer science classes. After that, if they're still interested, they usually start asking specific questions that are a lot easier to answer (or at least to Wiki): things like how you would make a game, how Windows Messenger works etc. If you have a computer around, you can show-and-tell some code -- something that would have tangible effects, like a button click handler from the settings dialog box in Firefox, the main loop or physics code in a game engine, some JavaScript source from a website etc.. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/84866",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6510/"
]
} |
84,966 | I'm starting up a Git repository for a group project. Does it make sense to store documents in the same Git repository as code - it seems like this conflicts with the nature of the git revision flow. Here is a summary of my question(s): Is the Git revisioning style going to be confusing if both code and documents are checked into the same repository ? Experiences with this? Is Git a good fit for documentation revision control? I am NOT asking if a Revision Control System in general should or shouldn't be used for documentation - it should. Thanks for the feedback so far! | We store documentation in SVN all the time. In fact, our entire user manual is written in LaTeX, and stored in SVN. We chose LaTeX specifically because it is a text-based language, and easy to show line-by-line diffs. We also store some non-text formatted files, like Microsoft Office .doc files, spread sheets, .zip files, etc, when necessary... but some of the benefit of a RCS is lost when you can't see the the incremental diffs. The key is really to make sure your documentation is well organized, so that people can find (and update) the documentation (and the source) when they need it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/84966",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/28229/"
]
} |
85,235 | This is something I've been thinking about ever since I read this answer in the controversial programming opinions thread : Your job is to put yourself out of work. When you're writing software for your employer, any software that you create is to be written in such a way that it can be picked up by any developer and understood with a minimal amount of effort. It is well designed, clearly and consistently written, formatted cleanly, documented where it needs to be, builds daily as expected, checked into the repository, and appropriately versioned. If you get hit by a bus , laid off, fired, or walk off the job, your employer should be able to replace you on a moment's notice, and the next guy could step into your role, pick up your code and be up and running within a week tops. If he or she can't do that, then you've failed miserably. Interestingly, I've found that having that goal has made me more valuable to my employers. The more I strive to be disposable, the more valuable I become to them. And it has been discussed a bit in other questions, such as this one , but I wanted to bring it up again to discuss from a more point blank " it's a code smell!! " point of view - which hasn't really been covered in depth yet. I've been a professional developer for ten years. I've had one job where the code was well written enough to be picked up relatively quickly by any decent new developer, but in most cases in industry, it seems that a very high level of ownership (both individual and team ownership) is the norm. Most code bases seem to lack the documentation, process, and "openness" that would allow a new developer to pick them up and get working with them quickly. There always seem to be lots of unwritten little tricks and hacks that only someone who knows the code base very well ("owns" it) would know about. Of course, the obvious problem with this is: what if the person quits or "gets hit by a bus"? Or on a team level: what if the whole team gets food poisoning when they go out on their team lunch, and they all die? Would you be able to replace the team with a fresh set of new random developers relatively painlessly? - In several of my past jobs, I can't imagine that happening at all. The systems were so full of tricks and hacks that you " just have to know ", that any new team you hire would take far longer than the profitable business cycle (eg, new stable releases) to get things going again. In short, I wouldn't be surprised if the product would have to be abandoned. Obviously, losing an entire team at once would be very rare. But I think there is a more subtle and sinister thing in all this - which is the point which got me thinking to start this thread, as I haven't seen it discussed in these terms before. Basically: I think a high need for code ownership is very often an indicator of technical debt . If there is a lack of process, communication, good design, lots of little tricks and hacks in the system that you "just have to know", etc - it usually means that the system is getting into progressively deeper and deeper technical debt. But the thing is - code ownership is often presented as a kind of "loyalty" to a project and company, as a positive form of "taking responsibility" for your work - so it's unpopular to outright condemn it. But at the same time, the technical debt side of the equation often means that the code base is getting progressively less open, and more difficult to work with. 
And especially as people move on and new developers have to take their place, the technical debt (ie maintenance) cost starts to soar. So in a sense, I actually think that it would be a good thing for our profession if a high level of need for code ownership were openly seen as a job smell (in the popular programmer imagination). Instead of it being seen as "taking responsibility and pride" in the work, it should be seen more as "entrenching oneself and creating artificial job security via technical debt". And I think the test (thought experiment) should basically be: what if the person (or indeed, the whole team) were to vanish off the face of the Earth tomorrow. Would this be a gigantic - possibly fatal - injury to the project, or would we be able to bring in new people, get them to read the doccos and help files and play around with the code for a few days - and then be back in business in a few weeks (and back to full productivity in a month or so)? | I think we must make a difference between code ownership: from the responsibility point of view, from the code style/hacks etc. point of view. The first one must be encouraged. If nobody is responsible for low quality code and bugs, bad things will happen . It doesn't mean that the bug related to your own code must be assigned every time to you: if the code is correctly written, anybody can fix the bug in any part of the code. But if you don't have any feedback about the bugs you do, you'll produce the same bugs again and again . Code ownership may help a lot both the managers and the developers, especially the developers. If I know that 80% of bugs in the product were in my code while we were three developers working on the project, and that 75% of those bugs were related to SQL Injection, I would be more careful writing code the next time, and make an extra effort to write database queries correctly. The second one (code ownership from the code style/hacks, etc.) is mostly a bad thing. What does it bring to the product? It does not increase the quality of the product, but decreases the uniformity of the source code and makes it difficult to maintain it lately . For many large projects, people don't stay for life working on the same piece of code. And here, I don't even talk about horrible things like the developer being hit by a bus: I don't think code ownership is the first concern in this case. But lots of minor thinks happen: people migrate from development to management or other jobs, or choose other projects, or work on other stuff, or start using another language (if several languages are used inside a project), etc. A piece of code of a project can also be reused in another project, and the developer working on this other project would like to modify some parts of the code to fit the new needs, without asking for advice the former writer of the code. Is it a code smell? Well, it depends on what do you mean by code smell. For me, it's all about the overall coherence of the project and correct management . In a good project, code style guidelines are enforced to help to understand the code easier later, more experienced developers do not violate the KISS principle, thus helping the understanding of the source code later by the less experienced colleagues. To conclude, code ownership must be tracked through version control, but you must not be able to say that a piece of code is written by you or somebody else in a team . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/85235",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5064/"
]
} |
85,317 | I use interfaces rarely and find them common in others' code. Also, I rarely create sub- and superclasses while writing my own classes. Is it a bad thing? Would you suggest changing this style? Does this style have any side effects? Is this because I have not worked on any large projects? | There are several reasons why you might want to use interfaces:
- Interfaces are suited to situations in which your applications require many possibly unrelated object types to provide certain functionality.
- Interfaces are more flexible than base classes because you can define a single implementation that can implement multiple interfaces.
- Interfaces are better in situations in which you do not need to inherit implementation from a base class.
- Interfaces are useful in cases where you cannot use class inheritance. For example, structures cannot inherit from classes, but they can implement interfaces.
http://msdn.microsoft.com/en-us/library/3b5b8ezk(v=vs.80).aspx Interfaces are like anything else in programming. If you don't need them, don't use them. I've seen them used extensively as a matter of style, but if you don't need the special properties and capabilities that an interface provides, I don't see the benefit of using them "just because." | {
"source": [
"https://softwareengineering.stackexchange.com/questions/85317",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/86854/"
]
} |
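The points above are phrased in C#/.NET terms. As a language-neutral illustration of the first one (many possibly unrelated types providing the same functionality), here is the same idea expressed in C with a struct of function pointers; in C# or Java this would simply be an interface with two implementing classes. All names are invented for the sketch.

#include <stdio.h>

struct logger {                        /* the "interface": one capability */
    void (*write)(const char *msg);
};

/* Two unrelated implementations of the same capability. */
static void console_write(const char *msg) { printf("log: %s\n", msg); }
static void null_write(const char *msg)    { (void)msg; /* discard */ }

static const struct logger console_logger = { console_write };
static const struct logger null_logger    = { null_write };

/* Code written against the interface works with either implementation. */
static void run_job(const struct logger *log)
{
    log->write("job started");
    log->write("job finished");
}

int main(void)
{
    run_job(&console_logger);
    run_job(&null_logger);
    return 0;
}

If run_job never needs a second implementation, the indirection buys nothing, which is the "if you don't need them, don't use them" point of the answer.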
85,446 | My company is looking to improve their market research data management. Current data management style: "Hey Jimbo, where's that picture of our WhatZit 2.0? "yeah I remember that email about that company from that guy, gimme a few minutes to search my Outlook" "who has the newest copy of the Important Competitor's product catalogue? Mine is from 2009." ... "Colleen does, and she's on maternity leave. You'll have to call her to get her workstation password..." Desired data management style: data organized neatly by topic (legal, economic, industrial, competitor) for each topic, multiple media types stored together (company product images, press releases, contact info) but still neatly sorted by type data editing histories communal access (no data silos) I was thinking about setting up a department wiki for all users to access. It seems to satisfy the four criteria above, but I'm a little concerned about how user-friendly (read: decipherable to non-technical people) it is for the more advanced features like image galleries, article formatting, and the like. Has anyone here setup a wiki for non-IT people and had it not catch on fire, become a ghost town, or look like Geocities? Bonus question: can you see any obvious drawbacks to my choice of MediaWiki (or any other wiki) for solving this problem? (I'm hoping that some of you will have encountered this issue before and can offer some insights...) | Direct answer to your question: Yes. Wikipedia has tons of non-IT editors. Longer answer: Your IT vs non-IT distinction here is a red herring . All people, IT or not, will still ignore a wiki if it isn't presented to them as something they should care about. Introducing a new data management system is always non-trivial to sell to people because you always have to make them want to change. For example, if the programmers don't see too much problem with the current bug tracking system and/or think that switching to the new one is a hassle, then they won't switch. You need to sell the new system by explaining how it improves everything, plus explain the problems with the current system, and do things to assure people that this isn't just a passing fancy and the new system is here to stay. After all, if people think it's a doomed project then it will be a doomed project. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/85446",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9260/"
]
} |
85,479 | I have been a consultant for a small software consulting firm for quite some time now. Our normal business model is not staff augmentation, but such that we find clients who need assistance in building a solution of some kind and then send in a team who can build that solution, work with the existing IT staff, train all involved on supporting that solution, then move on to the next job. We, of course, are still around for any needed ongoing support. We have a great reputation in our area and have been very successful in implementing the solutions that we provide. However, I have noticed a common theme for most of our projects. When we get on-site, there is generally a "stressed" relationship between our team and many of the IT staff currently at the client. I understand completely that there may be some anxiety about our arrival and that defenses can come up when we are around. Many of the folks are understanding and easy to work with, but there are usually some who will not work well with us at all, and who can quickly become a project risk in many ways. We try to go in with open minds and good attitudes, and try NOT to be arrogant or condescending. We generally get deployed when there is a mess to clean up - but we understand that there were reasons decisions were made that got them into the bind they are in... so we just try to determine the next step forward and move on. My question is this - I'd like to hear from the IT staff and programmers out there who have had consultants in - what are the things that consultants do that fire up negative feelings and attitudes? What can we do better to make the relationship better, not only in the beginning, but as the project moves forward? | Let the Wookiee Win Consultants who want to build and maintain good relationships with existing staff would do well to remember the sage advice from Han Solo in Star Wars: "Let the Wookiee win" Not that the in-house staff are wookiees. Well, not all of them. The point is that if you (you being the consultant in this case) want your presence and assistance to be welcome, you cannot be a credit-grabbing glory hog who belittles the in-house staff and prior consultants. Instead, you must help the in-house staff to win, make them look good, and be generally useful, helpful, and humble. How awesome you are is reflected in not only how well you solve problems, but in how many people look forward to your return. Caveat: I am a consultant. My clients are not wookiees. It's a humorous metaphor. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/85479",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/26019/"
]
} |
85,521 | Question : Is the science and art of CS dead? By that I mean, the real requirements to think, plan and efficiently solve problems seems to be falling away from CS these days. The field seems to be lowering the entry-barrier so more people can 'program' without having to learn how to truly program. Background : I'm a recent graduate with a BS in Computer Science. I'm working a starting position at a decent sized company in the IT department. I mostly do .NET and other Microsoft technologies at my job, but before this I've done Java stuff through internships and the like. I personally am a C++ programmer for my own for-fun projects. In Depth : Through the work I've been doing, it seems to me that the intense disciplines of a real science don't exist in CS anymore. In the past, programmers had to solve problems efficiently in order for systems to be robust and quick. But now, with the prevailing technologies like .NET, Java and scripting languages, it seems like efficiency and robustness have been traded for ease of development. Most of the colleagues that I work with don't even have degrees in Computer Science. Most graduated with Electrical Engineering degrees, a few with Software Engineering, even some who came from tech schools without a 4 year program. Yet they get by just fine without having the technical background of CS, without having studied theories and algorithms, without having any regard for making an elegant solution (they just go for the easiest, cheapest solution). The company pushes us to use Microsoft technologies, which take all the real thought out of the matter and replace it with libraries and tools that can auto-build your project for you half the time. I'm not trying to hate on the languages, I understand that they serve a purpose and do it well, but when your employees don't know how a hash-table works, and use the wrong sorting methods, or run SQL commands that are horribly inefficient (but get the job done in an acceptable time), it feels like more effort is being put into developing technologies that coddle new 'programmers' rather than actually teaching people how to do things right. I am interested in making efficient and, in my opinion, beautiful programs. If there is a better way to do it, I'd rather go back and refactor it than let it slide. But in the corporate world, they push me to complete tasks quickly rather than elegantly.
And that really bugs me. Is this what I'm going to be looking forward to the rest of my life? Are there still positions out there for people who love the science and art of CS rather than just the paycheck? And on the same note, here's a good read if you haven't seen it before The Perils Of Java Schools | Yes...and No Good question, but bad assumption. The Science part of the education does seem to be lacking, but the assumption that the science was there merely to make programs efficient is misguided. The science was necessary to teach people how to define and solve problems. Sadly, this part of some "CS" curriculums (curricula?) seems to be omitted completely, replaced by toy problems with trivial or known solutions, and intended merely to teach familiarity with tools Disappointing; many Java school graduates were shortchanged, never taught how to decompose a problem, design an algorithm, specify a test or even debug effectively. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/85521",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/28401/"
]
} |
85,559 | I'm currently trying to decide what server-side language to learn and use for web development, and while it's relatively easy to get information on why x, y, or z is a good thing, it's harder to figure out the downsides to each of them. In particular, I'm curious about what drawbacks there are to learning and/or using Ruby on Rails as opposed to any other given language/framework. | Speaking from experience: The downside is that you rely on the Rails framework a bit too much. This is a great and wonderful thing if you are only ever writing simple, greenfield CRUD apps that fall squarely into the Rails "sweet spot"; your productivity will skyrocket. However, the moment you have to do something outside that sweet spot - interact with an existing database, talk to another application that doesn't have a JSON or XML API defined, implement a complicated workflow - Rails will become your enemy. It is possible to do these things with Rails, but doing so goes "against the grain", so you're basically on your own with figuring out how to do it, as the community will usually just respond with "Don't do that, it's not the Rails way" - this results in either lost productivity or very messy code as you basically have to hack around the Rails framework (a small sketch of the existing-database case follows below). Also, there is the unspoken downside: Everything else will seem ugly and kludgy. Once you have tasted the sweet, sweet nectar of Rails (okay, evangelizing just a bit here...) everything else is swill. Going from Rails back to PHP, or ASP.NET WebForms, or Java is like walking on a bed of nails after frolicking in a lush garden; you won't see the other languages/frameworks in the same light, and while you may still appreciate them you will secretly long for Rails' loving embrace. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/85559",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/28369/"
]
} |
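To make the "interact with an existing database" point in the answer above concrete, here is a minimal, hedged ActiveRecord sketch; the class, table, and column names are invented, and depending on the Rails version the older set_table_name / set_primary_key macros may be needed instead of the assignment form shown here:
# Hypothetical legacy schema that ignores Rails conventions:
# a table TBL_PERSONS with primary key PERSON_NO.
class Person < ActiveRecord::Base
  self.table_name  = "TBL_PERSONS"   # Rails would otherwise expect "people"
  self.primary_key = "PERSON_NO"     # Rails would otherwise expect "id"
end

# Once the mapping is declared, usage looks normal again --
# but every model needs this kind of hand-tuning, which is the
# "against the grain" cost the answer describes.
person = Person.find(42)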
85,657 | I've been programming for over 9 years, and according to the advice of my first programming teacher, I always keep my main() function extremely short. At first I had no idea why. I just obeyed without understanding, much to the delight of my professors. After gaining experience, I realized that if I designed my code correctly, having a short main() function just sort of happened. Writing modularized code and following the single responsibility principle allowed my code to be designed in "bunches", and main() served as nothing more than a catalyst to get the program running. Fast forward to a few weeks ago: I was looking at Python's source code, and I found the main() function: /* Minimal main program -- everything is loaded from the library */
...
int
main(int argc, char **argv)
{
...
return Py_Main(argc, argv);
} Yay python. Short main() function == Good code. Programming teachers were right. Wanting to look deeper, I took a look at Py_Main. In its entirety, it is defined as follows: /* Main program */
int
Py_Main(int argc, char **argv)
{
int c;
int sts;
char *command = NULL;
char *filename = NULL;
char *module = NULL;
FILE *fp = stdin;
char *p;
int unbuffered = 0;
int skipfirstline = 0;
int stdin_is_interactive = 0;
int help = 0;
int version = 0;
int saw_unbuffered_flag = 0;
PyCompilerFlags cf;
cf.cf_flags = 0;
orig_argc = argc; /* For Py_GetArgcArgv() */
orig_argv = argv;
#ifdef RISCOS
Py_RISCOSWimpFlag = 0;
#endif
PySys_ResetWarnOptions();
while ((c = _PyOS_GetOpt(argc, argv, PROGRAM_OPTS)) != EOF) {
if (c == 'c') {
/* -c is the last option; following arguments
that look like options are left for the
command to interpret. */
command = (char *)malloc(strlen(_PyOS_optarg) + 2);
if (command == NULL)
Py_FatalError(
"not enough memory to copy -c argument");
strcpy(command, _PyOS_optarg);
strcat(command, "\n");
break;
}
if (c == 'm') {
/* -m is the last option; following arguments
that look like options are left for the
module to interpret. */
module = (char *)malloc(strlen(_PyOS_optarg) + 2);
if (module == NULL)
Py_FatalError(
"not enough memory to copy -m argument");
strcpy(module, _PyOS_optarg);
break;
}
switch (c) {
case 'b':
Py_BytesWarningFlag++;
break;
case 'd':
Py_DebugFlag++;
break;
case '3':
Py_Py3kWarningFlag++;
if (!Py_DivisionWarningFlag)
Py_DivisionWarningFlag = 1;
break;
case 'Q':
if (strcmp(_PyOS_optarg, "old") == 0) {
Py_DivisionWarningFlag = 0;
break;
}
if (strcmp(_PyOS_optarg, "warn") == 0) {
Py_DivisionWarningFlag = 1;
break;
}
if (strcmp(_PyOS_optarg, "warnall") == 0) {
Py_DivisionWarningFlag = 2;
break;
}
if (strcmp(_PyOS_optarg, "new") == 0) {
/* This only affects __main__ */
cf.cf_flags |= CO_FUTURE_DIVISION;
/* And this tells the eval loop to treat
BINARY_DIVIDE as BINARY_TRUE_DIVIDE */
_Py_QnewFlag = 1;
break;
}
fprintf(stderr,
"-Q option should be `-Qold', "
"`-Qwarn', `-Qwarnall', or `-Qnew' only\n");
return usage(2, argv[0]);
/* NOTREACHED */
case 'i':
Py_InspectFlag++;
Py_InteractiveFlag++;
break;
/* case 'J': reserved for Jython */
case 'O':
Py_OptimizeFlag++;
break;
case 'B':
Py_DontWriteBytecodeFlag++;
break;
case 's':
Py_NoUserSiteDirectory++;
break;
case 'S':
Py_NoSiteFlag++;
break;
case 'E':
Py_IgnoreEnvironmentFlag++;
break;
case 't':
Py_TabcheckFlag++;
break;
case 'u':
unbuffered++;
saw_unbuffered_flag = 1;
break;
case 'v':
Py_VerboseFlag++;
break;
#ifdef RISCOS
case 'w':
Py_RISCOSWimpFlag = 1;
break;
#endif
case 'x':
skipfirstline = 1;
break;
/* case 'X': reserved for implementation-specific arguments */
case 'U':
Py_UnicodeFlag++;
break;
case 'h':
case '?':
help++;
break;
case 'V':
version++;
break;
case 'W':
PySys_AddWarnOption(_PyOS_optarg);
break;
/* This space reserved for other options */
default:
return usage(2, argv[0]);
/*NOTREACHED*/
}
}
if (help)
return usage(0, argv[0]);
if (version) {
fprintf(stderr, "Python %s\n", PY_VERSION);
return 0;
}
if (Py_Py3kWarningFlag && !Py_TabcheckFlag)
/* -3 implies -t (but not -tt) */
Py_TabcheckFlag = 1;
if (!Py_InspectFlag &&
(p = Py_GETENV("PYTHONINSPECT")) && *p != '\0')
Py_InspectFlag = 1;
if (!saw_unbuffered_flag &&
(p = Py_GETENV("PYTHONUNBUFFERED")) && *p != '\0')
unbuffered = 1;
if (!Py_NoUserSiteDirectory &&
(p = Py_GETENV("PYTHONNOUSERSITE")) && *p != '\0')
Py_NoUserSiteDirectory = 1;
if ((p = Py_GETENV("PYTHONWARNINGS")) && *p != '\0') {
char *buf, *warning;
buf = (char *)malloc(strlen(p) + 1);
if (buf == NULL)
Py_FatalError(
"not enough memory to copy PYTHONWARNINGS");
strcpy(buf, p);
for (warning = strtok(buf, ",");
warning != NULL;
warning = strtok(NULL, ","))
PySys_AddWarnOption(warning);
free(buf);
}
if (command == NULL && module == NULL && _PyOS_optind < argc &&
strcmp(argv[_PyOS_optind], "-") != 0)
{
#ifdef __VMS
filename = decc$translate_vms(argv[_PyOS_optind]);
if (filename == (char *)0 || filename == (char *)-1)
filename = argv[_PyOS_optind];
#else
filename = argv[_PyOS_optind];
#endif
}
stdin_is_interactive = Py_FdIsInteractive(stdin, (char *)0);
if (unbuffered) {
#if defined(MS_WINDOWS) || defined(__CYGWIN__)
_setmode(fileno(stdin), O_BINARY);
_setmode(fileno(stdout), O_BINARY);
#endif
#ifdef HAVE_SETVBUF
setvbuf(stdin, (char *)NULL, _IONBF, BUFSIZ);
setvbuf(stdout, (char *)NULL, _IONBF, BUFSIZ);
setvbuf(stderr, (char *)NULL, _IONBF, BUFSIZ);
#else /* !HAVE_SETVBUF */
setbuf(stdin, (char *)NULL);
setbuf(stdout, (char *)NULL);
setbuf(stderr, (char *)NULL);
#endif /* !HAVE_SETVBUF */
}
else if (Py_InteractiveFlag) {
#ifdef MS_WINDOWS
/* Doesn't have to have line-buffered -- use unbuffered */
/* Any set[v]buf(stdin, ...) screws up Tkinter :-( */
setvbuf(stdout, (char *)NULL, _IONBF, BUFSIZ);
#else /* !MS_WINDOWS */
#ifdef HAVE_SETVBUF
setvbuf(stdin, (char *)NULL, _IOLBF, BUFSIZ);
setvbuf(stdout, (char *)NULL, _IOLBF, BUFSIZ);
#endif /* HAVE_SETVBUF */
#endif /* !MS_WINDOWS */
/* Leave stderr alone - it should be unbuffered anyway. */
}
#ifdef __VMS
else {
setvbuf (stdout, (char *)NULL, _IOLBF, BUFSIZ);
}
#endif /* __VMS */
#ifdef __APPLE__
/* On MacOS X, when the Python interpreter is embedded in an
application bundle, it gets executed by a bootstrapping script
that does os.execve() with an argv[0] that's different from the
actual Python executable. This is needed to keep the Finder happy,
or rather, to work around Apple's overly strict requirements of
the process name. However, we still need a usable sys.executable,
so the actual executable path is passed in an environment variable.
See Lib/plat-mac/bundlebuiler.py for details about the bootstrap
script. */
if ((p = Py_GETENV("PYTHONEXECUTABLE")) && *p != '\0')
Py_SetProgramName(p);
else
Py_SetProgramName(argv[0]);
#else
Py_SetProgramName(argv[0]);
#endif
Py_Initialize();
if (Py_VerboseFlag ||
(command == NULL && filename == NULL && module == NULL && stdin_is_interactive)) {
fprintf(stderr, "Python %s on %s\n",
Py_GetVersion(), Py_GetPlatform());
if (!Py_NoSiteFlag)
fprintf(stderr, "%s\n", COPYRIGHT);
}
if (command != NULL) {
/* Backup _PyOS_optind and force sys.argv[0] = '-c' */
_PyOS_optind--;
argv[_PyOS_optind] = "-c";
}
if (module != NULL) {
/* Backup _PyOS_optind and force sys.argv[0] = '-c'
so that PySys_SetArgv correctly sets sys.path[0] to ''
rather than looking for a file called "-m". See
tracker issue #8202 for details. */
_PyOS_optind--;
argv[_PyOS_optind] = "-c";
}
PySys_SetArgv(argc-_PyOS_optind, argv+_PyOS_optind);
if ((Py_InspectFlag || (command == NULL && filename == NULL && module == NULL)) &&
isatty(fileno(stdin))) {
PyObject *v;
v = PyImport_ImportModule("readline");
if (v == NULL)
PyErr_Clear();
else
Py_DECREF(v);
}
if (command) {
sts = PyRun_SimpleStringFlags(command, &cf) != 0;
free(command);
} else if (module) {
sts = RunModule(module, 1);
free(module);
}
else {
if (filename == NULL && stdin_is_interactive) {
Py_InspectFlag = 0; /* do exit on SystemExit */
RunStartupFile(&cf);
}
/* XXX */
sts = -1; /* keep track of whether we've already run __main__ */
if (filename != NULL) {
sts = RunMainFromImporter(filename);
}
if (sts==-1 && filename!=NULL) {
if ((fp = fopen(filename, "r")) == NULL) {
fprintf(stderr, "%s: can't open file '%s': [Errno %d] %s\n",
argv[0], filename, errno, strerror(errno));
return 2;
}
else if (skipfirstline) {
int ch;
/* Push back first newline so line numbers
remain the same */
while ((ch = getc(fp)) != EOF) {
if (ch == '\n') {
(void)ungetc(ch, fp);
break;
}
}
}
{
/* XXX: does this work on Win/Win64? (see posix_fstat) */
struct stat sb;
if (fstat(fileno(fp), &sb) == 0 &&
S_ISDIR(sb.st_mode)) {
fprintf(stderr, "%s: '%s' is a directory, cannot continue\n", argv[0], filename);
fclose(fp);
return 1;
}
}
}
if (sts==-1) {
/* call pending calls like signal handlers (SIGINT) */
if (Py_MakePendingCalls() == -1) {
PyErr_Print();
sts = 1;
} else {
sts = PyRun_AnyFileExFlags(
fp,
filename == NULL ? "<stdin>" : filename,
filename != NULL, &cf) != 0;
}
}
}
/* Check this environment variable at the end, to give programs the
* opportunity to set it from Python.
*/
if (!Py_InspectFlag &&
(p = Py_GETENV("PYTHONINSPECT")) && *p != '\0')
{
Py_InspectFlag = 1;
}
if (Py_InspectFlag && stdin_is_interactive &&
(filename != NULL || command != NULL || module != NULL)) {
Py_InspectFlag = 0;
/* XXX */
sts = PyRun_AnyFileFlags(stdin, "<stdin>", &cf) != 0;
}
Py_Finalize();
#ifdef RISCOS
if (Py_RISCOSWimpFlag)
fprintf(stderr, "\x0cq\x0c"); /* make frontend quit */
#endif
#ifdef __INSURE__
/* Insure++ is a memory analysis tool that aids in discovering
* memory leaks and other memory problems. On Python exit, the
* interned string dictionary is flagged as being in use at exit
* (which it is). Under normal circumstances, this is fine because
* the memory will be automatically reclaimed by the system. Under
* memory debugging, it's a huge source of useless noise, so we
* trade off slower shutdown for less distraction in the memory
* reports. -baw
*/
_Py_ReleaseInternedStrings();
#endif /* __INSURE__ */
return sts;
} Good God Almighty... it is big enough to sink the Titanic. It seems as though Python did the "Intro to Programming 101" trick and just moved all of main()'s code to a different function and called it something very similar to "main". Here's my question: Is this code terribly written, or are there other reasons to have a short main function? As it stands right now, I see absolutely no difference between doing this and just moving the code in Py_Main() back into main(). Am I wrong in thinking this? | You cannot export main from a library, but you can export Py_Main, and then anyone using that library can "call" Python many times with different arguments in the same program. At that point, python becomes just another consumer of the library, little more than a wrapper for the library function; it calls Py_Main just like everyone else. (A minimal embedding sketch follows below.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/85657",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5039/"
]
} |
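To illustrate the answer above: because Py_Main is an exported library function, any embedding program can act as its own "python" front end. This is only a hedged sketch against the Python 2-era API quoted above (char **argv; Python 3's Py_Main takes wchar_t ** instead), and it assumes Python.h and the Python library are available to the compiler and linker:
#include <Python.h>

/* A trivial embedder: reuse the interpreter's whole command-line
   machinery (-c, -m, script files, ...) instead of reimplementing it. */
int main(int argc, char **argv)
{
    (void)argc;  /* unused in this sketch */

    /* Hypothetical hard-coded arguments; passing argc/argv through
       unchanged would reproduce the short main() shown in the question. */
    char *fake_argv[] = { argv[0], "-c", "print 'hello from an embedded interpreter'", NULL };

    /* Py_Main initializes the interpreter, runs the command, finalizes,
       and returns an exit status, just as it does for the real python binary. */
    return Py_Main(3, fake_argv);
}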
85,764 | According to a popular SO post, it is considered a bad practice to prefix column names with the table name. At my company every column is prefixed by a table name. This is difficult for me to read. I'm not sure of the reason, but this naming is actually the company standard. I can't stand the naming convention, but I have no documentation to back up my reasoning. All I know is that reading AdventureWorks is much simpler. In our company DB you will see a table, Person, and it might have a column named Person_First_Name or maybe even Person_Person_First_Name (don't ask me why you see Person 2x). Why is it considered a bad practice to prefix column names? Are underscores considered evil in SQL as well? Note: I own Pro SQL Server 2008 - Relational Database Design and Implementation. References to that book are welcome. | Underscores are not evil, just harder to type. What is bad is changing standards midstream without fixing all the existing objects. Now you have personId, Person_id, etc. and can't remember which table uses the underscores or not. Consistency in naming (even if you personally don't like the names) helps make it easier to code. Personally the only place I feel the need to use the tablename in a column is on the ID column (the use of just ID is an antipattern in database design, as anyone who has done extensive reporting queries can tell you. It's so much fun to rename 12 columns in your query every time you write a report.) That also makes it easier to immediately know the FKs in other tables, as they have the same name. However, in a mature database, it is more work than it is worth to change an existing standard. Just accept that it is the standard and move on; there are far more critical things that need to be fixed first. (A short reporting-query example follows below.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/85764",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/11107/"
]
} |
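To make the reporting complaint in the answer above concrete, here is a small hedged SQL sketch; the person/orders tables and their columns are invented:
-- With bare "id" primary keys, every report has to re-alias the keys by hand:
SELECT p.id AS person_id,
       o.id AS order_id,
       o.total
FROM   person p
JOIN   orders o ON o.person_id = p.id;

-- With table-qualified keys (person_id, order_id), the columns already carry
-- their meaning and foreign keys match their primary keys by name:
SELECT p.person_id,
       o.order_id,
       o.total
FROM   person p
JOIN   orders o ON o.person_id = p.person_id;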
85,845 | I heard that some big companies, e.g. Google and Facebook, use Perforce. Are there any reasons why SVN/Git cannot replace Perforce? | I'm reasonably proficient with svn, git, and Perforce, both as a user and at setting up and maintaining servers. For a company, or even a lone programmer like me, source control is a cost incurred in support of the real money-making activity, which is developing and selling code. So there are several factors to consider: How well does it fit with your development model? How easy is it for developers to learn and use? Are routine operations for developers fast? Is the process of using it a distraction from their real job, which is to write code? How easy is it to set up and maintain? How much does it cost to purchase and maintain? If you need help, how easy is it to get it? I'm going to skip the tl;dr detail about the pros and cons of the individual systems. Suffice it to say that when I went back to full-time consulting last year, I reviewed all three to decide which would let me make the most money as fast as possible by delivering quality software to my clients, and without requiring a lot of unpaid fooling around. When I took the political consideration of "FOSS is good and non-FOSS is evil" out of the equation, I wound up forking over for a Perforce license. And that's why big companies pick Perforce, too. Here are the tl;dr details from the comments, plus a little more. Addressing svn is easy: compared to Perforce, it's dog-slow. I worked at a company that did embedded Linux for cell phones, and our complete sources ran 9 GB; they used Perforce. Once you had the code, updating to the latest sources normally took seconds on the LAN, or a couple of minutes over a VPN connection from my house. With svn, it would have been minutes and hours respectively. git vs. Perforce is more complicated. Many companies feel they have good business reasons to use a centralized repository with access control, and to make it easy to commit there and hard to do anything else - and Perforce fits that model perfectly. However, git positively encourages people to work in a local branch, and there's no way to get it to work any differently. A developer can work entirely in a local branch and never commit to the central repo - so if a company doesn't want its people working that way, Perforce is a better option. There are other problems with git for some business needs. I worked at a company that used git, and I don't know how many times I heard this discussion: "I wish we were using [some other VCS], because I need to do [this] and I can't with git." "Of course you can do that with git." "How?" "Well, first you need to write a bash script..." "Never mind." And then there's the time it takes to initially populate a source tree that has a lot of history. With Perforce, because the history is kept on the server, you just get the latest versions of all the files, so it's really fast - even setting up that entire 9 GB tree I mentioned only took a couple of hours over a VPN. With git, it can take somewhere between a long time and an eternity. I sometimes have to clone GTK+ or the X server git repos, and that's a long lunch break, or maybe time for bed. Really, it's a matter of the right tool for the job. svn works fine for most of Apple's open source efforts, and would be awful for kernel hacking.
git works great for GTK+, but is incredibly slow for working inside WebKit - the source tree and history are just too huge (as I found out the hard way working with code from WebKit's svn-to-git portal). Perforce works well if you have a giant source tree and need centralized control. Each of them works fine in the right context. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/85845",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/34401/"
]
} |
85,856 | I want to learn to program in a 'future-proof' manner, if you like. Whilst Windows dominates the desktop OS marketplace (for now), obviously there is a lot of value in learning its languages/frameworks/API's and so on - this might be subject to change as new devices emerge or Windows shoots itself in the foot (over-friendly previews of Windows 8 don't look too appealing...). Would I be right in thinking that having a solid knowledge of C/C++ for back-end logic/low level programming and the like, combined with an extremely portable language like Java for GUI's and so on, would be a good basis for software development that will prove useful on the greatest number of systems? - I'm talking desktop PC's, tablets, phones. | If you want to be future-proof, the best advice I can give you is not to lock yourself into a single technology. So don't learn APIs blindly. Learn how they are conceived. What are the philosophies behind the scenes? What are their advantages and flaws? Think about software in general, not a specific technology. You can also work on good program design; going to OOP and AOP is a good choice IMO. But don't just understand the mechanism, truly work on the philosophy behind the mechanism. Don't neglect general computer science, like data structures and algorithms, because they are cross-technology knowledge that is always useful. Also go for good practices. You often have a dozen ways to do something, but most of them are crap: bug-prone, hard to maintain, hard to understand later or by another programmer, etc. Usually, the code is harder to read than to write. So learn how to spend a little more effort on writing to make the reading easier (because you'll read code more than you write it). Learn effective techniques to debug (smart use of logs and the debugger) and test (how to write code that can be unit tested easily and how to automate these tests). Then, you'll need a general technology background. I'm talking about very broad knowledge, like how a processor works (cache misses or branch prediction are a good start), about UNIX systems, about network protocols like IP, TCP and Ethernet, etc. In the end, learn how to learn. If you know how to learn, then you can adapt. You'll need some strong knowledge in specific technologies to be able to find a job, but those get outdated really quickly (think about COBOL for example, or web programming at the time of the IE/Netscape war). So don't rely on them to be future-proof. They will be key to getting hired, but definitely not what makes a great programmer and what will make your skills durable over time. EDIT: If you are just starting, you should definitely get something done. Anything, really. A game like Tetris or Snake is a good start, and fun. If you don't get things done, you'll spend too much time learning and never really get the experience needed to fully understand what you learn. Let's take design patterns as an example. Design patterns are great and you should definitely use them. But if overused they'll make your code complicated and hard to understand. You'll have to face the problem that a design pattern solves, and lose some time dealing with it or its side effects, to fully understand what the design pattern is about. Design patterns must be used as small refactorings over time as code grows. And you'll know a design pattern is needed when its benefit is bigger than the code complexity induced by its use. This requires experience. So definitely, get things done, then learn from your mistakes.
I can't insist on this enough: GET THINGS DONE! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/85856",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/26979/"
]
} |
85,910 | I do a lot of Java programming at my work (I'm an intern) and I was wondering if it is generally a rule to create javadoc to accompany my code. I usually document every method and class anyways, but I find it hard to adhere to Javadoc's syntax (writing down the variables and the output so that the parser can generate HTML). I've looked at a lot of C programming and even C++ and I like the way they are commented. Is it wrong not to supply javadoc with my code? | In any writing, you write for your audience. Your audience is the maintenance developer, which may be you, 3 years later, after you have forgotten the details of how it all works. Single-use, throwaway code can probably be commented less. APIs to be consumed by other developers need to be documented more. In no case does anyone need javadoc that only repeats the method signature (e.g. "This is a method with a return value of void and a name of HelloWorld and is invoked with no parameters"). (A short illustration follows below.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/85910",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/22067/"
]
} |
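A small, hedged Java illustration of the point above; the class and methods are invented, and the two variants exist only to contrast a signature-repeating comment with one that tells the maintainer something the signature cannot:
/** Invented example: the first Javadoc merely repeats the signature, the second documents intent. */
public class RequestConfig {

    private int timeoutMillis;

    /** Sets the timeout. This method is named setTimeout, takes an int, and returns void. */  // adds nothing
    public void setTimeout(int timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    /**
     * Sets how long a pending request may wait before it is aborted.
     *
     * @param timeoutMillis the wait limit in milliseconds; zero or negative values disable the limit
     */
    public void setTimeoutDocumented(int timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }
}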
85,975 | I see a lot of crap online about how Erlang kicks node.js' ass in just about every conceivable category. So I'd like to learn Erlang, and give it a shot, but here's the problem. I'm finding that I have a much harder time picking up Erlang than I did picking up node.js. With node.js, I could pick a relatively complex project, and in a day I had something working. With Erlang, I'm running into barriers, and not going nearly as quickly. So... for those with more experience, is Erlang complicated to learn, or am I just missing something? Node.js might not be perfect, but I seem to be able to get things done with it. | First of all, I agree with JUST MY correct OPINION's answer regarding learning Erlang. It's a mostly functional language (although concurrency plays a big role), and all of its features were added to work towards fault-tolerance and robustness, which are not exactly the same design goals as Javascript's in the first place. Second of all, leaving Node.js to get into Erlang is a bit misplaced. Node.js is a single server/framework that goes out of its way to do everything in an event-driven manner with the help of callbacks. Erlang has its own framework (OTP), but it's not on the same level at all. If you plan on learning Erlang, I suggest my blog entry An Open Letter to the Erlang Beginner (or Onlooker) as intro reading before diving into tutorials. The only thing you can compare Erlang and Node.js on, in terms of patterns and usage, is how they are event-driven. However, there are two major differences here. Node.js' model is based on callbacks bound to events. Erlang is based on message queues and selective receives. What are the implications there? First of all, if you do things in a callback-based manner, the only way you carry state around is to either have it global or get into continuation-passing style programming. Secondly, you have to care for the full event matrix yourself. One example of this: imagine a very simple finite state machine - an event-driven mutex semaphore. The mutex semaphore has two states: locked and free. Whenever a given unit of computation (worker, process, function or thread) wants to gain access to the mutex, it has to fire an event that tells it 'I'm interested'. Now you have to care for the following types of events: (1) the mutex is free and you ask to obtain the lock; (2) the mutex is locked by someone else and you want to obtain the lock; (3) the mutex is locked by yourself and you want to free the mutex. Then you have additional events to consider, such as timing out to avoid deadlocks: (4) the mutex has been locked, you waited too long, and a give-up timer fires off; (5) the mutex has been locked, you waited just about too long, obtained the lock, and then the timeout fired off anyway. Then you also have the out-of-band events: (6) you just locked the mutex while some worker expected it to be free, so that worker's request has to be queued and handled once the mutex is free; (7) you need to make all of the work asynchronous. The event matrix gets complex very fast, and our FSM here only has 2 states. In the case of Erlang (or any language with selective receives and async with potentially synchronous events), you have to care about a few cases: (1) the mutex is free and you ask to obtain the lock; (2) the mutex is locked by someone else and you want to obtain the lock; (3) the mutex is locked by yourself and you want to free the mutex. And that's it.
The timers are handled in the same cases as the receives are done, and for anything that has to do with 'wait until it's free', the messages are automatically queued: the worker only has to wait for a reply. The model is much, much simpler in these cases. This means that in general cases, CPS and callback-based models such as the one in node.js either ask you to be very clever in how you handle events, or ask you to take care of a whole complex event matrix in full, because you have to be called back on each inconsequential case that results from weird timing issues and state changes. Selective receives usually allow you to focus only on a subgroup of all the potential events and allow you to reason with far more ease about events in that case. Note that Erlang has a behaviour (design pattern/framework implementation) of something called gen_event. The gen_event implementation allows you to have a mechanism very similar to what's being used in node.js, if that's what you want. There will be other points that differentiate them: Erlang has preemptive scheduling while node.js makes it cooperative; Erlang is better suited to very large-scale applications (distribution and all), but Node.js and its community are usually more web-oriented and knowledgeable about the latest web trends. It's a question of choosing the best tool, and this will depend on your background, your type of problem, and your preferences. In my case, Erlang's model just fits my way of thinking very well. This is not necessarily the case for everyone. Hope this helps. (A tiny selective-receive sketch of the mutex follows below.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/85975",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7165/"
]
} |
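To ground the mutex comparison in the answer above, here is a hedged Erlang sketch of the three-case version; the module, function, and message names are invented, and timeouts and error handling are deliberately omitted:
-module(mutex).
-export([start/0, lock/1, unlock/1]).

start() -> spawn(fun free/0).

%% Client API: ask for the lock and block until the mutex replies.
lock(Mutex) ->
    Mutex ! {lock, self()},
    receive ok -> ok end.

unlock(Mutex) ->
    Mutex ! {unlock, self()},
    ok.

%% State 1: nobody holds the lock. Only a lock request is relevant here;
%% anything else simply waits in the mailbox.
free() ->
    receive
        {lock, Pid} ->
            Pid ! ok,
            locked(Pid)
    end.

%% State 2: Owner holds the lock. The selective receive matches only an
%% unlock from that owner; competing lock requests stay queued until we
%% return to free() and pick them up in order.
locked(Owner) ->
    receive
        {unlock, Owner} ->
            free()
    end.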
86,142 | I have read these questions and answers, but I still don't understand what exactly I need to do if I dynamically link with a library that uses an LGPL license (the SDL library in my case). If I understand the LGPL text correctly, I need to somehow provide the source for the library.
Is this enough?
If not, what else needs to be done? | The LGPL's basic requirement is to separate the LGPL-licensed library and your own product well enough. That should allow users to supply their own version of the library instead of the one you've shipped with your software (with the bugs fixed, for instance). To accomplish this, you have two options: use the LGPL code as a shared library (so the users would just copy their binary of the library over the one you ship), or supply the source code of the whole project (so the users can copy their source of the library and recompile everything). Note, however, that mere separation is not enough, though required. You should provide your users a documented way to replace the library with their version (i.e., how to upload firmware, or how to recompile a Python wrapper for an LGPL C++ library). The second notable clause is the attribution requirement. This should help to promote the name of the original developer of the library, and acknowledge that some of this cool software might have been developed by someone else :). In the relevant section of the "About" window or a README file (if your license is Apache, this would be the NOTICE file), you should list the name of the LGPL work you used. Note that I am not a lawyer, and this is not legal advice. Note that I am also not a plumber, and this is not sanitary advice. (A short dynamic-linking illustration follows below.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/86142",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/20065/"
]
} |
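As a concrete, hedged illustration of the "shared library" option for the SDL case in the answer above (a Linux toolchain is assumed, the file names are invented, and sdl-config ships with classic SDL 1.2):
# Build against SDL as a shared library rather than compiling it in statically.
gcc -o mygame main.c `sdl-config --cflags --libs`

# Confirm that SDL is a runtime dependency the user can replace,
# e.g. by installing their own patched libSDL:
ldd mygame | grep -i sdl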
86,253 | I work as a full-time developer. My workplace, however, is very limiting in the technologies and programming languages I can use. All of the work is done in C++. It is clear that C++ is rapidly losing (or has maybe already lost) its leading position. (Please don't flame me; I have years and years of C++ experience, and I love this language, I am merely stating a fact.) I have a few ideas for Java/Android projects as well as a project I would like to implement in C#. I see this as a way for me to stay current with the job market's trends, and I hope that it will help me find my next job in a more up-to-date area. So here's the problem: my normal workday is 10-11 hours; after finishing with the kids and house chores I get about 1-2.5 hours before I am too tired to think, much less code. At that point I am going to bed frustrated, disappointed with myself for not being able to stick with my plans, and then I wake up the next morning to do it all again. I have a few hours more during the weekends, but clearly I would need to do something different if I want to reach any of my goals.
Did any of you guys have a similar problem, and successfully resolved it? | Biggest thing: DON'T. GET. FRUSTRATED. Hang in there. Do your best. Learn what you can. Steal every minute. Enjoy the process! Second biggest thing: Think long-term. Think, "In a year from now, I want to have XYZ accomplished." When I look at what I've done over the past 6 months, I'm really impressed. But when I think about what I've gotten done tonight ... not so much. I'm in a very similar situation. I have a full time job programming (PL/SQL, .NET, Javascript). Wife, two kids, house. I just finished a project -- trevorschinesereader.com. I started it last October. It's not the greatest thing in the world but I'm really proud of it. Now that I'm done with it I've started learning iPhone programming. It's a lot of fun, and for me, it's much more about the process than about the outcome. I love learning and love gaining new skills and love building things. I try to remember that when the frustration sets in. There are a couple of things that help me: I live close to work. 7 minute commute. Gives me more time. I think on my way to work. Think about designing, about new features, etc. If nothing else, this keeps me excited and engaged with the projects I'm working on. Design during lunch or when on conference calls. Just a piece of paper and pen gets a lot of good work done. Then you have something when you go back to "work" at night. Code every day. Even just a little. Don't ever get discouraged. EVER. EVER! Don't ever think that you're moving too slowly. That will only discourage you. Just. Keep. Going. No matter how small the progress you're making. My kids go to bed around 8. I use from 8:00 to 11:00 or midnight to code. I can stay up that late b/c I don't have a long commute. Sucks being tired all the time, but for me it's worth it. Also, the wife is understanding and is OK going to bed alone. She is a saint for that. At least Friday or Saturday night I stay up really late (3 or 4 AM). Then sleep in and take a nap the next day while the wife takes care of the kids. I try to give her a nap on the other day. One last thing: Spend time with your kids. I find myself getting frustrated with my kids on Saturday because they want my ENTIRE attention for the ENTIRE day. But I want to be programming. I have to remember that they deserve my time and that them knowing their dad loves them is about a gazillion times more important than my little coding projects. It's now 11:50 PM and my wife just woke up and is asking me when I'm coming to bed. Blast! I just wasted 15 minutes writing this post. Hope it was worth it! Good night. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/86253",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/28672/"
]
} |
86,273 | In a shop that is intended to be tight-knit and supportive, should it be part of the culture that senior developers are paired with junior developers as mentors? Or should this mentoring be something that is more organic and spontaneous, i.e. not required, but allowed to develop without artificial encouragement? | I think it should be encouraged but not required; seniors shouldn't be assigned to juniors or anything like that, or else you'll end up in Dilbert-land. The "mentor-mentee" relationship requires some level of friendship at its core, as well as a healthy dose of MUTUAL respect. You don't get that by just telling two people to go off and "ment". | {
"source": [
"https://softwareengineering.stackexchange.com/questions/86273",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/14969/"
]
} |
86,502 | Possible Duplicate: When do you know it's time to move on from your current job? Problem: I had no prior experience when I interviewed, so I didn't know exactly what to ask them about the company when I was hired. I've spotted a number of warning signs and annoyances since then, such as: Four developers when I started, with everyone talking about "Ben" or "Ryan" leaving. One engineer hired thirty days before me, one hired two weeks after me. Most of the department has been hiring a large number of people since I started. Extremely limited internet access. I understand the idea from an IT point of view, but not only is Facebook blocked, but so are YouTube, Twitter, and Pandora. I've also figured out that they block all access to non-DNS websites ( http://xxx.xxx.xxx.xxx/ ) and strangely enough Miranda-IM. Low cubicles. Which is fine because I like my immediate coworkers, but they put the developers with the customer service, customer training, and QA department in a huge open room. Noise, noise, noise, and people stop to chitchat all day long. Headphones only go so far. Several emails have been sent out by my boss since I started telling us programmers to not talk about non-work-related things like Video Games at our cubicles, despite us only spending maybe five minutes every few hours doing so. Further digging tells me that this is because someone keeps complaining that the programmers are "slacking off". People are looking over my shoulder all day. I was in the Freenode webchat to get help with a programming issue, and within minutes I had an email from my boss (to all the developers) telling us that we should NOT be connected to any outside chat servers at work. Version control system from 2005 that we must access with IE and keep the Java 1.4 JRE installed to be able to use. I accidentally updated to Java 6 one day and spent the next two days fighting with my PC to undo this "problem". No source control, no comments on anything, no standards, no code review, no unit testing, no common sense. I literally found a problem in how they handle string resource translations that stems from the simple fact that they don't trim excess white spaces, leading to developers doing: getResource("Date: ") instead of: getResource("Date") + ": ", and I was told to just add the excess white spaces back to the database instead of dealing with the issue directly. Some of these things I'd like to try to understand, but I like having IRC open to talk in a few different rooms during the day and keep in touch with friends/family over IM. They don't break my concentration (not NEARLY as much as the lady from QA stopping by to talk about her son), but because people are looking over my shoulder all day as they walk by they complain when they see something that's not "programmer-looking work". I've been told by my boss and QA that I do good, fast work. I should be judged on my work output and quality, not what I have up on my screen for the five seconds you're walking by. So, my question is, even though I'm just barely at my 90 days: How do you decide to move on from a job and look elsewhere, or when you should start working with your boss to resolve these issues? Is it even possible to get the boss to work with me on many of these things? This is the only place I heard back from even though I sent out several resumes a day for several months, and this place does pay well for putting up with their many flaws, but I'm just starting to get so miserable working here already. Should I just put up with it?
Edit: I appreciate all the responses! First: I'm in the United States, and I'm salaried, and I'm about 5 days away from hitting the end of my 90-day probation. While it is my immediate boss sending out the emails I mentioned, I believe it is actually the department manager doing the majority of the arm-bending. He's also the only one with an office. My immediate boss is a nice guy; he is just complacent. As I mentioned, the Facebook/Twitter/YouTube blocking makes sense from an IT (and management) point of view. Some people do need to have these things limited, otherwise they would waste time on them. However, on occasion, I've come across YouTube videos that very well could be useful, such as when doing Android development (which I am). It would also be useful when it comes to the "research" part of the "R&D" department I work in. It could just be my naivety here, but I feel people should be judged on their work (speed and quality), not what they happen to be doing when you walk by. If I'm getting my work done, and the work is good, then does it matter what website I'm on? Related to the above, I do understand the reasoning for limiting IM and/or IRC access while at work, and I do understand that they're paying me to be here and can dictate to me what I can and can't do. Though, again, judge me on my work, not what's on my screen. Is this really so uncommon in a software development field? It is my understanding and my experience that staring at an IDE for 8 hours, typing all the while, is completely unrealistic. I can chat on IM/IRC, enjoying my time at work more simply because of this, without dumping my "short term memory", continuing to work on the problem in my head. I can't do that when Becky from QA stops to talk to me about lunch. I feel I did luck out with this job, and I'm glad I have it. This place does have its positives: lax on the dress code, lax on when we have to be at work (before 9am ideally, some leeway here), great benefits. But I'd seriously take less money if it meant that I could enjoy my day more, even just in the simple things that, I admit, I am mostly just whining about. I just feel that if I'm going to be sitting behind a PC for 8+ hours a day, I'd like it to be as enjoyable as possible. | Run away, run very, very far away. And fast. You can try to talk to your boss about the situation, but from what you've written, it sounds like there's a fundamental lack of understanding about the importance to programmers of communication with outside resources, general collaboration, and just taking your mind off your work for a minute or two. Frankly, it sounds like a sweatshop for programmers. In my city, we have a tech support company (they do tech support for a bunch of big companies, kind of a domestic outsource thing) that runs the same way. It's known as the "soul-sucking job from hell," because people are treated like prisoners and it was so crazy that you could potentially get fired for going to the restroom too many times in a shift. Edit - Alright, let me make this a little more clear and deserving of the upvotes it (unexpectedly) got. Four developers when I started, with
everyone talking about "Ben" or "Ryan"
leaving. One engineer hired thirty
days before me, one hired two weeks
after me. Most of the department has
been hiring a large number of people
since I started. From the sound of it, there's a very high turnover rate. Turnover rate is actually a good indicator of the health of a company's environment. If people don't like a place, they're going to leave, it's as simple as that. While a revolving door is expected in places like retail, it's not so much desirable in an office environment (hence why places like Best Buy went through, and are going through, such radical changes as shifting to a ROWE ). From a business perspective, turnover is bad, because it's costly. It costs quite a bit of money to go through the hiring and training process for each employee. Do this four or five times a month, without anyone actually staying, and you see how this can be a problem. If a company is expecting this to happen, then it's likely one that doesn't treat its employees well (one of the reasons why retail is notorious for high turnover rates, anyone who's worked retail knows what I'm talking about). Extremely limited internet access. I
understand the idea from an IT point
of view, but not only is Facebook
blocked, but so it Youtube, Twitter,
and Pandora. I've also figured out
that they block all access to non-DNS
websites (http://xxx.xxx.xxx.xxx/) and
strangely enough Miranda-IM. This is, as others have pointed out, common in the corporate environment. Twitter is more or less as bad as Facebook. The non-DNS stuff is likely for security reasons, though I can see where it could hinder you in doing your job, depending on the details (one has to be able to access a remote web server via IP address if the domain name hasn't been set up/propagated yet, for example). YouTube is more questionable, but I think it falls under "block it all, because it's easier than trusting people not to waste their time." Low cubicles. Which is fine because I
like my immediate coworkers, but they
put the developers with the customer
service, customer training, and QA
department in a huge open room. Noise,
noise, noise, and people stop to
chitchat all day long. Headphones only
go so far. Low cubes are great for fostering a collaborative environment (you're not physically walled off from others). However, sticking devs in with departments that talk as part of their job is a red flag. This indicates that management doesn't understand the need for allowing devs to have at least a quiet enough environment to be able to hear themselves think. If this were the biggest problem, you could probably confront management about it and see about working something out, for the benefit of your entire department. People stopping to chat all day is a typical part of the corporate environment. Depending on who it is, you may or may not be able to dismiss them, especially as a new hire. Do that to the wrong person and you could be branded as "not a team player" and effectively ruin your chances for advancement if you stayed. Several emails have been sent out by
my boss since I started telling us
programmers to not talk about
non-work-related-things like Video
Games at our cubicles, despite us only
spending maybe five minutes every few
hours doing so. Further digging tells
me that this is because someone keeps
complaining that the programmers are
"slacking off". This is where I start getting to "GTFO" territory (with giving the OP the benefit of the doubt and it really only is a couple minutes in the day). It completely ignores the importance of taking your mind off your work for a moment and the productivity gains that come with it. It also shows that someone in company thinks that a programmer is supposed to do nothing but churn out code every second that they are on the clock. Anyone with even a cursory knowledge of how the creative process of programming works should know better. There are laws requiring employers give at least half an hour (unpaid) for lunch, and two fifteen minute breaks in a day, designed specifically for the benefit of the employee, to keep less-than-ethical employers from abusing employees. People are looking over my shoulder
all day. I was in the Freenode webchat
to get help with a programming issue,
and within minutes I had an email from
my boss (to all the developers)
telling us that we should NOT be
connected to any outside chat servers
at work. This is, in part, why I left cubicleville. Shoulder surfers are common in a corporate environment, but that doesn't make it any less distracting. Again, I can understand, to an extent, the rules against chat, but developers should still be able to have the resources to do their jobs available to them. This should include some form of communication network. Where I draw the line on this is that, instead of bringing you into the office and allowing you to explain what you were doing, as well as to explain why such rules are in place, there was a blanket demand from the top that any kind of chat was not tolerated. This says to me that management isn't interested in listening to what employees have to say, including business justifications for using such things as chat networks. When management starts getting a God complex and refuses to listen to the lifeblood of the company, then things start going downhill, quickly. Version control system from 2005 that
we must access with IE and keep the
Java 1.4 JRE installed to be able to
use. I accidentally updated to Java 6
one day and spent the next two days
fighting with my PC to undo this
"problem". Regarding old software that's resistant to updates, welcome to the corporate world. Be glad you're not expected to work on Win2k machines. That said, the way the version control is locked down, in my opinion, adds to the "they really need a lesson in tech management." While I don't expect cutting-edge tools, users shouldn't have to fight it to do their job. This fosters poor habits (who wants to use a version control system they spend more time fighting with, especially with people breathing down their neck to churn out ten thousand lines a day?), and also opens the systems up to attack vectors, because the software (particularly the Java RE) is not up to date and therefore still has security holes that would be patched if only they kept up. Good luck getting management to understand this, though (no matter where you go in big companies). IT in general to them is that arcane thing that they occasionally call when they can't get their email. Getting them to wrap their head around the idea of making the company money by taking measures to ensure they don't lose money is almost impossible unless they, themselves are an IT person (which, in upper management, is pitifully rare). No source control, no comments on
anything, no standards, no code
review, no unit testing, no common
sense. I literally found a problem in
how they handle string resource
translations that stems from the
simple fact that they don't trim
excess white spaces, leading to
developers doing: getResource("Date:
") instead of: getResource("Date") +
": ", and I was told to just add the
excess white spaces back to the
database instead of dealing with the
issue directly. This is a huge red flag, because this indicates that they don't actually care about the quality of the product that the company supposedly relies on. Even worse, they actively resist improving it. They also actively discourage anyone coming forth to improve the product. Some of these things I'd like to try
to understand, but I like having IRC
open to talk in a few different rooms
during the day and keep in touch with
friends/family over IM. They don't
break my concentration (not NEARLY as
much as the lady from QA stopping by
to talk about her son), but because
people are looking over my shoulder
all day as they walk by they complain
when they see something that's not
"programmer-looking work". I've been
told by my boss and QA that I do good,
fast work. I should be judged on my
work output and quality, not what I
have up on my screen for the five
seconds you're walking by. Again, welcome to the corporate world, where looking like you're being productive is often more important than actually being productive. Again, another thing that needs fought against, either by not working for the company (ideally, the equivalent of "voting with your wallet"), or by getting a movement going to abolish it (again, see note about ROWE). While it may be rather typical, it's a huge money waster for the company, because people often fall into what's known as presenteeism , which means they aren't actually doing anything productive, but look it, and may even be the first to arrive and last to leave, so they're the ones that get noticed as "the harder worker," "the team player," and likely the one on the management track. I do agree with the others, though, that "keeping in touch with friends/family" should be kept to a minimum. You don't need to be talking with them all day, and many people are prone to allowing those chats to take over your work time. It is, however, a blanket rule, probably thanks to those that couldn't keep it under control (or at least the fear of people doing such). One thing you could do is confront the person complaining (I suspect it's the same person that complains about programmers "slacking off"), and see what they think "programming" should look like. Why I say run away, though, isn't just about the points, themselves, that you've mentioned. As others have pointed out, a lot of it is pretty typical (though, I would note, that it says nothing about whether such things should be that way). What I see are underlying issues that make this an unhealthy work environment. Primarily, middle-upper management does not appear interested in talking with "the commoners," and their method of communication when someone does something they don't like suggests they aren't open to two-way communication. This causes a disconnect between employees and management, which will eventually lead to resentment and rebellious behavior (varying degrees, from individuals covertly doing things that are against the rules, to outright strikes/walk-outs) from the employees. Additionally, management appears to be looking to micromanage the employees. This stifles productivity in just about anyone, but especially creatives (including programmers), and again, leads to poor working conditions. This also says they don't trust their employees, and thus have to treat them like prisoners in order to "keep them in line." This is bad for any environment, because employees aren't prisoners. They're adults, and should be treated as such. People will live up to whatever standards they are held to. They also don't appear to foster actual camaraderie and collaboration, or an overall learning-oriented environment. Part of building good teams is being able to identify with your coworkers on some sort of personal level. This means talking about non-work stuff once in a while. This can also mean talking about things that don't appear to have anything to do with programming, but really can help solve the problem. This also means having enough access to the outside world to be able to do the research necessary to do the job (the key part of your department is research ), which can include using nontraditional resources. I still say it's probably a good idea to leave. 
You can still try to talk with your manager, but I still doubt that it will get anywhere (from my experience with large companies, getting them to change is met with a metric ton of resistance, even for people who have some clout). Even if your boss is on board, it still has to go all the way up the chain, and it gets progressively harder as the manager gets farther removed from the reality of your job. Whether it's bad enough to walk out on is a decision only you can make. In my opinion, it's not the end of the world to walk out on a job, but it's still a good idea to only do so if the job is so bad that it's affecting your mental well-being. In the US, jobs are considered "at will," which means that unless you signed a contract, you have the legal right to leave a job, without notice, for any (legal) reason or no reason. As I said in the comments, no employer is worth sacrificing your sanity over, and I stand by that comment. I, myself, have walked out on two jobs in my life, and they were among the best decisions I've made. Even as a new grad, you don't have to just take whatever you can get. Hold yourself, and potential employers, to high standards, and you can avoid places that aren't healthy for you to work at. It might take a little more time to find it, but it's well worth it. A lot of big companies like to abuse programmers, and that will never change as long as people are willing to work for those companies. Since it sounds like large corporations in general might not be your cup of tea, check out startups and small companies. They generally allow more autonomy, though you are generally held to higher standards and expected to learn quickly (small businesses tend to be far more agile than big companies). I recommend checking out the book Why Work Sucks (And How to Fix It). It talks a lot about how the way the corporate world is run is highly inefficient and not conducive to actual productivity (not to mention how it can be downright destructive to one's life). It also covers a lot of the things you've mentioned in your question. (On a side note, we've been mod-hammered, so if anyone wants to continue the discussion, I created a chat room for this and it can be found here.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/86502",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/28749/"
]
} |
86,589 | A knowledgeable friend recently looked at a website I helped launch, and commented something like "very cool site, shame about the inline scripting in the source code". I'm definitely in a position to remove the inline scripting where it occurs; I'm vaguely aware that it's "a bad thing". My question is: what are the real problems with inline scripting? Is there a significant performance issue, or is it mostly just a matter of good style? Can I justify immediate action on the inline scripting front to my superiors, when there are other things to work on that might have a more obvious impact on the site? If you pulled up to a website, and took a peek at the source code, what factors would lead you to say "hmm, professional work here", and what would cause you to recoil from an obviously amateurish job? Okay, that question turned into multiple questions in the writing. But basically, inline scripting - what's the deal? | what are the real problems with inline scripting? Is there a significant performance issue, or is it mostly just a matter of good style? The advantages are not performance-based; they are (as Michael pointed out in his comment) more to do with separation of the view and the controller. The HTML/CSS file should, ideally, contain only the presentation, and separate files should be used for scripts. That makes it easier for you (and your peers) to read and maintain both the visual and functional aspects (a small before/after sketch follows below). Can I justify immediate action on the inline scripting front to my superiors, when there are other things to work on that might have a more obvious impact on the site? No, probably not. It can be very hard to convince the powers that be of the value of pure maintenance work, even if you believe it will save them money in the long run. In this case though, I don't think it is so important that you should stop everything and get rid of your inline scripting. Instead, just make sure that you make a conscious effort to rectify areas as you work on them for other reasons. Refactoring code should be something you do regularly, but only a little bit at a time. If you pulled up to a website, and took a peek at the source code, what factors would lead you to say "hmm, professional work here", and what would cause you to recoil from an obviously amateurish job? The number one factor that would tell me it is not professional is the overuse of tables or divs. Here is an article explaining why neither should be overused. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/86589",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8361/"
]
} |
86,636 | In my current project (a game, in C++), I decided that I would use Test Driven Development 100% during development. In terms of code quality, this has been great. My code has never been so well designed or so bug-free. I don't cringe when viewing code I wrote a year ago at the start of the project, and I have gained a much better sense for how to structure things, not only to be more easily testable, but to be simpler to implement and use. However... it has been a year since I started the project. Granted, I can only work on it in my spare time, but TDD is still slowing me down considerably compared to what I'm used to. I read that the slower development speed gets better over time, and I definitely do think up tests a lot more easily than I used to, but I've been at it for a year now and I'm still working at a snail's pace. Each time I think about the next step that needs work, I have to stop every time and think about how I would write a test for it, to allow me to write the actual code. I'll sometimes get stuck for hours, knowing exactly what code I want to write, but not knowing how to break it down finely enough to fully cover it with tests. Other times, I'll quickly think up a dozen tests, and spend an hour writing tests to cover a tiny piece of real code that would have otherwise taken a few minutes to write. Or, after finishing the 50th test to cover a particular entity in the game and all aspects of its creation and usage, I look at my to-do list and see the next entity to be coded, and cringe in horror at the thought of writing another 50 similar tests to get it implemented. It's gotten to the point that, looking over the progress of the last year, I'm considering abandoning TDD for the sake of "getting the damn project finished". However, giving up the code quality that came with it is not something I'm looking forward to. I'm afraid that if I stop writing tests, then I'll slip out of the habit of making the code so modular and testable. Am I perhaps doing something wrong to still be so slow at this? Are there alternatives that speed up productivity without completely losing the benefits? TAD? Less test coverage? How do other people survive TDD without killing all productivity and motivation? | Let me begin by thanking you for sharing your experience and voicing your concerns... which I have to say are not uncommon. Time/Productivity: Writing tests is slower than not writing tests. If you scope it to that, I'd agree. However, if you run a parallel effort where you apply a non-TDD approach, chances are that the time you spend on break-detect-debug-and-fix cycles with existing code will put you in the net negative. For me, TDD is the fastest I can go without compromising on my code-confidence. If you find things in your method that are not adding value, eliminate them. Number of tests: If you code up N things, you need to test N things. To paraphrase one of Kent Beck's lines: "Test only if you would want it to work." Getting stuck for hours: I do too (sometimes, and not for more than 20 minutes before I stop the line). It's just your code telling you that the design needs some work. A test is just another client for your SUT class. If a test is finding it difficult to use your type, chances are so will your production clients. Similar tests tedium: This needs some more context for me to write up a counterargument. That said, stop and think about the similarity. Can you data-drive those tests somehow? Is it possible to write tests against a base-type?
Then you just need to run the same set of tests against each derivation (a minimal data-driven sketch follows this entry). Listen to your tests. Be the right kind of lazy and see if you can figure out a way to avoid tedium. Stopping to think about what you need to do next (the test/spec) isn't a bad thing. On the contrary, it's recommended so that you build "the right thing". Usually, if I can't think of how to test it, I can't think of the implementation either. It's a good idea to blank out implementation ideas till you get there; maybe a simpler solution is overshadowed by a YAGNI-ish pre-emptive design. And that brings me to the final query: how do I get better? My (or an) answer is: read, reflect and practice. For example, of late I keep tabs on whether my rhythm reflects RG[Ref]RG[Ref]RG[Ref] or RRRRGRRef, the percentage of time spent in the Red / compile-error state, and whether I am stuck in a Red/broken-build state. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/86636",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/28923/"
]
} |
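To illustrate the data-drive suggestion from the answer above, here is a minimal NUnit sketch. The Wallet class and the specific values are invented for illustration; the point is that one parameterised test replaces a family of near-identical, copy-pasted tests.

using NUnit.Framework;

public class Wallet
{
    // Tiny illustrative type so the test below is self-contained.
    public int Balance { get; private set; }
    public Wallet(int opening) { Balance = opening; }
    public void Debit(int amount) { Balance -= amount; }
}

[TestFixture]
public class WalletTests
{
    // One parameterised test covers what would otherwise be several
    // tests that differ only in their input values.
    [TestCase(100, 30, 70)]
    [TestCase(100, 100, 0)]
    [TestCase(50, 0, 50)]
    public void Debit_ReducesBalance(int start, int debit, int expected)
    {
        var wallet = new Wallet(start);
        wallet.Debit(debit);
        Assert.AreEqual(expected, wallet.Balance);
    }
}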
86,754 | There is some code which is GPL or LGPL that I am considering using for an iPhone project. If I took that code (JavaScript) and rewrote it in a different language for use on the iPhone would that be a legal issue? In theory the process that has happened is that I have gone through each line of the project, learnt what it is doing, and then reimplemented the ideas in a new language. To me it seems this is like learning how to implement something, but then reimplementing it separately from the original licence. Therefore you have only copied the algorithm, which arguably you could have learnt from somewhere else other than the original project. Does the licence cover the specific implementation or the algorithm as well? EDIT------ Really glad to see this topic create a good conversation. To give a bit more backing to the project, the code involved does some kind of audio analysis. I believe it is non-trivial to learn or implement, although I was prepared to embark on this task (I'm at the level where I can implement an FFT algorithm, and this was going to go beyond that.) It is a fairly low LOC script, so I didn't think it would be too hard to do a straight port. I really like the idea of rereleasing my port as well as using it in the application. I don't see any problem with that, and it would be a great way to give something back to the community. I was going to add a line about not wanting to discuss the moral issues, but I'm quite glad I didn't as it seems to have fired the debate a bit. I still feel a bit odd about using open source code to learn from. Does this mean that anything one learns from an open source project is not allowed to be used in a closed source project? And how long after or different does an implementation have to be to not be considered violation of the licence? Murky! EDIT 2 -------- Follow up question | I am not a lawyer. This is not legal advice. That said, taking every line of an application and changing it slightly for the sole purpose of circumventing copyright law is blatantly, obviously, creating a derived work with no plausible defense whatsoever. Even the boughtest judge and jury will definitely find against you if you ever get dragged into court. Just as a comparison: companies who do need to rewrite something for interoperability usually hire different sets of people to understand the source, and to create the port ("clean-room implementation"), so that no one can accuse them of creating a derived work. What you propose is the exact opposite. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/86754",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1718/"
]
} |
86,795 | Since there is an algorithm to blur images, so that part of it cannot be recognised, can we reverse the algorithm and unblur part of that image? Is there a program that already does that? Is it even possible, even in the near future? | Deconvolution (also see here and here ) can partially deblur a photo. There is plenty of software out there that implements it, and this was even a fairly basic exercise in an image processing class I took in college. It's not possible to completely reverse the blurring, since it is lossy, but a lot of information can be restored (also see here (PDF)). A motion blurred photo will be easier to restore than something that's simply out of focus, though both can be restored to a degree.
"source": [
"https://softwareengineering.stackexchange.com/questions/86795",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/10282/"
]
} |
86,850 | My team relies a lot on colour within our code to outline features that need to be worked on (we colour lines of code that need attention). We have a close friend who is colourblind and wants to join our team. What can we do to highlight what needs work without using colour? We have about 25 people on the team that are all accustomed to the line colouring system and we have found it to be most efficient. | Show him the colors you use to highlight code, and have him tell you which ones he can't tell apart. Then change those colors to ones he can work with. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/86850",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/10964/"
]
} |
86,852 | I'm hoping there's a book or something out there for me to get... If I have a class that has Collection as an instance variable, what is that method of coding called? A design pattern? If so, where can I find more information on it? As I've been working with this mentor, he's really helped me understand my programming weakness and that weakness is thinking in terms of collections or relationships between objects. It just seems so difficult for me right now and I need to read to become smarter. My mentor is a great guy, but I'm starting to feel like I need to learn more on my own. public class Evaluation
{
private List<Criterion> criterion = null;
public Evaluation()
{
criterion = new List<Criterion>();
}
} | Composition. It's not a design pattern; it's a fundamental practice of OOP (as opposed to having your class inherit from the collection class). A minimal sketch contrasting the two approaches follows this entry. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/86852",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
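To make the contrast in the answer above concrete, here is a minimal C# sketch: the first class uses composition (it has a list), the second uses inheritance (it is a list). The Criterion type is borrowed from the question; everything else is invented for illustration.

using System.Collections.Generic;

public class Criterion
{
    public string Name { get; set; }
}

// Composition: Evaluation HAS a list of criteria and exposes only
// the operations that make sense for an evaluation.
public class Evaluation
{
    private readonly List<Criterion> criteria = new List<Criterion>();

    public void Add(Criterion c) { criteria.Add(c); }
    public int Count { get { return criteria.Count; } }
}

// Inheritance: EvaluationList IS a list, so every List<T> operation
// (Insert, RemoveAt, Clear, ...) leaks into its public surface.
public class EvaluationList : List<Criterion>
{
}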
86,904 | Has anyone thought about why so many of us repeat this same pattern using the same variable names? for (int i = 0; i < foo; i++) {
// ...
} It seems most code I've ever looked at uses i , j , k and so on as iteration variables. I suppose I picked that up from somewhere, but I wonder why this is so prevalent in software development. Is it something we all picked up from C or something like that? Just an itch I've had for a while in the back of my head. | i and j have typically been used as subscripts in quite a bit of math for quite some time (e.g., even in papers that predate higher-level languages, you frequently see things like "X i,j ", especially in things like a summation). When they designed Fortran, they (apparently) decided to allow the same, so all variables starting with "I" through "N" default to integer, and all others to real (floating point). For those who've missed it, this is the source of the old joke "God is real (unless declared integer)". Most people seem to have seen little reason to change that. It's widely known and understood, and quite succinct. Every once in a while you see something written by some psychotic who thinks there's a real advantage to something like: for (int outer_index_variable=0; outer_index_variable < 10; outer_index_variable++)
for (int inner_index_variable=0; inner_index_variable < 10; inner_index_variable++)
x[outer_index_variable][inner_index_variable] = 0; Thankfully this is pretty rare though, and most style guides now point out that while long, descriptive variable names can be useful, you don't always need them, especially for something like this where the variable's scope is only a line or two of code. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/86904",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/28905/"
]
} |
87,077 | I've written an XML text editor that provides 2 view options for the same XML text, one indented (virtually), the other left-justified. The motivation for the left-justified view is to help users 'see' the whitespace characters they're using for indentation of plain-text or XPath code without interference from indentation that is an automated side-effect of the XML context. I want to provide visual clues (in the non-editable part of the editor) for the left-justified mode that will help the user, but without getting too elaborate. I tried just using connecting lines, but that seemed too busy. The best I've come up with so far is shown in a mocked up screenshot of the editor below, but I'm seeking better/simpler alternatives (that don't require too much code). [Edit] Taking the heatmap idea (from: @jimp) I get this and 3 alternatives - labelled a, b and c: The following section describes the accepted answer as a proposal, bringing together ideas from a number of other answers and comments. As this question is now community wiki, please feel free to update this. NestView The name for this idea which provides a visual method to improve the readability of nested code without using indentation. Contour Lines The name for the differently shaded lines within the NestView. The image above shows the NestView used to help visualise an XML snippet. Though XML is used for this illustration, any other code syntax that uses nesting could have been used instead. An Overview: The contour lines are shaded (as in a heatmap) to convey nesting level. The contour lines are angled to show when a nesting level is being either opened or closed. A contour line links the start of a nesting level to the corresponding end. The combined width of contour lines gives a visual impression of nesting level, in addition to the heatmap. The width of the NestView may be manually resizable, but should not change as the code changes. Contour lines can either be compressed or truncated to achieve this. Blank lines are sometimes used in code to break up text into more digestible chunks. Such lines could trigger special behaviour in the NestView. For example the heatmap could be reset or a background color contour line used, or both. One or more contour lines associated with the currently selected code can be highlighted. The contour line associated with the selected code level would be emphasized the most, but other contour lines could also 'light up' in addition to help highlight the containing nested group. Different behaviors (such as code folding or code selection) can be associated with clicking/double-clicking on a Contour Line. Different parts of a contour line (leading, middle or trailing edge) may have different dynamic behaviors associated. Tooltips can be shown on a mouse hover event over a contour line. The NestView is updated continuously as the code is edited. Where nesting is not well-balanced, assumptions can be made about where the nesting level should end, but the associated temporary contour lines must be highlighted in some way as a warning. Drag and drop behaviors of Contour Lines can be supported. Behaviour may vary according to the part of the contour line being dragged. Features commonly found in the left margin such as line numbering and colour highlighting for errors and change state could overlay the NestView. Additional Functionality The proposal addresses a range of additional issues - many are outside the scope of the original question, but a useful side-effect.
Visually linking the start and end of a nested region The contour lines connect the start and end of each nested level. Highlighting the context of the currently selected line As code is selected, the associated nest-level in the NestView can be highlighted. Differentiating between code regions at the same nesting level In the case of XML, different hues could be used for different namespaces. Programming languages (such as c#) support named regions that could be used in a similar way. Dividing areas within a nesting area into different visual blocks Extra lines are often inserted into code to aid readability. Such empty lines could be used to reset the saturation level of the NestView's contour lines. Multi-Column Code View Code without indentation makes the use of a multi-column view more effective because word-wrap or horizontal scrolling is less likely to be required. In this view, once code has reached the bottom of one column, it flows into the next one: Usage beyond merely providing a visual aid As proposed in the overview, the NestView could provide a range of editing and selection features which would be broadly in line with what is expected from a TreeView control. The key difference is that a typical TreeView node has 2 parts: an expander and the node icon. A NestView contour line can have as many as 3 parts: an opener (sloping), a connector (vertical) and a close (sloping). On Indentation The NestView presented alongside non-indented code complements, but is unlikely to replace, the conventional indented code view. It's likely that any solution adopting a NestView will provide a method to switch seamlessly between indented and non-indented code views without affecting any of the code text itself - including whitespace characters. One technique for the indented view would be 'Virtual Formatting' - where a dynamic left-margin is used in lieu of tab or space characters. The same nesting-level data used to dynamically render the NestView could also be used for the more conventional-looking indented view. Printing Indentation will be important for the readability of printed code. Here, the absence of tab/space characters and a dynamic left-margin means that the text can wrap at the right-margin and still maintain the integrity of the indented view. Line numbers can be used as visual markers that indicate where code is word-wrapped and also the exact position of indentation: Screen Real-Estate: Flat Vs Indented Addressing the question of whether the NestView uses up valuable screen real-estate: Contour lines work well with a width the same as the code editor's character width. A NestView width of 12 character widths can therefore accommodate 12 levels of nesting before contour lines are truncated/compressed. If an indented view uses 3 character-widths for each nesting level then space is saved until nesting reaches 4 levels; after this, the flat view has a space-saving advantage that increases with each nesting level. Note: A minimum indentation of 4 character widths is often recommended for code, however XML often manages with less. Also, Virtual Formatting permits less indentation to be used because there's no risk of alignment issues. A comparison of the 2 views is shown below: Based on the above, it's probably fair to conclude that view style choice will be based on factors other than screen real-estate. The one exception is where screen space is at a premium, for example on a Netbook/Tablet or when multiple code windows are open.
In these cases, the resizable NestView would seem to be a clear winner. Use Cases Examples of real-world situations where NestView may be a useful option: Where screen real-estate is at a premium a. On devices such as tablets, notepads and smartphones b. When showing code on websites c. When multiple code windows need to be visible on the desktop simultaneously Where consistent whitespace indentation of text within code is a priority For reviewing deeply nested code. For example where sub-languages (e.g. Linq in C# or XPath in XSLT) might cause high levels of nesting. Accessibility Resizing and color options must be provided to aid those with visual impairments, and also to suit environmental conditions and personal preferences: Compatibility of edited code with other systems A solution incorporating a NestView option should ideally be capable of stripping leading tab and space characters (identified as only having a formatting role) from imported code. Then, once stripped, the code could be rendered neatly in both the left-justified and indented views without change. For many users relying on systems such as merging and diff tools that are not whitespace-aware this will be a major concern (if not a complete show-stopper). Other Works: Visualisation of Overlapping Markup Published research by Wendell Piez, dating from 2004, addresses the issue of the visualisation of overlapping markup, specifically LMNL. This includes SVG graphics with significant similarities to the NestView proposal; as such, they are acknowledged here. The visual differences are clear in the images (below); the key functional distinction is that NestView is intended only for well-nested XML or code, whereas Wendell Piez's graphics are designed to represent overlapped nesting. The graphics above were reproduced - with kind permission - from http://www.piez.org Sources: Towards Hermeneutic Markup Half-steps toward LMNL | I've attempted to answer my own question here, but this is incorporating the heatmap idea from @jimp and also the 'make it more XML-ish' idea from @Andrea: Hopefully, the colors in the heat map along with the angular lines help draw the eye between the start and end tags; removing the horizontal line separators improves the 'flow' from start to end. As the user selects an element, the matching part in the heat map can be highlighted in some way - perhaps with a glowing border (as shown). Edit Have decided to go with this, there will probably have to be user options for the colours. A 'production ready' screenshot: And for comparison...the alternate indented view: Edit Now, for the more heavily nested case - testing my drawing skills... | {
"source": [
"https://softwareengineering.stackexchange.com/questions/87077",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/27103/"
]
} |
87,217 | When looking at comparisons, it seems to me that there could be a 1:1 mapping between their feature sets. Yet, an often-cited statement is that "Mercurial is easier". What is the basis of this statement? (if any) | Case in point: Let's say that you want to change the username on all your previous commits. I've needed to do this several times for various reasons. Git Version git filter-branch --commit-filter '
if [ "$GIT_COMMITTER_NAME" = "<Old Name>" ];
then
GIT_COMMITTER_NAME="<New Name>";
GIT_AUTHOR_NAME="<New Name>";
GIT_COMMITTER_EMAIL="<New Email>";
GIT_AUTHOR_EMAIL="<New Email>";
git commit-tree "$@";
else
git commit-tree "$@";
fi' HEAD Mercurial Version: authors.convert.list file: <oldname>=<newname> Command line: hg convert --authors authors.convert.list SOURCE DEST Now, which one looks easier to use? Note: I spent 2 years working solely with Git, so this isn't a "I hate it, I didn't get it in 2 seconds" rant. For me, it's the usability. Git is very linux oriented with a linux way of doing things. That means command line, man pages, and figuring it out for yourself. It had a very poor GUI (note: I'm basing this off of msysGit from about a year ago), that seemed to just get in my way. I could barely use it The command line was worse. Being a Linux oriented program, on Windows it was very difficult to use. Instead of a native port they simply wrapped git with MinGW (Think cygwin), which made working with it much more difficult. MinGW isn't Windows Command Prompt, and just acts different. It's crazy that this is the only way to work with Git. Even in Linux it seemed the only way was to work with straight command line. Projects like RabbitVCS helped some, but weren't very powerful. The command line oriented approach and being a linux program meant that almost all the howto guides, help documentation, and forum/QA questions relied on running monstrous commands like above. The basic SCM commands (commit, pull, push) aren't as complex, but any more and complexity grows exponentially. I also hate the one place that lots of OSS git users seem to hang around: Github. When you first go to a github page, it slams you with everything you can possibly do. To me, a projects git page looks chaotic, scary, and overly powerful. Even the explanation of what the project is, is pushed down to the bottom. Github really hurts people who don't have a full website already setup. Its issue tracker is also terrible and confusing. Feature overload. Git users also seemed to be very cult like. Git users seem to always be the ones starting "holy wars" over which DVCS is better, which then forces Mercurial users to defend themselves. Sites like http://whygitisbetterthanx.com/ show arrogance and an almost "Use my software or die" mentality. Many times I've gone into various places of help only to be flamed for not knowing X, using X beforehand, using Windows, etc. It's crazy. Mercurial on the other hand seems to go towards the kinder approach. Their own home page seems much more friendly to new users than Git's . In a simple Google search the 5th result is TortoiseHg, a very nice GUI for Mercurial. Their entire approach seems to be simplicity first, power later. With Mercurial I don't have SSH nonsense (SSH is hell on Windows), I don't have stupidly complex commands, I don't have a cult user following, I don't have craziness. Mercurial just works. TortoiseHg provides an actually usable interface (although lately it seems to be growing) that provides actually useful features. Options are limited to what you need, removing clutter and options that are rarely used. It also provides many decent defaults Mercurial, being very friendly to new comers, was very easy to pick up. Even some of the more complex topics like the different branching model and history editing were very easy to follow. I picked up Mercurial quickly and painlessly. Mercurial also just works the first time with little setup. On ANY OS I can just install TortoiseHg and get all the features I want (mainly context menu commands) without having to hunt for different Guis. 
Also missing is setting up SSH (half of the guides out there say to use Putty, Plink, and Pageant while the other half says to use ssh-keygen). For new users, TortoiseHg takes minutes to set up while Git takes 30 minutes to an hour with lots of googling. Lastly you have the online repos. GitHub's equivalent is BitBucket, which has some of the issues I outlined above. However there's also Google Code. When I go to a Google Code project, I don't get feature overload, I get a nice clean interface. Google Code is more of an online repo/website combo, which really helps OSS projects that don't have an existing site set up. I would feel very comfortable using Google Code as my project's website for quite some time, only building a website when absolutely necessary. Its issue tracker is also powerful, fitting nicely in between GitHub's almost useless Issue Tracker and Bugzilla's monstrosity. Mercurial just works, first time, every time. Git gets in my way and only angers me the more I use it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/87217",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3530/"
]
} |
87,250 | I am a big fan of writing assert checks in C++ code as a way to catch cases during development that cannot possibly happen but do happen because of logic bugs in my program. This is a good practice in general. However, I've noticed that some functions I write (which are part of a complex class) have 5+ asserts which feels like it could potentially be a bad programming practice, in terms of readability and maintainability. I think it's still great, as each one requires me to think about pre- and post-conditions of functions and they really do help catch bugs. However, I just wanted to put this out there to ask if there is a better paradigms for catching logic errors in cases when a large number of checks is necessary. Emacs comment : Since Emacs is my IDE of choice, I have it slightly gray out the assert statements which helps reduce the feeling of clutter that they can provide. Here's what I add to my .emacs file: ; gray out the "assert(...)" wrapper
(add-hook 'c-mode-common-hook
(lambda () (font-lock-add-keywords nil
'(("\\<\\(assert\(.*\);\\)" 1 '(:foreground "#444444") t)))))
; gray out the stuff inside parenthesis with a slightly lighter color
(add-hook 'c-mode-common-hook
(lambda () (font-lock-add-keywords nil
'(("\\<assert\\(\(.*\);\\)" 1 '(:foreground "#666666") t))))) | I've seen hundreds of bugs that would have been solved faster if someone had written more asserts, and not a single one that would have been solved quicker by writing fewer . [C]ould [too many asserts] potentially be a bad programming practice, in terms of readability and maintainability[?] Readability could be a problem, perhaps - although it's been my experience that people who write good asserts also write readable code. And it never bothers me to see the beginning of a function start with a block of asserts to verify that the arguments aren't garbage - just put a blank line after it. Also in my experience, maintainability is always improved by asserts, just as it is by unit tests. Asserts provide a sanity check that code is being used the way it was intended to be used. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/87250",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/25589/"
]
} |
87,321 | I know in advance that people are going to see this question and think "free Red Bull." But I am actually looking for the best way to tie rewards for developers to the company's long-term goals. For example, assuming a team is working on the same software product, would it be best to reward each developer based on the condition of the final product? They are a team after all, and this will ensure that they are all working towards the common goal of getting the product out. However, this ignores the fact that some developers are stronger than others and some work harder than others. In your experience, what is the best way to incentivize a team of developers? **UPDATE I really appreciate the strong response I've received to this questions. I thought to ask if after watching the movie Inside Job , which is about the causes of the recent economic crisis. One of the major factors the movie cites is that there is a poor incentive system on Wall St. Investors are rewarded for making money in the short term, even if their actions can be disastrous down the road. I think this same concept applies well to developers. There is a short term gain in getting a product out as fast possible, but there can be major long term headaches if that product is buggy or if it doesn't port well to other environments. Ideally, any company in any industry should want an incentive system that ensures the long term stability of their products. | I'm afraid that I'm going to have to disagree with many of the answers to this question, as none of them have mentioned the difference between intrinsic and extrinsic motivation . From the Wikipedia page: Intrinsic motivation refers to motivation that is driven by an interest or enjoyment in the task itself, and exists within the individual rather than relying on any external pressure. Extrinsic motivation comes from outside of the individual. Common extrinsic motivations are rewards like money and grades, coercion and threat of punishment. Competition is in general extrinsic because it encourages the performer to win and beat others, not to enjoy the intrinsic rewards of the activity. A crowd cheering on the individual and trophies are also extrinsic incentives. According to research, intrinsic motivators are much more powerful than extrinsic motivators: At lower levels of Maslow's hierarchy of needs , such as physiological needs, money is a motivator, however it tends to have a motivating effect on staff that lasts only for a short period (in accordance with Herzberg's two-factor model of motivation). At higher levels of the hierarchy, praise, respect, recognition, empowerment and a sense of belonging are far more powerful motivators than money Now while there is little evidence of Maslow's hierarchy itself, it is a useful hook when describing intrinsic verses extinsic motivators. The surprising thing that comes out of the research though, is that providing extrinsic motivators can actually reduce or remove the intrinsic motivators: Social psychological research has indicated that extrinsic rewards can lead to overjustification and a subsequent reduction in intrinsic motivation. In one study demonstrating this effect, children who expected to be (and were) rewarded with a ribbon and a gold star for drawing pictures spent less time playing with the drawing materials in subsequent observations than children who were assigned to an unexpected reward condition and to children who received no extrinsic reward. 
In general it is far more effective to remove barriers intrinsic motivation than it is to try an increase extrinsic motivators. This was the essence of many elements of both DeMarco & Lister 's Peopleware and Fred Brooks ' The Mythical Man-Month , which should be considered essential reading for any manager of software engineers. For more information, I would highly recommend this animation of one of Daniel Pink 's talks on his book " Drive: The Surprising Truth About What Motivates Us ". I haven't read his book yet, but this brief talk gels with my own experience sufficiently well that's it's now at the top of my reading list.* So, in conclusion: Don't worry too much about which extrinsic rewards you can use to motivate your team members. Remove barriers to intrinsic motivation and your team will motivate themselves . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/87321",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7434/"
]
} |
87,330 | With a simple search in amazon one can see that the modern approach for parallel programming is to use your graphic card. However I am still a little bit skeptical about it. My last computer has an 8 core CPU, which I think is enough for all my basic parallel needs; if I need more I will probably use MPI through a network using my old machines. All in all, why and/or when should I use CUDA or another method which uses my graphic card instead of traditional methods like pthreads, java threads, boost threads or the new C++ 11 threads? What about using processes? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/87330",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/19767/"
]
} |
87,438 | I have been programming for a very long time and I have in depth knowledge of several technologies. Recently I applied for a web development job and in my resume I had listed all the skills - HTML, CSS, JavaScript, jQuery, AJAX, PHP, ASP, JSP, C/C++, ARM. Except for C/C++ and ARM I had shown the skill level for all technologies as expert. Many of my friends had applied for the same job and they did not have any web development experience. ALL of them got a call for an interview. However I got a rejection saying that we have received applications from very high level candidates and you have not been selected to go to the next level. This has seriously demotivated me. I do not understand why I have been rejected when I had all the required skills and all those who did not have any of the skills have been selected. One reason, I think, is that the employer might be wondering how one person can be an expert in all the technologies. Once in another interview I was told by the HR manager that it is unbelievable that you know ASP, JSP and PHP all in depth as we have different programmers for each of the technologies. Such incidents make me very unhappy as in spite of being highly capable for the position, I am rejected. Should I not list all my skills in the resume to avoid such situations? | The people who fine-tune their resumes to the job for which they are applying are the most successful at getting interviews. I've experienced this from both the applicant side and the reviewer side. If I'm hiring for a web developer position, I'm probably not going to be concerned about whether or not the applicant knows C++ or Objective C. It's also been my experience that applicants don't know things as well as they claim, so depending on the experience (in terms of job history) of the candidate, I take that information with a grain of salt. I think a lot of employers who have a tech background may be skeptical if an applicant says they are experts in a lot of different areas - even if it's true, there's likely to be skepticism from the resume reviewer. The other thing to consider is that a person reviewing resumes may have hundreds to sift through, and it's quite easy to over-filter resumes if it's not immediately evident that the applicant has the required skills. My advice: tweak each resume you send out to cover as many possible requirements of the job as you can, limit exposing technologies that are irrelevant to the job you're applying for, and if you don't have explicit experience with a particular requirement, be prepared to make the argument why your experiences translate well to that requirement. Good luck! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/87438",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/29157/"
]
} |
87,457 | I was watching http://www.joelonsoftware.com/items/2011/06/27.html and laughed at Jon Skeet's joke about 0.3 not being 0.3. I personally never had problems with floats/decimals/doubles but then I remember I learned 6502 very early and never needed floats in most of my programs. The only time I used them was for graphics and math where inaccurate numbers were ok and the output was for the screen and not to be stored (in a db, file) or dependent on. My question is, where are the places where you typically use floats/decimals/double? So I know to watch out for these gotchas. With money I use longs and store values by the cent, for speed of an object in a game I add ints and divide (or bitshift) the value to know if I need to move a pixel or not. (I made objects move in the 6502 days, we had no divide nor floats but had shifts). So I was mostly curious. | You use them when you're describing a continuous value rather than a discrete one. It's not any more complicated to describe than that. Just don't make the mistake of assuming any value with a decimal point is continuous. If it changes all at once in chunks, like adding a penny, it's discrete. A small sketch illustrating the difference follows this entry. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/87457",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
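As a concrete illustration of the continuous-versus-discrete point above (and of the 0.3 joke in the question), here is a small C# sketch. The cent-counting approach mirrors what the question describes; the specific values are chosen only for demonstration.

using System;

class FloatVsDecimalDemo
{
    static void Main()
    {
        // Continuous-style value in binary floating point:
        // 0.1 + 0.2 is not exactly 0.3.
        double d = 0.1 + 0.2;
        Console.WriteLine(d == 0.3);          // False
        Console.WriteLine(d.ToString("R"));   // 0.30000000000000004

        // Discrete money: count whole cents in an integer type...
        long cents = 10 + 20;                 // 30 cents, exact

        // ...or use decimal, which represents 0.1 and 0.2 exactly.
        decimal m = 0.1m + 0.2m;
        Console.WriteLine(m == 0.3m);         // True
        Console.WriteLine(cents);             // 30
    }
}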
87,546 | I'm trying to get out of the Corporate game and go indie. I've always prided myself on being a jack of all trades so I think it suits me. If you're a freelancer or independent, what's the best advice you could give me as I start down this road? | Get everything in writing upfront. Never do anything for free . Sets a bad precedent for you and your peers. It destroys the local market. If a customer misses a payment, even one, stop work until they get current. Be professional and un-emotional but be firm. They are already into you for 30 days of work or more, don't dig a deeper hole. You aren't a bank, you are lending them money interest free at this point. Bill customers that miss payments interest for the time the payment was late. Send as many invoices with LATE on them as you think you need to, don't be shy about the money. Get everything in writing upfront. If a potential customer won't agree to your terms, what makes you think they will be reliable and easy to work with on their terms? Be prepared to professionally walk away. Be willing to turn down work that won't be profitable. Or worse, will cost you money or time to make profitable. Never work on a break even project thinking you will make it up on the next one the customer gives you. You won't, you have set a precedent for them to expect to be able to low ball you. Get everything in writing upfront. Cheap customers are always cheap customers and will only get cheaper, more demanding and suck up all your time. Learn what a change request is, put this in your contract that they cost money and they push the schedule. Bill at least 25% more for change requests to make sure the client really needs them, just 1 or 2 change requests can sap all your profit off a single project. Learn to do Agile Methodologies, SCRUM in particular is a good way to manage customers, especially the ones that become difficult . Get everything in writing upfront. Never deliver anything sub-par, even if it is going to be late, crap on time is still crap. Crap gets you a worse reputation than delivering quality late. Your reputation is everything, it isn't what you know or do, it is what people say about you. Plan on networking at every user group meeting and the like to get the good paying jobs. Get everything in writing upfront. Get paid for every hour you work, don't be shy about the money, watch this video . Professional relationships are not you bending over backwards to please irrational customers with unrealistic expectations, they are about respect, your customer should see you as an expert and a professional, not a warm body filling a chair costing them money, don't take those jobs there is no profit in them. Breaking your own rules, even once sets a precedent to the customer that the other rules can be bent or broken, this leads to misery and loss of profits. Get everything in writing upfront. Fixed price jobs aren't the fixed price you will make, they are usually the amount that you will lose X 2. Spend more time learning about marketing and sales techniques and effective communication patterns than technology. As a consultant you should already be an expert in what you do. The other things you need to be an expert in now as well. Networking is important so you can delegate some things you might not be an expert in to a sub-contractor friend or associate, or at least lean on them for advice and education. You won't know everything but will be expected to.
Charge enough for your time; your customers are not doing you a favor by having you work for them, you are doing them a favor by selling them your time and expertise. Low balling never helps you or your peers or the market. No matter how good the relationship with the customer is, get everything in writing up front and never break this rule or do anything by word of mouth. Never work for friends, they won't be your friends anymore, especially not for free. Never work for family either; see above. Never do anything for free . | Get everything in writing upfront. Never do anything for free . Sets a bad precedent for you and your peers. It destroys the local market. If a customer misses a payment, even one, stop work until they get current. Be professional and un-emotional but be firm. They are already into you for 30 days of work or more, don't dig a deeper hole. You aren't a bank, you are lending them money interest free at this point. Bill customers that miss payments interest for the time the payment was late. Send as many invoices with LATE on them as you think you need to, don't be shy about the money. Get everything in writing upfront. If a potential customer won't agree to your terms, what makes you think they will be reliable and easy to work with on their terms? Be prepared to professionally walk away. Be willing to turn down work that won't be profitable. Or worse, will cost you money or time to make profitable. Never work on a break even project thinking you will make it up on the next one the customer gives you. You won't, you have set a precedent for them to expect to be able to low ball you. Get everything in writing upfront. Cheap customers are always cheap customers and will only get cheaper, more demanding and suck up all your time. Learn what a change request is, put this in your contract that they cost money and they push the schedule. Bill at least 25% more for change requests to make sure the client really needs them, just 1 or 2 change requests can sap all your profit off a single project. Learn to do Agile Methodologies, SCRUM in particular is a good way to manage customers, especially the ones that become difficult . Get everything in writing upfront. Never deliver anything sub-par, even if it is going to be late, crap on time is still crap. Crap gets you a worse reputation than delivering quality late. Your reputation is everything, it isn't what you know or do, it is what people say about you. Plan on networking at every user group meeting and the like to get the good paying jobs. Get everything in writing upfront. Get paid for every hour you work, don't be shy about the money, watch this video . Professional relationships are not you bending over backwards to please irrational customers with unrealistic expectations, they are about respect, your customer should see you as an expert and a professional, not a warm body filling a chair costing them money, don't take those jobs there is no profit in them. Breaking your own rules, even once sets a precedent to the customer that the other rules can be bent or broken, this leads to misery and loss of profits. Get everything in writing upfront. Fixed price jobs aren't the fixed price you will make, they are usually the amount that you will lose X 2. Spend more time learning about marketing and sales techniques and effective communication patterns than technology. As a consultant you should already be an expert in what you do. The other things you need to be an expert in now as well. Networking is important so you can delegate some things you might not be an expert in to a sub-contractor friend or associate, or at least lean on them for advice and education. You won't know everything but will be expected to.
"source": [
"https://softwareengineering.stackexchange.com/questions/87546",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5674/"
]
} |
87,611 | GMail has this feature where it will warn you if you try to send an email that it thinks might have an attachment. Because GMail detected the string see the attached in the email, but no actual attachment, it warns me with an OK / Cancel dialog when I click the Send button. We have a related problem on Stack Overflow. That is, when a user enters a post like this one : my problem is I need to change the database but I don't won't to create
a new connection. example:
DataSet dsMasterInfo = new DataSet();
Database db = DatabaseFactory.CreateDatabase("ConnectionString");
DbCommand dbCommand = db.GetStoredProcCommand("uspGetMasterName"); This user did not format their code as code! That is, they didn't indent by 4 spaces per Markdown, or use the code button (or the keyboard shortcut ctrl + k ) which does that for them. Thus, our system is accepting a lot of edits where people have to go in and manually format code for people that are somehow unable to figure this out. This leads to a lot of bellyaching . We've improved the editor help several times, but short of driving over to the user's house and pressing the correct buttons on their keyboard for them, we're at a loss to see what to do next. That's why we are considering a Google GMail style warning: Did you mean to post code? You wrote stuff that we think looks like code, but you didn't format it as code by indenting 4 spaces, using the toolbar code button or the ctrl + k code formatting command. However, presenting this warning requires us to detect the presence of what we think is unformatted code in a question . What is a simple, semi-reliable way of doing this? Per Markdown , code is always indented by 4 spaces or within backticks, so anything correctly formatted can be discarded from the check immediately. This is only a warning and it will only apply to low-reputation users asking their first questions (or providing their first answers), so some false positives are OK, so long as they are about 5% or less. Questions on Stack Overflow can be in any language, though we can realistically limit our check to, say, the "big ten" languages. Per the tags page that would be C#, Java, PHP, JavaScript, Objective-C, C, C++, Python, Ruby. Use the Stack Overflow creative commons data dump to audit your potential solution (or just pick a few questions in the top 10 tags on Stack Overflow) and see how it does. Pseudocode is fine, but we use c# if you want to be extra friendly. The simpler the better (so long as it works). KISS! If your solution requires us to attempt to compile posts in 10 different compilers, or an army of people to manually train a bayesian inference engine, that's ... not exactly what we had in mind. | A proper solution would probably be some learned/statistical model, but here are some fun ideas: Semi-colons at the end of a line . This alone would catch a whole bunch of languages. Parentheses directly following text with no space to separate it: myFunc() A dot or arrow between two words: foo.bar = ptr->val Presence of curly braces, brackets: while (true) { bar[i]; } Presence of "comment" syntax (/*, //, etc): /* multi-line comment */ Uncommon characters/operators: +, *, &, &&, |, ||, <, >, ==, !=, >=, <=, >>, <<, ::, __ Run your syntax highlighter on the text. If it ends up highlighting some high percentage of it, it's probably code. camelCase text in the post. nested parentheses, braces, and/or brackets. One could keep track of the number of times each of these appears, and these could be used as features in a machine-learning algorithm like perceptron , the way SpamAssassin does. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/87611",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/37/"
]
} |
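Following the heuristic list in the answer above (question 87,611): a rough C# sketch of a line-by-line scorer. Each clue that matches a non-indented line adds one to a score; the threshold is invented and would need tuning against the data dump, and lines already indented by four spaces are skipped, per the question.

using System;
using System.Text.RegularExpressions;

static class CodeSniffer
{
    // Patterns drawn from the heuristics above; the set is illustrative, not exhaustive.
    static readonly Regex[] Clues =
    {
        new Regex(@";\s*$"),                          // semicolon at end of line
        new Regex(@"\w\("),                           // name( with no space before the paren
        new Regex(@"\w+(\.|->)\w+\s*="),              // foo.bar = ... or ptr->val = ...
        new Regex(@"[{}]"),                           // curly braces
        new Regex(@"//|/\*|\*/"),                     // comment syntax
        new Regex(@"==|!=|<=|>=|&&|\|\||::|<<|>>"),   // operator soup
        new Regex(@"\b[a-z]+[A-Z]\w*\b")              // camelCase word
    };

    public static bool LooksLikeUnformattedCode(string post, int threshold = 10)
    {
        int score = 0;
        foreach (string line in post.Split('\n'))
        {
            if (line.StartsWith("    "))              // already formatted as code
                continue;
            foreach (Regex clue in Clues)
                if (clue.IsMatch(line))
                    score++;
        }
        return score >= threshold;                    // threshold is a guess; tune on real posts
    }
}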
87,696 | With plain Google as well as Google Code search tools it is easy to find how to program using some resource or solve certain problems (such as create a Java class, or an FTP block in Perl, etc.). So developers are tempted to just purely copy & paste the code (in a way re-use). Is this an incompetency? I have done this myself, though I think I am a better programmer than many others I have seen. Who has the time to RTFM ? In this age of information abundance, I do not think that copy & paste programming is bad. Isn't that what sites like Stack Overflow do anyway? People ask - OK, here is my problem - how to solve it? Now someone will post complete code and the person who asked the question would simply copy & paste the most voted answer. No matter how small the problem is. I am working with a bunch of young coders who heavily rely on the Internet to get their job done. I see convenience in copy/pasting and modifying code to get the job done. For example, you may be quite good with algorithms and such, but you may not know how to use a BufferedReader in Java - would you read complete the Javadoc for BufferedReader or look up some example of using it somewhere? What are the real dangers of copy & paste coding that can impact their competency? | Is it bad? Maybe... for learning small examples, for testing out a concept, it's not that bad. BUT ... you have to understand what you are copy/pasting. Otherwise, how do you know that the code is really doing? Sure, it gets the one result you want on the screen but maybe it has horrible performance, maybe it has security holes, maybe it causes memory leaks, maybe it summons Cthulhu, maybe it will cause customer credit card numbers to be leaked, maybe it contains a backdoor... And most likely, maybe it requires some tweaking to meet business requirements and if you don't understand the code you will not be able to properly tweak it (or better yet: write a more "correct" version)... As for "RTFM", yes, I do when it's available. I would read the BufferedReader javadocs, and if I can't get enough information to get my code working, I would then hit Google and search for "Java BufferedReader example". I would not expect the code I find to work immediately with my code, but I would expect to find a simple working stand-alone sample that I can use as an example to correct my own code. And when it's your own code that you are copying/pasting, that's usually a sign to start refactoring. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/87696",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/13058/"
]
} |
87,757 | My boss came to me today to ask me if we could implement a certain feature in 1.5 days. I had a look at it and told him that 2 to 3 days would be more realistic. He then asked me: "And what if we do it quick and dirty?" I asked him to explain what he meant with "quick and dirty". It turns out, he wants us to write code as quickly as humanly possible by (for example) copying bits and pieces from other projects, putting all code in the code-behind of the WebForms pages, stop caring about DRY and SOLID and assuming that the code and functionalities will never ever have to be modified or changed. What's even worse, he doesn't want us do it for just this one feature, but for all the code we write. We can make more profit when we do things quick and dirty. Clients don't want to pay for you taking into account that something might change in the future. The profits for us are in delivering code as quick as possible. As long as the application does what it needs to do, the quality of the code doesn't matter. They never see the code. I have tried to convince him that this is a bad way to think as the manager of a software company, but he just wouldn't listen to my arguments: Developer motivation: I explained that it is hard to keep developers motivated when they are constantly under pressure of unrealistic deadlines and budget to write sloppy code very quickly. Readability: When a project gets passed on to another developer, cleaner and better structured code will be easier to read and understand. Maintainability: It is easier, safer and less time consuming to adapt, extend or change well written code. Testability: It is usually easier to test and find bugs in clean code. My co-workers are as baffled as I am by my boss' standpoint, but we can't seem to get to him. He keeps on saying that by making things more quickly, we can sell more projects, ask a lower price for them while still making a bigger profit. And in the end these projects pay the developer's salaries. What more can I say to make him see he is wrong? I want to buy him copies of Peopleware and The Mythical Man-Month, but I have a feeling they won't change his mind either. A lot of you will probably say something like "Run! Get out of there now !" or "I'd quit!", but that's not really an option since .NET web development jobs are rather rare in the region where I live... Update Wow, I hadn't expected to get so many answers. Thank you all for your contributions and your opinions! As quite a few of the answers and comments point out, the type of company and the type of projects play a big role in this topic. I have explained a few things here there in comments on some answers, but it's probably better to add it here as well. The company I work for is rather small. We have 4 developers, 1 designer, 1 boss and 1 jack-of-all-non-technical-trades (the boss' wife). The projects we do can be divided into two categories: Smallish websites built with our own CMS or e-commerce framework (65%) Middle-sized web applications (35%) So while a lot of our projects are rather small, they are built on top of the same system. This system is about 4 years old and the code base is below par to say the least. It always is a dread to add new functionalities or modify standard functionalities for specific customers. One of the goals set by the boss is to start moving our focus to product development. So that means we'll be developing bigger applications that will serve as the base for other projects or are something SaaS-like. 
I totally agree that doing things quick and dirty can be the best solution for certain projects. But when you are extending an existing CMS that will be used by all the sites you will develop in the next few years, or building a SaaS product from scratch, there are better approaches, I think. | Sorry to say this, but you aren't going to want to hear it: he is not completely wrong . If you are doing work for hire for external companies as a consultant,
and they are willing to accept the most slapped together thing you can
do and don't complain, and are willing to come back over and over
again for you to do more work, your boss is 100% correct when it comes to maximizing profits for your company. Then there is YAGNI : if the projects are one off projects that won't cost you anything to maintain or re-write and all that time in maintenance and re-writing is billable, doing it right the first time is actually costing you even more money. Then, your boss is 100% correct again. If your clients are not complaining about costs and lack of quality, then quality is not at issue to make your customers happy. Sounds like the customers are happy with crap so selling them more crap isn't a hard business decision. Anything you do above and beyond what the customer is happy with is wasted effort on every ones parts: they won't appreciate it. Your boss is 100% correct again. Remember quality is in the eye of the beholder. If it meets the customers' needs they don't care about the duct tape and coat hangers that are making it work. What you value greatly has little or no direct value to your customers since they don't care how the software does what it does, just that it does what they want mostly. Every piece of software eventually degenerates from entropy to a Big Ball of Mud . GUI applications, especially ones for Windows written in some flavor of VB entropy faster because of the culture of the tool set. If it makes you feel any better, you are just starting off a little closer to maximum entropy than other people. Personally I would never set a precedent with such low quality deliverables, but then again I would not go for the race to the bottom level of customers your company is apparently trying to cater to. Your management has decided these are the customers they want to have and there is no need to try to up sell the customer on more expensive higher quality software if they are fine with the way things are. You aren't going to get management to change, only your customers will do that. You can get better customers, or get a better job. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/87757",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7289/"
]
} |
87,912 | To shed light on the situation: I am currently one of two programmers working in a small startup software company. Part of my job requires me to learn a web development framework that I am not currently familiar with. I get paid by the hour. So the question is: Is it wholly ethical to spend multiple hours of the day reading through documentation and tutorials and be paid for this time when I am not actively developing our product? Or should the bulk of this learning be done at home, or otherwise off hours, to allow for more full-on development of our application during the work day? | If your employer wants you to spend your days learning the framework then great, it's both ethical and legal. I've done this in the past, both as a consultant (my consulting company paid) and as an employee. They do it because it makes you more useful. Win-win, assuming what you're learning is useful. If you were hired on the basis that you know it or will pick it up really quickly (it's a dialect of something you already know, for example), then it's tricky. I'd be inclined to ask the employer. If your employer is asking you to spend a lot of your own time learning something you were told they'd pay you to learn, then it's a question of how much you need the job and how useful the framework knowledge is. I don't think it's ethical for the employer to demand this of you, but you might have to do it if this is your only available work. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/87912",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/23415/"
]
} |
87,972 | I've heard many times about aspect-oriented programming, mostly that it is the "next generation" technology in programming and is going to 'kill' OOP. Is that right? Is OOP going to die, and what could be the reason for that? | Any time someone tells you that one software technology will kill another one or dominate the whole market/use/audience, remember this: a sane (dynamic but stable) ecosystem is made of a variety of widely different species. That means that any new hyped technology will go through the hype curve and in the end will find its specific purpose through time and experience with it. That also means that such an extreme concept as aspect-oriented programming is useful when it's needed, meaning not always and not very often, because of the implied costs. But it already has its place, like OO programming, like generic programming, like functional programming, like procedural programming, etc. Did you notice that the languages that are the most used (and controversially popular) and widely spread in real life are "not pure"? That's because allowing several paradigms makes them more adaptable to changing contexts over time, and they fill more usage niches. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/87972",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/22586/"
]
} |
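To make the aspect-oriented idea above concrete, here is a minimal sketch in Python (chosen for brevity; the names audited and transfer are invented for illustration, not taken from any real project). A cross-cutting concern such as logging is wrapped around business code without the business code mentioning it, which is the core idea AOP generalizes:
import functools
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("audit")

def audited(func):
    """A minimal 'aspect': weaves a logging concern around a function."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        log.info("calling %s args=%r kwargs=%r", func.__name__, args, kwargs)
        result = func(*args, **kwargs)
        log.info("%s returned %r", func.__name__, result)
        return result
    return wrapper

@audited
def transfer(account_from, account_to, amount):
    # Core business logic stays free of the logging concern.
    return f"moved {amount} from {account_from} to {account_to}"

if __name__ == "__main__":
    transfer("A-100", "B-200", 42)
Dedicated AOP tools (AspectJ and the like) generalize this by letting you target whole sets of join points declaratively, but the decorator shows the underlying shape of the idea.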
88,116 | I've been contacted to do some work remote-controlling LED displays over TCP/IP, but my experience and preparation are mostly in high-level programming languages. I said that to the person who contacted me about the work and he told me that: "if you call yourself a programmer you should know all these things". Should a programmer really know the details of low-level programming? Or can I treat it as a black-box concept, as theoretical knowledge, without necessarily implementing low-level solutions myself, bearing in mind that low-level programming is not my expertise? | Your contact does not know what they're talking about. There are many languages, methodologies, technologies, and so on, and a single person cannot possibly know all of the necessary details of every one of them very well. What you do have to know as a programmer is how to learn what you need to get the job done, and you need a problem-solving approach that you can apply to arrive at a solution, no matter what programming language you have to use. Admitting what you don't know is okay, but you can also prove that you are able to learn enough to achieve the desired result, no matter what you are faced with. Good programmers are simply good problem solvers who can implement their solutions in various programming languages. I would not be working for someone that has the attitude your contact does. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/88116",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/29524/"
]
} |
88,159 | I am a moderately experienced developer with approximately 5 years' experience in PHP and somewhat less in Java and C#, and I am trying to learn some Python nowadays. Since the start of my career as a programmer I have been told every now and then by fellow programmers that programming is suitable for only the first few years of a career (most of them take it as 5 years) and that one must change direction after that. The reasons they present include the headaches and pressures associated with programming. They also say that programmers are less social and don't usually like to give time to their families, etc., and especially "Oh come on, you can not do programming your entire life!" I am somewhat confused here and need to ask others about it. If I leave programming then what do I do?! I guess teaching may be a good option in this case, but it would perhaps require first earning a PhD. It may also be noteworthy that in my country (Pakistan) the life of a programmer is not very good, in that normally they must put in 2-3 extra hours at the office to accomplish urgent programming tasks. I have a sense that the situation is somewhat similar in other countries and regions as well. Do you think it is fair advice to change careers from programming to something else after spending 5 years in this field? UPDATE Oh wow... I never knew people could have 40+ years of experience in this field. I am both excited and amazed to see that people have been doing it since 1971... That means 15 years before my birth! It is nice to be able to talk to such experienced people; we don't get such a chance here in Pakistan. Thanks again for all the help and sharing. It has been a nice experience getting your thoughts on this. | I don't think this is a question which can be given a blanket answer that is always correct, except perhaps for the age-old "It depends." The simplest advice is: if programming is what you love to do most, don't stop unless that changes. There are many other factors to be considered, such as the job market, promotion opportunities, location, and of course salary, but the single most important thing with any career decision is the question "Will this make me happy?" | {
"source": [
"https://softwareengineering.stackexchange.com/questions/88159",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/26831/"
]
} |
88,207 | I am tired of hearing people recommend that you should use only one thread per processor, while many programs use up to 100 per process!
Take, for example, some common programs: the VB.NET IDE uses about 25 threads when not debugging,
System uses about 100,
Chrome uses about 19,
and Avira uses about 50 or more. Any time I post a thread-related question, I am reminded almost every time that I should not use more than one thread per processor, and yet all the programs I mention above are running on my system with a single processor. | "you should use only one thread per
processor" Possibly in HPC, where you want maximum efficiency - but otherwise that is the stupidest thing I have heard today! You should use the number of threads that is appropriate for the design of the program and still gives acceptable performance. For a web server it might be reasonable to fire a thread for each incoming connection (although there are better ways for very heavily loaded servers). For an IDE, each tool running in its own thread isn't unreasonable. I suspect many of the threads reported for the .NET IDE are things like logging and I/O tasks being started in their own threads so they can continue unblocked. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/88207",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/23078/"
]
} |
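A small sketch of the thread-per-connection idea mentioned in the answer above, in Python and with invented names; it is illustrative only, and for heavily loaded servers a thread pool or asynchronous I/O is usually preferable, as the answer notes. The point is that the thread count follows the design (one per connection), not the core count:
import socket
import threading

def handle(conn, addr):
    # Each accepted connection is served by its own thread.
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:
                break
            conn.sendall(data)   # simple echo

def serve(host="127.0.0.1", port=9000):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while True:
            conn, addr = srv.accept()
            threading.Thread(target=handle, args=(conn, addr), daemon=True).start()

if __name__ == "__main__":
    serve()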
88,329 | So I've been working at this job for a couple of months. I'm a little frustrated because I do my best work from 2 to 7. In previous jobs, I've come in at 9:30-10:00 and leave at 7. Some companies have been okay with this, others have not. But my current company insists on my being there at 8:30. Any deviation from this is a big deal. Is this typical? I have colleagues who are more 9:30 to 6:30, 10:00-7:00 guys...but maybe that is just startup culture? I don't see why, given that I don't meet clients, etc. what the advantage to having things be so rigid could be. I also don't see why if there is 15 to 20 minute variation sometimes in coming in, why people don't just assume that I will adjust when I leave... Are these unreasonable expectations as a developer or am I missing something? | But my current company insists on my being there at 8:30. Any deviation from this is a big deal. Is this typical? Yes it is typical. And companies like that tend to have very high turnover with developers. I was chatting with one of the project managers I used to work with (he's now a VP with some other company) and he was describing the policy at the company he was working at (at that time, one of the big satellite tv providers): starting time was 0830. The second time you are late (within a certain period), the door doesn't open when you swipe your access card, it instead calls your boss who has to come let you in. The third time (in that certain period), it contacts HR who fires you. He was commenting on the 200% turnover they had, and chuckling at the clueless other managers who created this policy. He also mentioned that he gave out his cellphone number to everyone under him, so that if they were late, he could get around the system to get them to work. Some managers are process oriented, and others are results oriented. You will quickly learn how to tell them apart. If you're smart, you'll figure out a way to ask in the interview some questions to determine one from the other without killing your interview. In a results-oriented company, what you get done is more important than how you look or what your hours are. These companies/bosses have the least impedance mismatch for developers. In those companies, when someone tries to say "waaah, q303 comes in late", a results-oriented boss will say "q303 gets his products shipped on time and under budget, what have you done lately?" Stars and heroes are very common in results-oriented companies. In a process-oriented company, how you get things done is more important. For a process-oriented boss, what time you arrive, what time you leave, and what cover sheet is stapled to your TPS report is extremely important. There is a huge impedance mismatch between the typical developer and this sort of manager. There are no favorites, nor stars, in a process-oriented company, and this is the sort who will deliberately fire anyone found to be indispensable. The perfect example of a process-oriented company is a fast food franchise - the goal is for every burger to be the same at every store in the country. If you make a better burger, you'll lose your franchise with them. Modern business schools teach managers that they do not need to understand a business (nor what their employees actually do) in order to be a manager. These folks will want you warming that seat at the appropriate time because that is something that they can measure - they don't know what you do, nor do they care to, scientific management says they don't. 
As you gather more experience in the working world, you'll find out that what is important to your boss is what you give them. You could cure cancer, balance the federal budget while juggling running chainsaws, but that doesn't matter because you come in late. They don't see you when you leave at 2am, because they leave "on time" (whatever that means). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/88329",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5057/"
]
} |
88,405 | From Wikipedia: Extensible Markup Language (XML) is a set of rules for encoding documents in machine-readable form. It is defined in the XML 1.0 Specification[4] produced by the W3C, and several other related specifications, all gratis open standards.[5] What are the historical reasons for calling the shorthand XML rather than the more natural EML? | I took a very pleasant tour through W3C, Google, and Wikipedia and finally found the answer: an annotated XML spec where we find an excerpt of an email from the inventor of the name, James Clark, an email from chairman Jon Bosak who suggested using the letter X, and some other ideas for names, plus the final votes:
Votes | Acronym | Full Name
------+---------+--------------------
5 | XML | Extensible Markup Language
4 | MAGMA | Minimal Architecture for Generalized Markup Applications
3 | SLIM | Structured Language for Internet Markup
1 | MGML | Minimal Generalized Markup Language
This is Jon Bosak's reply to James Clark's suggestion to name it "Extensible Markup Language", which gave birth to the acronym: In my opinion, the U-combinations won't fly, but if we allow "X" to stand for "extensible", then I could live with (and even come to love) XML as an acronym for "extensible markup language", and I hereby now throw it into the list of current proposals. (Emphasis mine) Some bonus - from some old reports of the XML Special Interest Group that I found while looking for some original quote that could answer the question: M.15 Should the spec refer to XML as "The Extensible Markup Language" or as "Extensible Markup Language" without a definite article (e.g. in the first sentence)? The WG elected to give no guidance to the editors on this issue (in the full expectation that the result would depend on which editor touched the file last). Rationale: after several minutes' discussion and increasing hilarity, no consensus had been reached, but the end of the allotted time for the conference call had. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/88405",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4642/"
]
} |
88,428 | I always wonder this, and perhaps I need a good history lesson on programming languages.
But since most compilers nowadays are made in C, how were the very first compilers made (i.e., before C), or were all the languages just interpreted? With that being said, I still don't understand how even the first assembly language was done. I understand what assembly language is, but I don't see how they got the VERY first assembly language working (like, how did they get the first commands (like mov R21) or whatever mapped to the binary equivalent)? | Ha, I've done this. Many CPUs have simple, fixed-size instructions that are just a couple of bytes long. For a simple CPU like a Motorola 6800, for example, you could fit all of its instructions on a single sheet of paper. Each instruction would have a two-byte opcode associated with it, and arguments. You could hand-assemble a program by looking up each instruction's opcode. You'd then write your program on paper, annotating each instruction with its corresponding opcode. Once you had written out your program, you could burn each opcode in sequence to an EPROM which would then store your program. Wire the EPROM up to the CPU with just the right instructions at the right addresses, and you have a simple working program. And to answer your next question, yes. It was painful (we did this in high school). But I have to say that wiring up every chip in an 8-bit computer and writing a program manually gave me a depth of understanding of computer architecture which I could probably not have achieved any other way. More advanced chips (like x86) are far more difficult to hand-code, because they often have variable-length instructions. VLIW/EPIC processors like the Itanium are close to impossible to hand-code efficiently because they deal in packets of instructions which are optimized and assembled by advanced compilers. For new architectures, programs are almost always written and assembled on another computer first, then loaded into the new architecture. In fact, firms like Intel who actually build CPUs can run actual programs on architectures which don't exist yet by running them on simulators. But I digress... As for compilers, at their very simplest, they can be little more than "cut and paste" programs. You could write a very simple, non-optimizing "high level language" that just clusters together simple assembly language instructions without a whole lot of effort. If you want a history of compilers and programming languages, I suggest you GOTO a history of FORTRAN. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/88428",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
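A toy sketch of the hand-assembly process described above, written in Python for brevity. The opcode table is entirely made up (these are not real Motorola 6800 opcodes); it only illustrates the look-up-and-translate step a person would do on paper before burning the bytes to an EPROM:
# Hypothetical opcode table for a made-up mini CPU, used only to show the
# mnemonic-to-byte translation an early assembler (or a human) performs.
OPCODES = {
    "LDA": 0x01,   # load accumulator, one operand byte
    "ADD": 0x02,   # add to accumulator, one operand byte
    "STA": 0x03,   # store accumulator, one operand byte
    "HLT": 0x0F,   # halt, no operand
}

def assemble(lines):
    """Translate mnemonic lines like 'LDA 10' into a flat sequence of bytes."""
    program = []
    for line in lines:
        parts = line.split()
        mnemonic, operands = parts[0], parts[1:]
        program.append(OPCODES[mnemonic])
        program.extend(int(op) for op in operands)
    return bytes(program)

source = ["LDA 10", "ADD 32", "STA 255", "HLT"]
print(assemble(source).hex())   # -> '010a022003ff0f'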
88,500 | I would like to switch from a 5-day week to a 4-day week, but maintain a 40-hour working week. Would the 10-hour days affect your ability to be productive? I hate our public transit system, so if I could reduce my commuting by 20% I would be happy. If other developers who work 10-hour shifts could be clear about their experiences with it, that would help me. I think my boss is flexible enough that he would be cool with it. | The literature on the subject points to the harm that long days (e.g., death marches) do. It is: impossible for humans to work productively for extended periods of time [1], unrealistic to expect people to work more than 2-6 hours in an 8-hour day [2], and detrimental to overall quality to force people to work longer hours [3]. [1] Nöteberg, Staffan. "Pomodoro Technique Illustrated". 2009. Pragmatic Programmers. pp. 31-33. [2] Brooks, Frederick. "The Mythical Man-Month". 1995. Addison-Wesley. pp. 87-94. [3] DeMarco, Tom and Lister, Timothy. "Peopleware: Productive Projects and Teams". 1999. Dorset House. Chapters 3-4. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/88500",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/29832/"
]
} |
88,532 | I'm learning C++ and I'm using g++ on Linux for practice. I want to know whether people working as programmers use the g++ -pedantic flag, and also about its importance in the real world. What about other compilers, do they also allow this? Has this become a de facto standard? I'm interested because I'm reading C++ Primer, where the author points out that it's illegal to use a non-const expression as a dimension in an array definition, yet g++ allows it by default. And there might be other things I'm unaware of. | Yes, absolutely do this. In fact, you need to study the manual page and turn on more warnings than -pedantic and -Wall will do. No, there's no standard. MSVC uses /W4, for example. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/88532",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/28458/"
]
} |
88,556 | I used to work at a really small outsourcing company (4 programmers and the boss); when the stress and the frequent long shifts made the situation unbearable, I made the switch to a better-paid job with a more relaxed schedule that allows me some more free time. The problem, however, is that for the most part everything is coded in Classic ASP that interfaces with a custom-made C++ queueing system that stores everything in AS400 systems. My boss used to be one of the developers that made the initial efforts towards this, and naturally won't ever approve a switch to other languages/technologies, despite the increasing difficulty of meeting today's business needs with yesterday's tools. I'm pretty much stuck coding in Classic ASP for the foreseeable future, and I'm struggling to find ways to make it at least interesting, as I used to work with .NET and Java previously, and I feel like I'm going backwards... Any advice? | As others have pointed out, you should probably either try to change your boss' mind or find employment where you don't have to put up with such a backwards mentality. However, in the meantime, you could make your job a bit more interesting by trying to move whatever functionality you can client-side and use async calls to trigger stuff to happen on the server. Think of this as an HTML/JavaScript front end with web services (implemented in Classic ASP) on the back end. Developing a RESTful API could be an interesting challenge; there are some tools, like JSON parsers for Classic ASP, to move data back and forth in a more standard manner, and client-side templating would let you format the data you get from your web services for nicer presentation. LinkedIn did something similar to unify different back-end technologies. Once you have a RESTful API, you could try to write some managed web services to emulate the functionality of the existing Classic ASP stuff. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/88556",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/29812/"
]
} |
88,645 | Looking at most (if not all) dynamic languages (e.g. Python, PHP, Perl and Ruby), they are all interpreted. Correct me if I'm wrong. Is there any example of a dynamic language that goes through a compilation phase? Is "dynamic language" identical to "interpreted language"? | Looking at most (if not all) dynamic languages [i.e. Python, PHP, Perl and Ruby], they are all interpreted. Not true. You can compile Python source. That's one existential proof. There are interpreters for statically-typed languages, and compilers for dynamically-typed languages. The two concepts are orthogonal. Side note: In general, a language is just that: a language, with a set of syntactic constructs to express semantics. If you write Python on a whiteboard, it's still called Python! It's the implementation that can be an interpreter or a compiler. Being statically-typed or dynamically-typed (or a kind of hybrid of both) is a property of the language, while executing a program by interpretation or compilation is a property of the implementation. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/88645",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9298/"
]
} |
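A small demonstration of the "you can compile Python source" point from the answer above; this is standard CPython behaviour (source is compiled to bytecode before the virtual machine runs it), and the square function is just an invented example:
# CPython compiles source to a code object before execution; the compile step
# exists even though the language is dynamically typed.
source = "def square(x):\n    return x * x\n"

code_obj = compile(source, filename="<demo>", mode="exec")  # compilation only, nothing runs yet
namespace = {}
exec(code_obj, namespace)            # execute the compiled bytecode

print(namespace["square"](7))        # 49
print(namespace["square"].__code__.co_code[:8])  # a peek at the raw bytecode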
88,685 | As far as I know and have understood in my experience with Qt, it's a very good and easy to learn library. It has a very well designed API and is cross-platform, and these are just two of the many features that make it attractive. I'm interested to know why more programmers don't use Qt. Is there a deficiency which speaks against it? Which feature makes other libraries better than Qt? Is the issue related to licensing? | I don't really intend this to be a bashing answer, but these are the reasons I do not personally use Qt. There are plenty of good things to say about it -- namely that the API works most of the time, and that it does seamlessly bridge platforms. But I do not use Qt, because: In some cases, it just doesn't look like native programs look. Designing a single UI for all platforms inherently is not going to look right when moved from machine to machine, for various visual styling reasons. For example, on Mac machines, split bars are usually relatively thick, and buttons are small and rounded with icons. On Windows machines, split bars are typically narrow, and buttons are more textual, with more square designs. Just because you can write one UI for every platform does not mean that you should for most applications. Qt is not just a link-able set of C++ libraries. The build system being used requires the translation of certain files into extra source files, which makes the build process much more complicated when compared with most other libraries. As a result of (2), C++ IDEs and tools can flag Qt expressions as errors, because they do not understand Qt's specifics. This almost forces use of QtCreator or a textual only editor like vim . Qt is a large amount of source, which must be present and preinstalled on any machine you use before compiling. This can make setting up a build environment much more tedious. Parts are mostly licensed under the LGPL, which makes it difficult to use single-binary-deployment when one needs to release under a more restrictive or less restrictive license. It produces extremely large compiled binaries when compared with similarly written "plain old native applications" (excepting of course applications written for KDE). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/88685",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/22586/"
]
} |
88,707 | I've been having a discussion about licensing and open source software. Basically - the other guy is saying that licensing is easy, if you're going to build a product you can use an (any) open source project and make money by selling that code. My issue is that say I create a website or app with a project that uses a GPL license the restrictions aren't so straight forward - correct me if i'm wrong on each of these scenarios: 1 - i create an iPhone app using GPL code and put that app into the appstore - the code must be freely available to people buying that app. 2 - i create a website that my client hosts - they must have access to the code. 3 - i create a website as SaaS that my client "leases" but does not own - though it is hosted on their infrastructure - they must have access to that code Am i right on each of those assumptions? Are there any other issues i should be aware of under any other licensing terms for other licenses? | | {
"source": [
"https://softwareengineering.stackexchange.com/questions/88707",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2570/"
]
} |
88,745 | I started programming in C++ at uni and loved it. In the next term we changed to VB6 and I hated it. I could not tell what was going on: you drag a button onto a form and the IDE writes the code for you. While I hated the way VB functioned, I cannot deny that it was faster and easier than doing the same thing in C++, so I can see why it is a popular language. Now, I am not calling VB developers lazy; I'm just saying it is easier than C++, and I have noticed that a lot of newer languages are following this trend, such as C#. This leads me to think that as more businesses want quick results, more people will program like this, and sooner or later there will be no such thing as what we call programming now. Future programmers will tell the computer what they want and the compiler will write the program for them, like in Star Trek. Is this just an under-informed opinion of a junior programmer, or are programmers getting lazier and less competent in general? EDIT: A lot of answers say "why reinvent the wheel", and I agree with this, but when there are wheels available, people are not bothering to learn how to make the wheel. I can google how to do pretty much anything in any language, and half the languages do so much for you that when it comes to debugging, people have no idea what their code does or how to fix the error. That's how I came up with the theory that programmers are becoming lazier and less competent, as no one cares how stuff works, just that it does, until it does not. | No, developers haven't got lazier or less competent. Yes, there is a steadily decreasing need for actual development, in the sense that you know it. And yes, this is very much because businesses want quick results, and why shouldn't they? However, there is an end-point. There will always be a need for some developers. A lot of requirements are the same across different projects. The one you're talking about is UI code. Most UIs are made up of a specific set of fields - textbox, checkbox, radio, select, etc. - and there is really no point in developing these from scratch, over and over and over. So abstraction layers are put in to take away all of that boilerplate code. Likewise the data layer, which is usually nothing but Insert This, Delete This, Replace This and a large number of different views of the same data. Why keep writing that over and over? Let's invent ORMs. The only thing you should be developing is code that is unique to the business you're developing for. But there will always be that uniqueness - where there isn't, there is a business opportunity - and there will always be a need for people to write code. All that said, also bear in mind that there is a lot more to being a developer than writing code. Whether you are coding in pure assembly or knocking together Drupal components to make a content-driven site, you are translating the business need into something that the computer understands. The most important part of being a software developer is being able to understand the business requirement well enough to explain it to the computer. It doesn't really matter what language you're using to explain things to the computer, it only matters that you can. And this is hard work, nothing lazy about it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/88745",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/22193/"
]
} |
89,073 | I am refactoring a PHP OOP legacy website. I am so tempted to start using 'final' on classes to "make it explicit that the class is currently not extended by anything". This might save lots of time if I come to a class and I am wondering if I can rename/delete/modify a protected property or method. If I really want to extend a class I can just remove the final keyword to unlock it for extending. I.e., if I come to a class that has no child classes, I can record that knowledge by marking the class as final. The next time I come to it I would not have to re-search the codebase to see if it has children, thus saving time during refactorings. It all seems like a sensible time-saving idea... but I have often read that classes should only be made 'final' on rare/special occasions. Maybe it screws up Mock object creation or has other side effects that I am not thinking of. What am I missing? | I have often read that classes should only be made 'final' on rare/special occasions. Whoever wrote that is wrong. Use final liberally; there's nothing wrong with that. It documents that a class wasn't designed with inheritance in mind, and this is usually true for all classes by default: designing a class that can be meaningfully inherited from takes more than just removing a final specifier; it takes a lot of care. So using final by default is by no means bad. In fact, amongst OOP experts it's widely agreed that final should be the default, e.g. Jon Skeet: "Classes should be sealed by default in C#." Or Joshua Bloch: "Design and document for inheritance or else prohibit it" [Effective Java, 3rd Ed., Item 19]. Or Scott Meyers [More Effective C++, Item 33]. Which is why modern OO languages such as Kotlin have final-by-default classes. You wrote: Maybe it screws up Mock object creation ... And this is indeed a caveat, but you can always fall back on interfaces if you need to mock your classes. This is certainly superior to making all classes open to inheritance just for the purpose of mocking. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/89073",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6720/"
]
} |
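The question is about PHP, but the same "closed for inheritance unless stated otherwise" idea can be expressed in other languages too. As a hedged illustration in Python (names invented), typing.final lets a static type checker such as mypy flag accidental subclassing, although nothing is enforced at runtime:
from typing import final

@final
class InvoiceRenderer:
    """Not designed for inheritance; a type checker (e.g. mypy) will flag subclasses."""
    def render(self, invoice_id: int) -> str:
        return f"<invoice {invoice_id}>"

# class FancyInvoiceRenderer(InvoiceRenderer):   # mypy reports: cannot inherit from final class
#     ...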
89,158 | It seems that conventional wisdom suggests that good programmers are also good at math. Or that the two are somehow intrinsically linked. Many programming books I have read provide many examples that are solutions to math problems, or are somehow related to math, as if these examples are what make sense to most people. So the question I would like to float is: do you have to be good at math to be a good programmer? | I think it depends on what type of programming you want to do. As far as being a programmer in the business world goes, I would say that the answer is no. You can become a great programmer without knowing advanced mathematics. When you do end up having to deal with math, the formulas are usually defined in the business requirements, so it only becomes a matter of implementing them in code. On the flip side, if you want to become a low-level programmer or, say, create 3D graphics engines, mathematics will play a huge role. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/89158",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/32514/"
]
} |
89,266 | I'm often asked at some point during the interview process to compare myself to my peers. For example, one of my first after-graduation jobs asked me to compare myself to my classmates. A job I recently interviewed for asked me to compare myself to my coworkers. I always play this down quite a bit. I'm always worried that, "I'm miles above everyone around me," sounds too arrogant. When push comes to shove, though, it is the truth. I graduated at the top of my class. I had a 3.99, the highest GPA of anyone that year. My fellow students bitched and moaned about things like having to use the console to write "javac xxx.java" and build programs instead of just hitting the build button in VS. Most of them were utterly inept and I'd hate to see what happened to them in the real world. Others were miles above these people. There were like 3-5 of us that actually gave a damn, pursued our own education as if it mattered, and had whatever genes are necessary to think like a programmer or mathematician (the one guy I'd say was smarter than me was actually a math major--he graduated one year ahead of me or he would have taken my title). Even among these few "big hitters" I was one of, if not the, best (some of that was due to more experience though). For about 90% of the other students, though, I see this not as me being so good, but them being really that f'n bad. I was often dumbfounded not just by their ignorance, but by their unwillingness to do what it took to lose it. My peers in college were lazy, bemoaning, irresponsible sacks of stupidity that would rather run around puking from so much booze than put out the least amount of effort in learning anything. Then they blamed their ineptitude on the professors. As I entered the workforce I found that this trend continued. When I'm on the internet, talking to a worldwide populace of brilliant people, I'm rather mediocre. I'm smart, excited, etc... I'm still very good, but I'm much more able to see myself as a smaller fish in a larger ocean. Locally though, in personal real-life experience... what I find easy others find hard, even among what I'd call some of the best developers I've worked with. I know more about design, general development, and the specific language I use than anyone else I know. Part of this is, I know full well, the kind of places I've learned in and where I've worked (who don't have the money to pay me what I'm worth). Still though, if I were to fairly compare myself to my coworkers, and in years past my co-students... don't I come off as more than a little arrogant? Others see me this way too though. It actually took me a while to recognize that there's actually something significantly special about the way I approach my programming (I really care), my work ethic, and additionally my lucky roll in the gene game. I have seen it get to my head from time to time, and I try to avoid it, but in all honesty I'm just better than most. One thing that seems to differentiate me more than anything really is the fact that I continue to pursue greater knowledge at home, off hours. I'm one of the best because I want to be, and it shows significantly. I've found that this is actually fairly rare in the real world, though many Internet people have me beat here as well. Knowing that there's certainly many more people like this out there, in fact I know of many people on SE that are much smarter than I am, how do you approach this question? Do you answer honestly?
"I'm a fucking God that has to dumb down everything they do for the little people! The only way I can drag the rest along is by saying everything 20 times in 5 different ways." Or do you downplay yourself to make sure you don't come off as someone so damn arrogant they can't work with others? Edit: Yes, I make grammatical mistakes, and many more besides. I also suck at welding even though I tried very hard to get it. I also have a very hard time keeping my house plants alive. Some people are simply better at it. I'm simply better at programming. | You might say something like this (something I tried recently that worked relatively well). I see myself as a potential leader amongst my fellow co-workers. I am striving for this by offering advice on various programming tasks and leading the design and development of the projects I work on. An example of this is when I helped Bob the other day resolve a particularly complex problem. I offered him a number of methods he could use to resolve a problem he had been stuck with for a few days. Another example is during the recent team meeting of Project "Give 'em shit", where I offered and led the discussion on design by suggesting we use the Repository pattern for our database interaction. When the team were unsure of the benefits of this or how it works, I provided a detailed informal training session on the benefits and uses of this design pattern and where it helps resolve requirements. Throughout the day, Tim will often come and ask my advice on how to fix a problem he is experiencing, or Jane, who was asked to look into the latest Microsoft web design methodologies and didn't know where to start. I helped her by suggesting she look at the MVC architecture and ASP.NET web forms as starting points. I am constantly trying to improve my skills so that I can help progress my own development, push my boundaries and be able to relay that back to the team in healthy technical discussions and through the work I contribute. End. Being the "smartest", best programmer, or knowing the most about cutting-edge technology is sometimes not the primary trait a company is looking for. You need to find out what they cherish most, and while continuing on with what you are doing learning-wise, aim to come to the attention of your superiors in those areas. They might be looking for communication, teamwork or customer interaction, which is something I value just as highly in an employee. And try not to do so to the detriment of your relationship with your colleagues. The workplace can be just like the grown-up version of the school classroom. Just as brutal if you find yourself on the outside. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/89266",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9293/"
]
} |
89,273 | OO programming literature is full of design patterns. Most books on object-oriented programming dedicate a chapter or two to design patterns like factories and decorators. So what are the equivalent patterns in functional languages, and why hasn't anyone written a book about them yet? Is there something special about functional languages that obviates the need for design patterns? | OO and functional programming are two very different programming paradigms, and design patterns (DP) are a significant part of OO design and programming. DP do not have such a role in functional programming. One could even say that DP are not needed in functional programming -- there is no itch that DP are a cure for. One could argue that design patterns are a sign of missing features in a programming language. Peter Norvig found that 16 out of the 23 patterns in the Design Patterns book are "either invisible or simpler" in Lisp or Dylan. "Many patterns imply object-orientation or more generally mutable state, and so may not be as applicable in functional programming languages, in which data is immutable or treated as such." -- http://en.wikipedia.org/wiki/Design_pattern_%28computer_science%29 | {
"source": [
"https://softwareengineering.stackexchange.com/questions/89273",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
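One concrete illustration of Norvig's "invisible or simpler" observation, sketched in Python with invented names: the Strategy pattern, which in an OO language usually needs an interface plus one class per variant, collapses to passing a plain function once functions are first-class values:
# "Strategy" without the class hierarchy: the strategy is just a callable.
def by_price(item):
    return item["price"]

def by_name(item):
    return item["name"].lower()

def sort_catalog(items, strategy):
    # No interface, no subclasses; behaviour is parameterized by a function.
    return sorted(items, key=strategy)

catalog = [{"name": "Widget", "price": 9.5}, {"name": "gadget", "price": 3.0}]
print(sort_catalog(catalog, by_price))
print(sort_catalog(catalog, by_name))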
89,453 | I am just finishing my master's degree (in computing) and applying for jobs. I've noticed many companies specifically ask for an understanding of object orientation. Popular interview questions are about inheritance, polymorphism, accessors, etc. Is OO really that crucial? I even had an interview for a programming job in C, and half the interview was about OO. In the real world, developing real applications, is object orientation nearly always used? Are key features like polymorphism used A LOT? I think my question comes from one of my weaknesses. Although I know about OO, I don't seem to be able to incorporate it a great deal into my programs. | OOP is a paradigm that allows your program to grow without becoming impossible to maintain/understand. This is a point that students almost never get, because they just do little projects lasting from two weeks to two months at the most. This short period is not enough to make the objective of OOP clear, especially if the people on the project are beginners. But sticking to some modelling is crucial for big projects, I would say >50,000 lines of code. OOP isn't the only solution to that, but it is the most broadly used in the industry. This is why people want you to know OOP. I would add, from experience, that almost all junior programmers have serious flaws in modelling and OOP. Most of them know how to write classes, inherit from them and basic stuff like that, but they do not think in "OOP" and end up misusing it. This is why any serious recruiter will always look at what your competencies are in the OOP domain. As these things are not learned at school, there is a simply enormous variation in knowledge between different candidates. And let's be honest: I don't think someone with poor knowledge of OOP could work on any big project, simply because it would require more time for the lead devs to manage these people than to simply write the code themselves. If you don't think "OOP" yet, I would suggest you read some books about it and apply at a company that does not have really big projects, so you can get used to OOP while still doing useful work for your employer (and as long as he/she is giving you your salary, this will be useful for you too). EDIT: ha, and I would add that I have already written OOP code in C; even if it's not the most common usage of C, it is possible with strong knowledge. You just have to build vtables manually. And behind the OOP technique, something is hidden: software design. Software design is really helpful, in C as in any other language. Many recruiters will test your software design competencies, and OOP questions are good for that, but OOP isn't the main thing that is being tested here. This is why you have those questions even for a C job. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/89453",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/30177/"
]
} |
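The answer mentions building vtables by hand to write OOP-style code in C. The following sketch translates that idea into Python purely for illustration (the shape functions are invented): each "object" is a record carrying an explicit table of functions, and dispatch is done by looking the method up in that table, which is roughly what a C implementation or an OO language runtime does for you:
def circle_area(self):
    return 3.14159 * self["r"] ** 2

def square_area(self):
    return self["side"] ** 2

CIRCLE_VTABLE = {"area": circle_area}
SQUARE_VTABLE = {"area": square_area}

def make_circle(r):
    return {"vtable": CIRCLE_VTABLE, "r": r}

def make_square(side):
    return {"vtable": SQUARE_VTABLE, "side": side}

def call(obj, method, *args):
    # Dynamic dispatch by hand: look the method up in the object's vtable.
    return obj["vtable"][method](obj, *args)

shapes = [make_circle(2.0), make_square(3.0)]
print([call(s, "area") for s in shapes])   # polymorphism without a class keyword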
89,553 | Some people swear by closing their PHP files with ?>, some say it's more optimized to leave it off. I know that it's not essential to have it on there; I'm just wondering what the pros and cons are of doing this, and what best practice is. | It's not so much a matter of performance - parsing the trailing ?> is trivial and won't make any noticeable difference at all, unless you're including a million files per second. IIRC, php.net recommends NOT adding the ?>, and the reasons go something like this: (1) it's unnecessary; (2) it is easy to accidentally add significant whitespace after the ?>, which will be output to the client, which in turn can lead to obscure 'headers already sent' errors (this happens when an included file contains whitespace, and you try to set a header after including that file). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/89553",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/29187/"
]
} |
89,620 | When I write code I always try to make my code as clean and readable as possible. Every now and then there comes a time when you need to cross the line and go from nice clean code to slightly uglier code to make it faster. When is it OK to cross that line? | You cross the line when: (1) you have measured that your code is too slow for its intended use, and (2) you have tried alternative improvements that don't require mucking up the code. Here's a real-world example: an experimental system I am running was producing data too slowly, taking over 9 hours per run and using only 40% of CPU. Rather than mess up the code too much, I moved all the temporary files to an in-memory filesystem. Added 8 new lines of non-ugly code, and now CPU utilization is above 98%. Problem solved; no ugliness required. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/89620",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/21900/"
]
} |
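A minimal sketch of the "measure first" step from the answer above, in Python with an invented example; the absolute numbers will vary by machine, and the point is to compare alternatives with real measurements before deciding any ugliness is needed at all:
import timeit

def join_with_concat(parts):
    out = ""
    for p in parts:
        out += p          # suspected hot spot
    return out

def join_with_builtin(parts):
    return "".join(parts)

parts = ["x"] * 10_000
for fn in (join_with_concat, join_with_builtin):
    t = timeit.timeit(lambda: fn(parts), number=200)
    print(f"{fn.__name__}: {t:.3f}s")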
89,741 | I inherited an existing code base for a product that is reprehensibly sloppy. The fundamental design is woefully inadequate, which unfortunately I can do little about without a complete refactor (HIGH coupling, LOW cohesion, rampant duplication of code, no technical design documentation, integration tests instead of unit tests). The product has a history, high exposure to critical "cash-cow" clients with minimal tolerance for risk, technical debt that will make the Greeks blush, a VERY large codebase and complexity, and a battle-weary, defeatist approach to bugs by the team before me. The old team jumped ship to another division so that they have the opportunity to ruin another project. It is very rare that I experience a Technical Incompetency Project Failure as opposed to a Project Management Failure, but this is indeed one of those cases. For the moment I am by myself, but I have a lot of time, freedom of decision and future direction, and the ability to build a team from scratch to help me. My question is to gather opinion on low-impact refactoring on a project like this when you have some free time during the functional requirements gathering phase. There are thousands of compiler warnings, almost all of them unused imports, unread local variables, absence of type checking and unsafe casts. Code formatting is so unreadable and sloppy that it looks like the coder suffered from Parkinson's disease and couldn't control the number of times the space bar was pressed on any given line. Further, database and file resources are typically opened and never closed safely. Pointless method arguments, duplicate methods that do the same thing, etc. While I am waiting for requirements for the next feature I have been cleaning low-impact, low-risk things as I go, and wondered if I am wasting my time or doing the right thing. What if the new feature means ripping out code that I spent time on earlier? I am going to start an Agile approach and I understand it is acceptable and normal to constantly refactor during Agile development. Can you think of any positive or negative impacts of me doing this that you would like to add? | Firstly I'd like to point out that unit tests are not a replacement for integration tests. The two need to exist side-by-side. Be grateful that you have integration tests, otherwise one of your small refactorings could well make one of the low-tolerance cash cows go ballistic on you. I would start to work on the compiler warnings and unused variables and imports. Get a clean build first. Then start to write unit tests to document the current behaviour, then start the real refactoring. I can't really see any negative impact. You will gain a lot of deep understanding of the code base, which will help with bigger refactorings. It is almost always preferable to refactor than to rewrite, since during refactoring you still have a working product, whereas during the rewrite you don't. And in the end the sales of the product have to pay your salary. Once the requirements are starting to come in, I would use what I call the spotlight approach. Do the usual agile thing (prioritize, then cut off a small slab for an iteration, and work through that) and leave quite a bit of time for code improvements. Then refactor where you are working anyway. Over time this will cover wide areas of the code base without you ever having to venture into areas where you would have difficulty justifying to management why you are working on that part of the code. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/89741",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/25476/"
]
} |
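The "write unit tests to document the current behaviour" step can be done with characterization (golden-master) tests. A minimal sketch in Python, with an invented legacy_price routine standing in for the real legacy code; the expected values are recorded from what the code does today, not taken from a specification:
import unittest

def legacy_price(quantity, unit_price):
    # Stand-in for a tangled legacy routine; we pin its behaviour before touching it.
    total = quantity * unit_price
    if quantity > 10:
        total = total * 0.9   # undocumented bulk discount discovered by reading the code
    return round(total, 2)

class CharacterizationTests(unittest.TestCase):
    def test_current_behaviour_is_pinned(self):
        # Expected values come from running the existing code once and recording the output.
        self.assertAlmostEqual(legacy_price(1, 9.99), 9.99, places=2)
        self.assertAlmostEqual(legacy_price(11, 9.99), 98.90, places=2)

if __name__ == "__main__":
    unittest.main()
Once such tests are in place, any refactoring that changes the recorded behaviour fails fast, which is exactly the safety net the answer recommends building before the real work starts.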
89,761 | I know it seems odd to say, but a fellow programmer at work deliberately used a couple of bad programming practices on purpose! I'll explain. First let me say that he's an intelligent guy and for the most part he writes intelligible code. He was asked to implement licensing on a web application project written in Java. Since it's Java, if one really wanted to, one could probably hack open the jars and read the names of the classes and methods written inside. His solution to this problem was, quite literally, to give variables and methods awkward, less-than-obvious names and to plant them inside already congested classes rather than generating new classes. His justification was that if a hacker wanted to switch out certain classes in order to bypass licensing checks (and therefore get a free copy of the product), he'd have a far more difficult time of it if it weren't obvious which methods perform these particular tasks. Only after he had done it did I confront him about it, suggesting that we could perhaps buy some sort of obfuscator library to do it for us, while maintaining good programming practices. He claims to not have had the time or resources to search for that kind of solution. ...Which leaves me with a dilemma. Do I look for an obfuscator library in Java and fix his old code (he might be a little touchy about my remodeling his code), or do I leave it as is, as much as that irks me to no end? | Security through obfuscation is never good security. There must be better ways of protecting your intellectual property. And that is what you and your colleague should bring up as a joint concern with your manager. If management then decides that they don't want to spend the time or money on improved security, then both of you will have to live with that decision (it's not your product, it's the company's product) and you'd better not spend (waste?) any more time on the subject. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/89761",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6967/"
]
} |