Columns: source_id (int64, 1 to 4.64M), question (string, 0 to 28.4k chars), response (string, 0 to 28.8k chars), metadata (dict)
122,014
I've seen the book Working Effectively with Legacy Code recommended a few times. What are the key points of this book? Is there much more to dealing with legacy code than adding unit/integration tests and then refactoring?
The key problem with legacy code is that it has no tests. So you need to add some (and then more...). This in itself would take a lot of work, as @mattnz noted. But the special problem of legacy code is that it was never designed to be testable. So typically it is a huge convoluted mess of spaghetti code, where it is very difficult or downright impossible to isolate small parts to be unit tested. So before unit testing, you need to refactor the code to make it more testable. However, in order to refactor safely, you must have unit tests to verify that you haven't broken anything with your changes... This is the catch-22 of legacy code.

The book teaches you how to break out of this catch by making the absolute minimal, safest changes to the code just to enable the first unit tests. These aren't meant to make the design nicer - only to enable unit tests. In fact, sometimes they make the design uglier or more complex. However, they allow you to write tests - and once you have unit tests in place, you are free to make the design better. There are lots of tricks to make the code testable - some are sort of obvious, some are not at all. There are methods I would never have thought of myself without reading the book.

But what is even more important is that Feathers explains what precisely makes a unit of code testable. You need to cut dependencies and introduce barriers into your code, for two distinct reasons: sensing - in order to check and verify the effects of executing a piece of code - and separation - in order to get the specific piece of code into a test harness in the first place. Cutting dependencies safely can be tricky. Introducing interfaces, mocks and dependency injection is clean and nice as a goal, just not necessarily safe to do at this point. So sometimes we have to resort to subclassing the class under test in order to override some method which would normally, e.g., start a direct request to a DB. Other times, we might even need to replace a dependency class/jar with a fake one in the test environment...

To me, the most important concept brought in by Feathers is seams. A seam is a place in the code where you can change the behaviour of your program without modifying the code itself. Building seams into your code enables separating the piece of code under test, but it also enables you to sense the behaviour of the code under test even when it is difficult or impossible to do so directly (e.g. because the call makes changes in another object or subsystem whose state is not possible to query directly from within the test method).

This knowledge allows you to notice the seeds of testability in the nastiest heap of code, and to find the minimal, least disruptive, safest changes to get there. In other words, it lets you avoid making "obvious" refactorings which risk breaking the code without you noticing - because you don't yet have the unit tests to detect that.
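To make the subclass-and-override trick concrete, here is a minimal Java sketch (my own illustration with made-up names, not code from the book): the protected loadInvoice method is the seam, and a test-only subclass overrides it so the logic in process() can run inside a test harness without a real database.

    // Minimal stubs so the sketch compiles; in real legacy code these already exist.
    class Invoice {
        Invoice(long id, String data) { /* ... */ }
    }
    class Database {
        static Invoice query(String sql) { return null; /* real DB access in production */ }
    }

    // Legacy class: hard to test because it talks to the database directly.
    class InvoiceProcessor {
        void process(long invoiceId) {
            Invoice invoice = loadInvoice(invoiceId);
            // ... the convoluted business logic we actually want under test ...
        }

        // The seam: behaviour can be changed from outside without editing process().
        protected Invoice loadInvoice(long invoiceId) {
            return Database.query("SELECT * FROM invoices WHERE id = " + invoiceId);
        }
    }

    // Test-only subclass overrides the seam, so no database is needed for sensing.
    class TestableInvoiceProcessor extends InvoiceProcessor {
        @Override
        protected Invoice loadInvoice(long invoiceId) {
            return new Invoice(invoiceId, "canned data for the test");
        }
    }

This is exactly the kind of change the answer describes: it arguably makes the design a little uglier, but it lets the first tests exist.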
{ "source": [ "https://softwareengineering.stackexchange.com/questions/122014", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/5402/" ] }
122,126
Throughout my career I have worked at companies that had a collection of different environments for different purposes. We always had, more or less, a desktop environment, a test environment, a QA environment, a staging environment and a production environment. This went for both servers/applications and any data sources we were using. When I started at my current company I found that 90% of the apps were either developed in a desktop environment against production data sources, or developed directly on the production server, depending on the platform. This wasn't particularly surprising, as I was hired in part to make changes to improve the way the development team functioned, which was clear from my interview process.

We slowly started to turn the philosophy around, and pretty soon most of the apps could be run in either a desktop, test or production environment. Not too long after that, staging came around as well. Now most of our developers see the benefit of this methodology and defend it vigilantly. However, we have a number of legacy apps that never got migrated, and a number of legacy programmers who think of this as a waste of time. Unfortunately, we got lip service but never full buy-in from management. We got what we thought was a commitment to invest substantially in this about a year ago, but nothing materialized despite the considerable planning that we put into it.

Now we are finding that we need more and more environments. We need help from the server/network administration teams for setup, and we need participation from the business stakeholders to support the release cycle. We are at a place now where a project can function in what reasonable developers would consider a "normal" way only if you have the right people on the project and the time to set up the proper environments. I'd love to present a complete argument, but management really has no time or interest in hearing me out until there is a critical issue.

I can't really articulate the benefits simply, as it has always just seemed second nature to me. I was wondering if there are any good, simple, irrefutable reasons for the separation of environments that would get managers lacking development experience to support this idea? Are there any good resources/literature on the topic?
The answer: money.

I don't care what the actual reason is. Money MUST be at the root of all of your reasoning, especially when dealing with management. If we both sat in a room for 2 hours, we could come up with dozens of reasons why it is better to have multiple environments. Here's the problem: if the reasons are not based on money, then none of them matter. Programmers are not hired to be smart. They're not hired to be creative. They're hired to increase revenue -- either by earning money or by saving money. If you're not doing either one of those, you'd better get your resume together. When looking at it from that standpoint, the answer is simple:

Having only one environment increases our downtime and results in lost revenue. Multiple environments allow us to protect our profits by giving our users a front-end that is just as reliable and dependable as our company.

Repeat it every day.

There are some great comments below that add some real value to this answer, so I'll mention them:

- Karl Bielefeldt had a great point when he mentioned that cost/benefit analysis is an important factor. An economist might refer to it as the opportunity cost of pursuing multiple environments. While it may be surprising to hear, there are scenarios where multiple environments may not be the answer! If your company's website is a very minor addition, then unexpected downtime may actually be the more cost-effective way of doing business. This doesn't sound like the position you are in, but it is worth mentioning.
- BlairHippo had a good point in that you should feel free to make it seem like a catastrophe (and if you lose your data, it is!). Liability is a great tool for persuading managers, but still for the same reason -- lawsuits are expensive. Avoiding them saves money.

As an addendum, I found this article to be quite good. It doesn't directly answer your question, but it helps you recognize how programmers are viewed by management, which in turn leads to this answer. Good read.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/122126", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/39757/" ] }
122,150
I work on a small team of devs, 4 guys. They have all used source control. Most of them can't stand source control and instead choose not to use it. I strongly believe source control is a necessary part of professional development. Several issues make it very difficult to convince them to use source control:

- The team is not used to using TFS. I've had 2 training sessions, but was only allotted 1 hour, which is insufficient.
- Team members directly modify code on the server. This keeps code out of sync, requiring comparison just to be sure you are working with the latest code, and complex merge problems arise.
- Time estimates offered by developers exclude the time required to fix any of these problems. So if I say "no, it will take 10x longer", I have to constantly explain these issues and risk management perceiving me as "slow".
- The physical files on the server differ in unknown ways across ~100 files. Merging requires knowledge of the project at hand and, therefore, developer cooperation, which I am not able to obtain.
- Other projects are falling out of sync. Developers continue to distrust source control and therefore compound the issue by not using it.
- Developers argue that using source control is wasteful because merging is error prone and difficult. This is a difficult point to argue, because when source control is so badly mis-used and continually bypassed, it is indeed error prone. Therefore, the evidence "speaks for itself" in their view.
- Developers argue that directly modifying server code, bypassing TFS, saves time. This is also difficult to argue, because the merge required to synchronize the code in the first place is time consuming. Multiply this by the 10+ projects we manage.
- Permanent files are often stored in the same directory as the web project, so publishing (a full publish) erases files that are not in source control. This also drives distrust of source control, because "publishing breaks the project". Fixing this (moving stored files out of the solution subfolders) takes a great deal of time and debugging, as these locations are not set in web.config and often exist across multiple code points.

So the culture perpetuates itself: bad practice begets more bad practice, and bad solutions drive new hacks to "fix" much deeper, much more time-consuming problems. Servers and hard drive space are extremely difficult to come by, yet user expectations are rising. What can be done in this situation?
It's not a training issue, it's a human factors issue. They do not want to, and are creating road blocks. Deal with the broken group dynamics: what is the root cause of their objection? It is usually fear - is it just fear of change, or is it something more sinister?

No professional developer today, or for the last 20 years, has resisted source control. Once, about 30 or 40 years ago, when computers were slow, source control even slower, and programs consisted of one 500-line file, it was a pain and there were valid reasons not to use it. These objections can only be coming from the worst kind of cowboys I can think of. Is the system forced on them, making their lives difficult in some way? Find out what it is, and change the system to invalidate the objection. Repeat until done.

I suggest looking at Git or Mercurial. You can create a repository in each source code tree; they won't even notice and can keep on hacking the way they do now. You can track the changes they have hacked into the code base, make commits, merge them into other source trees, etc. When they see you do a merge of a week's worth of effort into another product in seconds, they might change their ideas. When you roll back one of their screw-ups with one command, and save their ass, they might even thank you (don't count on it). At worst, you look good in front of the boss and, for a bonus, make them look like the cowboys they are.

"Merging would take not only a great knowledge of the project..."

In that case, I am afraid you're up the proverbial creek with no paddle. If merging is not an option, neither is implementing it, so you are saying that you can no longer add features you already have in one branch (term used loosely) to another. If I were you I would reconsider my career prospects...
{ "source": [ "https://softwareengineering.stackexchange.com/questions/122150", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/11107/" ] }
122,191
Given how much simpler jQuery development is, when compared to native JavaScript, what makes people forgo libraries like jQuery altogether? Is this because jQuery has limitations or it is slow? I mean, if jQuery is so easy compared to native javascript, what reasons do people have to still use pure javascript?
Let's talk about cars. Oh wait, we already did - remember that time we met, some time ago? We talked about cars. In fact, you seemed to be quite the expert on cars. You were able to explain, in detail, all of what's right, wrong, and exciting about the latest Formula 1 race. You knew by heart all of Lamborghini's models, including their price and availability. You even had thoughts of purchasing your own Ferrari 599 GTB Fiorano and were saving up for it (I bet the steak dinner didn't help much). While explaining the faults of Toyota in a great, excited voice, you suddenly jumped from your chair and screamed into the air, waving your fists about: "Damn it all, I'm a magnificent expert on all things related to cars! I'm going to be a car mechanic!"

And so you went. You had an interview, the Boss Man was just as impressed as I with your knowledge, and you were hired. The first client came in. His clutch was broken. You inspected it and didn't know what to do. As a matter of fact, you had absolutely no idea how to follow the advice the Boss Man gave you. You were fired.

But how could that be!? You know everything about cars! Except for ... everything about cars. You can very well know your dream car has a V12 engine, but you don't know what that actually means. So you're not a car mechanic, really - you're a car enthusiast. And until you learn how cars work, you will remain an enthusiast.

Now let me ask you. How does $.fn.text work? And what about $.fn? What do they really mean? How does $(something) return a gigantic thingy containing things, and what is that thingy exactly? Can you replicate their functionality, at least a bit, in theory even? Can you cope without jQuery?

Saying that "native JavaScript is hard" is just ... false. First and foremost, because JavaScript as a language has nothing to do with the DOM, which is mainly what jQuery abstracts. Second, because once you learn a bit about the DOM, you can already cruise through the most common cross-browser bugs. But just a little secret - everything is hard at first. Long division was a bitch in 5th grade.

As a second analogy for this answer: jQuery is to the JavaScript DOM (not JavaScript the language, just the DOM) as Array.prototype.forEach is to for. It works, for 99% of the cases. And it works well. But for that 1% which isn't covered, you need to know how to use the for loop, if only to be practical.

This entire answer is based on the "purer" side of the question, and not even the technical side (the library's size, for example, and several other things as explained in Michael Dorrant's answer). Because I love JavaScript, and when people seem to just throw it aside casually, saying "pah, those silly javascriptians" and waving fancy white gloves, it comes down to morality.

If you can accept the fact that you'll always be a JavaScript enthusiast, then who am I to stop you? But if you want to be a JavaScript programmer, you first have to have the knowledge to at least choose between using jQuery (or any other library) and not using a library. Learn the DOM. Learn how to use it. Write your own small library or just some collection of helper functions. And once you are knowledgeable about the DOM, and you choose to use jQuery - godspeed. Laziness is a reward for those who have worked hard.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/122191", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/53281/" ] }
122,205
I often heard the claim that dynamically typed languages are more productive than statically typed languages. What are the reasons for this claim? Isn't it just tooling with modern concepts like convention over configuration, the use of functional programming, advanced programming models and use of consistent abstractions - all of which could be used in statically typed languages as well? Admittedly there is less clutter because the (for instance in Java) often redundant type declarations are not needed, but you can also omit most type declarations in statically typed languages that use type inference, without losing the other advantages of static typing. And all of this is available for modern statically typed languages like Scala as well. So: what is there to say for productivity with dynamic typing that really is an advantage of the type model itself? Clarification: I'm more interested in big / medium sized projects than in quick hacks. :-)
I actually think it's a pretty close call. Both dynamic typing and static typing have their advantages.

Reasons for dynamic typing being more productive:

- It's more concise - a lot of extraneous boilerplate code can be removed if everything is dynamically typed: type declarations, typecasting logic, etc. All other things being equal, shorter code is marginally quicker to write, but more importantly it can be quicker to read and maintain (since you don't need to wade through many pages of code to get a grip on what is happening).
- Easier to "hack" - techniques such as duck typing and monkey patching can get you results very quickly (although they might confuse you later on...).
- More interactive - dynamic typing is arguably more suitable for interactive, REPL-like programming for rapid prototyping, real-time debugging of running program instances, or even live coding.
- Test cases can catch the runtime errors - assuming you are using TDD or at the very least have a good test suite, this should pick up any typing issues in your code.
- Better polymorphism - dynamic languages are potentially more likely to encourage the creation of polymorphic functions and abstractions, which can boost productivity and code re-use. Clojure, for example, makes great use of dynamic polymorphism in its many abstractions.
- Prototypes - prototype-based data/object models are in my view more powerful and flexible than statically typed inheritance hierarchies. Dynamic languages are more likely to allow or encourage a prototype-based approach, JavaScript being a great example.

Reasons for static typing being more productive:

- Better design - being forced to think about the types of values in your software up front can push you towards cleaner, more logical solutions. (I say can - it's still possible to design really bad code...)
- Better compile-time checking - static typing can enable more errors to be caught at compile time. This is a huge advantage, and is arguably the best thing about statically typed languages overall.
- Auto-completion - static typing can also give more information to the IDE, so that auto-completion of code or documentation lookup is more effective.
- Discourages hacks - you have to keep type discipline in your code, which is likely to be an advantage for long-term maintainability.
- Type inference - in some languages (e.g. Scala) this can get you many of the conciseness benefits of dynamic languages while still maintaining type discipline.

On average my conclusion (after many years of experience on both sides of the fence) is that dynamic typing can be more productive in the short term, but it ultimately becomes difficult to maintain unless you have very good test suites and testing discipline. On the other hand, I actually prefer statically typed approaches overall, because I think the correctness benefits and tool support give you better productivity in the long term.
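For concreteness, here is a small Java illustration of the last bullet (my own example; the answer mentions Scala, but Java's local-variable type inference, available since Java 10, makes the same point): the declaration clutter goes away while type errors are still rejected before the program runs.

    import java.util.ArrayList;

    public class InferenceDemo {
        public static void main(String[] args) {
            // Inferred as ArrayList<String>; no redundant left-hand-side type.
            var names = new ArrayList<String>();
            names.add("Ada");
            names.add("Grace");
            // names.add(42);  // rejected at compile time, not discovered at runtime

            int totalLength = names.stream().mapToInt(String::length).sum();
            System.out.println(totalLength); // prints 8
        }
    }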
{ "source": [ "https://softwareengineering.stackexchange.com/questions/122205", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/24912/" ] }
122,206
There are more than a few questions that hop into mind when someone thinks about Google's indexing services. Jeff Atwood wrote about them in The Elephant in the Room: Google Monoculture and Trouble In the House of Google. I have two questions:

- How does Google index dynamic websites? This site has dynamic pages: QUESTIONS, TAGS, USERS, BADGES, UNANSWERED, ASK QUESTION. The content of these pages is dynamically generated, so we access the dynamic content and not physical files on the server. But how does Google show every question of this site, or of other dynamic websites?
- What does Google index and keep on its servers? Does it copy the complete page onto its servers, or just the title, meta tags and body?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/122206", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/32803/" ] }
122,209
I am writing a program to simulate the activity of ants in a grid (PDF). The ant can move around, pick up things and drop things. The problem is that, while the actions and positions of each ant can easily be tracked by class attributes (and we can easily create many instances of such ants), my client said that since he has a background in functional programming he would like the simulation to be made using functional programming. To be clear, the client's original words were only "no class", not "functional programming", so I assume he doesn't mean functional programming and I could do it imperatively. Additionally, I have no prior experience in functional programming. However, I think it's beneficial to treat this question as being specifically about a functional programming requirement rather than simply "do it imperatively". How would you handle this situation? Would you try to persuade your client that using object-oriented programming is much cleaner, try to follow what he required and give him poor-quality code, or do something else?
Object-oriented code is not by definition cleaner, and conversely non-OO code is not by definition crappy. While there does seem to be a rather obvious object-oriented mapping to this particular problem, I would suggest that you try the functional programming approach anyway. Give it your best shot, try to solve the problem in the best functional programming style that you can muster, and you might just learn something you didn't expect.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/122209", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/15309/" ] }
122,233
Example:

    foobar = new InputStreamReader(p.getInputStream(), "ISO-8859-1");

Since the encoding is hardcoded and correct, the constructor will never throw the UnsupportedEncodingException declared in the specification (unless the Java implementation is broken, in which case I'm lost anyway). Still, Java forces me to deal with that exception. Currently, it looks like this:

    try {
        foobar = new InputStreamReader(p.getInputStream(), "ISO-8859-1");
    } catch(UnsupportedEncodingException e) {
        /* won't ever happen */
    }

Any ideas how to make it better?
My habit is, just to be on the safe side, to put an assert into the catch block. Someone might change the contents of the try block later, and you do want to know if the code fails, don't you?
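A minimal Java sketch of that habit (my own code, not from the answer); the thrown AssertionError stands in for the assert, so the failure is loud even when assertions are disabled:

    import java.io.InputStream;
    import java.io.InputStreamReader;
    import java.io.Reader;
    import java.io.UnsupportedEncodingException;

    final class Latin1 {
        static Reader reader(InputStream in) {
            try {
                return new InputStreamReader(in, "ISO-8859-1");
            } catch (UnsupportedEncodingException e) {
                // "Won't ever happen" -- but if someone later edits the charset name,
                // fail fast instead of silently swallowing the exception.
                throw new AssertionError("ISO-8859-1 is required on every Java platform", e);
            }
        }
    }

On Java 7 and later the whole issue can also be sidestepped with new InputStreamReader(in, java.nio.charset.StandardCharsets.ISO_8859_1), which declares no checked exception at all.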
{ "source": [ "https://softwareengineering.stackexchange.com/questions/122233", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/6617/" ] }
122,347
I started a programming blog earlier this year, and since I started it some things have changed. Some changes are due to technology changing, some changes are due to my code libraries improving, and some (ok, probably most) are due to me changing as I learn more. I want to go back and completely re-write certain blog posts. Is it better to rewrite posts to remove old information and update them with new stuff, or to create entirely new posts and possibly take down old ones? I'm not talking about small changes to the code, or an extra few sentences, but complete rewrites with new code, new information, etc. Some things to consider are comments on the post, subscribers who receive updates when new posts are created, and user bookmarks.
I think the best approach is to leave the old entries/code samples where they are, and to add new ones with your new ideas/code samples. Then you can include a link on the older posts pointing to the new ones. This approach allows both you and your visitors to see how your code/ideas evolved over time, which could be valuable. And there's nothing wrong with admitting the stuff you did/wrote a while ago was not as good as it could have been. The very fact that you are recognizing that is a sign of your progress.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/122347", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/1130/" ] }
122,372
Is browser fingerprinting a sufficient method for uniquely identifying anonymous users? What if you incorporate biometric data like mouse gestures or typing patterns? The other day I ran into the Panopticlick experiment EFF is running on browser fingerprints . Of course I immediately thought of the privacy repercussions and how it could be used for evil. But on the other hand, this could be used for great good and, at the very least, it's a tempting problem to work on. While researching the topic I found a few companies using browser fingerprinting to attack fraud. And after sending out a few emails I can confirm at least one major dating site is using browser fingerprinting as but one mechanism to detect fake accounts. (Note: They have found it's not unique enough to act as an identity when scaling up to millions of users. But, my programmer brain doesn't want to believe them). Here is one company using browser fingerprints for fraud detection and prevention: http://www.bluecava.com/ Here is a pretty comprehensive list of stuff you can use as unique identifiers in a browser: http://browserspy.dk/
First, I don't think it's realistic to expect users to have JavaScript disabled on the modern web. So let's take a look at what Panopticlick can gather through JavaScript alone, along with the uniqueness score of my particular browser:

- User Agent (1 in 4,184)
- HTTP_ACCEPT Headers (1 in 14)
- Browser Plugin Details (1 in 1.8 million)
- Time Zone (1 in 24)
- Screen Size and Color Depth (1 in 1,700)
- System Fonts (1 in 11)
- Cookies Enabled? (1 in 1.3)
- Limited SuperCookie test (1 in 2)

The standouts for uniqueness are clearly User Agent and Browser Plugins. Remember that these items are used together to form a browser fingerprint, so combined they are far stronger than any individual score. The cumulative uniqueness here is 4,184 x 14 x 1.8 million x 24 x 1,700 x 11 x 1.3 x 2, aka a REALLY BIG NUMBER. That's ... pretty unique.

I have Flash disabled at the moment, with "click to activate". Enabling Flash adds:

- System Fonts (1 in 374k)

Flash provides the second most unique detectable element, but given the enormous number even the default JavaScript detection in Panopticlick produces, I'm not sure Flash is necessary for this sort of browser fingerprinting to work. Just JavaScript being enabled is enough.

Browser fingerprinting is merely a part of the story, though. Consider the sum of everything we can detect from anonymous users, because it can all work together to fingerprint them. How difficult is it to gather and use the detected data?

- Browser detail sniffing, as shown above (easy)
- IP Address, which has a known level of reliability with pros and cons (easy)
- User behavior patterns such as usage (time of day), typing, mouse or finger movements, word use (hard; some server side, some client side)

One thing I worry about with browser sniffing alone is how trivially easy it is for users to switch browsers. There are at least four great and free browser alternatives on most platforms: Chrome, Opera, Firefox, Safari. So to break the browser sniffing, or at least interrupt it, you could switch browsers frequently.

It's worth mentioning so-called SuperCookies here, since they can actually work, in some cases, even if you switch browsers and even if JavaScript, HTML5 Local Storage, and Flash are disabled:

"A privacy researcher has revealed the evil genius behind a for-profit web analytics service capable of following users across more than 500 sites, even when all cookie storage was disabled and sites were viewed using a browser's privacy mode."

(If you're curious, the TL;DR version is that they do this by exploiting obscure principles of the ETag header.)

Anyway, getting back to browser sniffing -- there are two somewhat inconvenient things users can do to defeat it:

- Constantly switch browsers.
- Always browse with JavaScript and Flash disabled.

However, if the user doesn't know that their browser settings are being sniffed and used as part of the method to fingerprint them, I highly doubt they would go to the trouble of doing these two things. It's work.

Based on the above data, I believe browser sniffing can help identify the typical anonymous internet user -- but it is only effective in combination with the other things we typically detect from anonymous internet users, like IP Address.
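(Multiplying those figures out myself, 4,184 x 14 x 1,800,000 x 24 x 1,700 x 11 x 1.3 x 2 comes to roughly 1.2 x 10^17 combinations; that is my own back-of-the-envelope arithmetic, not a number reported by Panopticlick.)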
{ "source": [ "https://softwareengineering.stackexchange.com/questions/122372", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/41787/" ] }
122,440
Say you have a document with an essay written. You want to parse this essay to only select certain words. Cool. Is using a regular expression faster than parsing the file line by line and word by word looking for a match? If so, how does it work? How can you go faster than looking at each word?
How does it work? Take a look at automata theory.

In short, each regular expression has an equivalent finite automaton and can be compiled and optimized to a finite automaton. The algorithms involved can be found in many compiler books, and they are used by Unix programs like awk and grep. However, most modern programming languages (Perl, Python, Ruby, Java (and JVM-based languages), C#) do not use this approach. They use a recursive backtracking approach, which compiles a regular expression into a tree or a sequence of constructs representing the various sub-chunks of the regular expression. Most modern "regular expression" syntaxes offer backreferences, which are outside the class of regular languages (they have no representation in finite automata) but are trivially implementable in the recursive backtracking approach.

The optimization usually does yield a more efficient state machine. For example: consider aaaab|aaaac|aaaad. A normal programmer can get the simple but less efficient search implementation (comparing three strings separately) right in ten minutes; but after realizing it is equivalent to aaaa[bcd], a better search can be done by first matching four 'a's and then testing the 5th character against [b,c,d]. The process of optimization was one of my compiler homework assignments many years ago, so I assume it is built into most modern regular expression engines.

On the other hand, state machines do have an advantage when accepting strings: by using more space than a "trivial implementation" (extra states that remember the input history), they can process each input character exactly once. Consider a program to un-escape quotations in SQL strings, that is: 1) the string starts and ends with single quotation marks; 2) single quotation marks are escaped by two consecutive single quotations. So: input ['a'''] should yield output [a']. With a state machine, the consecutive single quotation marks are handled by two states, as the following illustrates:

    S1 -> ' -> S2
    S1 -> * -> S1, output *   (* can be any other character)
    S2 -> ' -> S1, output '
    S2 -> * -> END, end the current string

So, in my opinion, a regular expression may be slower in some trivial cases, but it is usually faster than a manually crafted search algorithm, given that the optimization cannot be reliably done by a human. (Even in trivial cases like searching for a fixed string, a smart engine can recognize the single path in the state map and reduce that part to a simple string comparison, avoiding the state management.)

A particular engine from a framework/library may be slow because the engine does a bunch of other things a programmer usually doesn't need. Example: the Regex class in .NET creates a bunch of objects, including Match, Groups and Captures.
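As a small illustration of the alternation-versus-character-class point above (my own Java example, using the backtracking java.util.regex engine the answer mentions): the two patterns accept exactly the same strings, the second simply spells out the optimization by hand.

    import java.util.regex.Pattern;

    public class RegexEquivalence {
        public static void main(String[] args) {
            // Naive form: conceptually compares three 5-character alternatives.
            Pattern alternation = Pattern.compile("aaaab|aaaac|aaaad");
            // Hand-optimized form: match four 'a's, then test the 5th character once.
            Pattern optimized = Pattern.compile("aaaa[bcd]");

            for (String input : new String[] { "aaaab", "aaaac", "aaaad", "aaaae" }) {
                System.out.printf("%s -> %b %b%n",
                        input,
                        alternation.matcher(input).matches(),
                        optimized.matcher(input).matches());
            }
        }
    }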
{ "source": [ "https://softwareengineering.stackexchange.com/questions/122440", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/38464/" ] }
122,477
I have some computer science students in a compulsory introductory programming course who see a programming language as a set of magic spells, which must be cast in order to achieve some effect (instead of seeing it as a flexible medium for expressing their idea of solution). They tend to copy-paste code from previous, similar-looking assignments without considering the essence of the problem. Are there some exercises or analogies to make these students more confident that they can, and should, understand the structure and meaning of each piece of code they write?
You could present them a series of exercises, each one building on the previous while adding some extra element or twist to the problem, or investigating the issue from a different perspective, which reveals a weakness of the previous solution, requiring a new, different approach. This forces them to think about, analyse, modify and experiment with each solution, instead of just copy-pasting a ready-made piece of code.

Another possibility - although not strictly a programming task - is to ask them to estimate various things. E.g. how much water flows through the Mississippi delta per second? Such questions don't have a set answer, especially because one needs to make certain assumptions to get to a convincing (range of) value(s). And - although answers to many of these "classic" ones can indeed be googled - you can easily make up new ones which aren't (yet) found anywhere on the net. Examples of both of these kinds of exercises can be found in e.g. Programming Pearls by Jon Bentley. The Pragmatic Programmer also has some good challenges.

A third kind of task would be to present them some piece of code with (one or more) bugs in it, which they must find and fix. This again forces them to use their analytical skills and reason about how the program actually works.

Update: Feedback from a comment by Billy ONeal: "The problem with the 'series of exercises' is that students who have a problem with an earlier exercise are completely screwed for remaining exercises."

You are right, although I feel this is more about the general problem of setting course difficulty at the right level / grouping students of similar skill level together. Moreover, one can arrange the students into smaller groups where they are required to discuss and debate the problems and solve them together. If someone doesn't get it, the others can help (this setup would improve teamwork skills too). And if someone tries to be lazy and lets the others do all the work, it is surely noticed by the teacher (who is supposed to be walking around, supervising and mentoring the students, not playing WoW on his laptop in the corner ;-)). And one can also adjust the exercises to accommodate students of different skill levels. Beginners can go slower, experienced ones faster.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/122477", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/557/" ] }
122,482
Recently I've been working on refactoring parts of the code base I'm currently dealing with - not only to understand it better myself, but also to make it easier for others who are working on the code. I tend to lean on the side of thinking that self-documenting code is nice. I just think it's cleaner, and if the code speaks for itself, well... that's great. On the other hand we have documentation such as javadocs. I like this as well, but there's a certain risk that comments here get outdated (as well as comments in general, of course). However, if they are up-to-date they can be extremely useful for, say, understanding a complex algorithm. What are the best practices for this? Where do you draw the line between self-documenting code and javadocs?
Self-documenting code (and in-code comments) and Javadoc comments have two very different target audiences. The code and comments that remain in the code file are for developers. You want to address their concerns here - make it easy to understand what the code does and why it is the way it is. The use of appropriate variable names, methods, classes, and so on (self-documenting code) coupled with comments achieves this. Javadoc comments are typically for users of the API. These are also developers, but they don't care about the system's internal structure, just the classes, methods, inputs, and outputs of the system. The code is contained within a black box. These comments should be used to explain how to do certain tasks, what the expected results of operations are, when exceptions are thrown, and what input values mean. Given a Javadoc-generated set of documentation, I should fully understand how to use your interfaces without ever looking at a line of your code.
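A short Java fragment to illustrate the split (a hypothetical method, my own example): the Javadoc addresses API users who treat the method as a black box, while the inline comment addresses maintainers reading the implementation.

    import java.time.LocalDate;

    final class Dates {
        /**
         * Parses an ISO-8601 calendar date such as {@code 2011-11-21}.
         *
         * @param text the date string in {@code yyyy-MM-dd} form, never {@code null}
         * @return the parsed date
         * @throws java.time.format.DateTimeParseException if {@code text} is malformed
         */
        static LocalDate parseDate(String text) {
            // For maintainers: LocalDate.parse defaults to ISO_LOCAL_DATE,
            // so no explicit DateTimeFormatter is needed here.
            return LocalDate.parse(text);
        }
    }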
{ "source": [ "https://softwareengineering.stackexchange.com/questions/122482", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/19888/" ] }
122,485
This is a minor niggle, but every time I have to code something like this, the repetition bothers me, but I'm not sure that any of the solutions aren't worse.

    if(FileExists(file))
    {
        contents = OpenFile(file); // <-- prevents inclusion in if
        if(SomeTest(contents))
        {
            DoSomething(contents);
        }
        else
        {
            DefaultAction();
        }
    }
    else
    {
        DefaultAction();
    }

Is there a name for this kind of logic? Am I a tad too OCD? I'm open to evil code suggestions, if only for curiosity's sake...
Extract it to a separate function (method) and use a return statement:

    if(FileExists(file))
    {
        contents = OpenFile(file); // <-- prevents inclusion in if
        if(SomeTest(contents))
        {
            DoSomething(contents);
            return;
        }
    }
    DefaultAction();

Or, maybe better, separate getting the contents from processing them:

    contents_t get_contents(name_t file)
    {
        if(!FileExists(file))
            return null;
        contents = OpenFile(file);
        if(!SomeTest(contents)) // like IsContentsValid
            return null;
        return contents;
    }
    ...
    contents = get_contents(file);
    contents ? DoSomething(contents) : DefaultAction();

Upd: Why not exceptions, and why OpenFile doesn't throw an IO exception: I think this is really a generic question rather than a question about file IO. Names like FileExists and OpenFile can be confusing, but if you replace them with Foo, Bar, etc., it becomes clearer that DefaultAction may be called as often as DoSomething, so it may be a non-exceptional case. Péter Török wrote about this at the end of his answer.

Why there is a ternary conditional operator in the 2nd variant: if there were a [C++] tag, I'd have written an if statement with the declaration of contents in its condition part:

    if(contents_t contents = get_contents(file))
        DoSomething(contents);
    else
        DefaultAction();

But for other (C-like) languages, if(contents) ...; else ...; is exactly the same as the expression statement with the ternary conditional operator, only longer. Because the main part of the code was the get_contents function, I just used the shorter version (and also omitted the contents type). Anyway, it's beyond this question.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/122485", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/4285/" ] }
122,569
I have just completed my Master's degree in Computer Science and have gotten my first job interview as a developer. I do not have much experience in large-scale development projects, but I am hoping my university education counts for something. I am wondering, what materials should I bring that would impress my interviewers? What do most interviewers expect, especially from a new graduate?

Edit: The job interview went OK, except I forgot my pants. Thanks for all the great advice!
A notepad and pen are good, but bring some humility and enthusiasm - that will impress the interviewer the most ;-) And pants - remember to wear pants.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/122569", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/-1/" ] }
122,608
As a C & Objective-C programmer, I'm a bit paranoid about compiler warning flags. I usually try to find a complete list of warning flags for the compiler I use, and turn most of them on, unless I have a really good reason not to. I personally think this may actually improve coding skills, as well as potential code portability, and prevent some issues, as it forces you to be aware of every little detail and of potential implementation and architecture issues, and so on... It's also, in my opinion, a good everyday learning tool, even if you're an experienced programmer.

For the subjective part of this question, I'm interested in hearing other developers (mainly C, Objective-C and C++) on this topic. Do you actually care about stuff like pedantic warnings, etc.? And if yes or no, why?

Now about Objective-C: I recently completely switched to the LLVM toolchain (with Clang), instead of GCC. On my production code, I usually set these warning flags (explicitly, even if some of them may be covered by -Wall):

    -Wall -Wbad-function-cast -Wcast-align -Wconversion -Wdeclaration-after-statement -Wdeprecated-implementations -Wextra -Wfloat-equal -Wformat=2 -Wformat-nonliteral -Wfour-char-constants -Wimplicit-atomic-properties -Wmissing-braces -Wmissing-declarations -Wmissing-field-initializers -Wmissing-format-attribute -Wmissing-noreturn -Wmissing-prototypes -Wnested-externs -Wnewline-eof -Wold-style-definition -Woverlength-strings -Wparentheses -Wpointer-arith -Wredundant-decls -Wreturn-type -Wsequence-point -Wshadow -Wshorten-64-to-32 -Wsign-compare -Wsign-conversion -Wstrict-prototypes -Wstrict-selector-match -Wswitch -Wswitch-default -Wswitch-enum -Wundeclared-selector -Wuninitialized -Wunknown-pragmas -Wunreachable-code -Wunused-function -Wunused-label -Wunused-parameter -Wunused-value -Wunused-variable -Wwrite-strings

I'm interested in hearing what other developers have to say about this. For instance, do you think I missed a particular flag for Clang (Objective-C), and why? Or do you think a particular flag is not useful (or not wanted at all), and why?

EDIT: To clarify the question, note that -Wall only provides a few basic warnings. There are actually a lot more warning flags, not covered by -Wall, hence the question and the list I provide.
For context, I'm a Clang developer working at Google. At Google, we've rolled Clang's diagnostics out to (essentially) all of our C++ developers, and we treat Clang's warnings as errors as well. As both a Clang developer and one of the larger users of Clang's diagnostics, I'll try to shed some light on these flags and how they can be used. Note that everything I'm describing is generically applicable to Clang, and not specific to C, C++, or Objective-C.

TL;DR version: Please use -Wall and -Werror at a minimum on any new code you are developing. We (the compiler developers) add warnings here for good reasons: they find bugs. If you find a warning that catches bugs for you, turn it on as well. Try -Wextra for a bunch of good candidates here. If one of them is too noisy for you to use profitably, file a bug. If you write code that contains an "obvious" bug but the compiler didn't warn about it, file a bug.

Now for the long version. First, some background on warning flag groupings. There are a lot of "groupings" of warnings in Clang (and to a limited extent in GCC). Some that are relevant to this discussion:

- On-by-default: These warnings are always on unless you explicitly disable them.
- -Wall: These are warnings that the developers have high confidence in, both in their value and in a low false-positive rate.
- -Wextra: These are warnings that are believed to be valuable and sound (i.e., they aren't buggy), but they may have high false-positive rates or common philosophical objections.
- -Weverything: This is an insane group that literally enables every warning in Clang. Don't use this on your code. It is intended strictly for Clang developers or for exploring what warnings exist.

There are two primary criteria mentioned above which guide where warnings go in Clang; let's clarify what they really mean. The first is the potential value of a particular occurrence of the warning. This is the expected benefit to the user (developer) when the warning fires and correctly identifies an issue with the code.

The second criterion is the idea of false-positive reports. These are situations where the warning fires on code, but the potential problem being cited does not in fact occur due to the context or some other constraint of the program. The code warned about is actually behaving correctly. These are especially bad when the warning was never intended to fire on that code pattern. Instead, it is a deficiency in the warning's implementation that causes it to fire there.

For Clang warnings, the value is required to be in terms of correctness, not in terms of style, taste, or coding conventions. This limits the set of warnings available, precluding oft-requested warnings such as warning whenever {}s are not used around the body of an if statement. Clang is also very intolerant of false positives. Unlike most other compilers, it will use an incredible variety of information sources to prune false positives, including the exact spelling of the construct, the presence or absence of extra '()', casts, or even preprocessor macros!

Now let's take some real-world example warnings from Clang, and look at how they are categorized. First, a default-on warning:

    % nl x.cc
         1  class C { const int x; };
    % clang -fsyntax-only x.cc
    x.cc:1:7: warning: class 'C' does not declare any constructor to initialize its non-modifiable members
    class C { const int x; };
          ^
    x.cc:1:21: note: const member 'x' will never be initialized
    class C { const int x; };
                        ^
    1 warning generated.

Here no flag was required to get this warning. The rationale is that this code is never really correct, giving the warning high value, and the warning only fires on code that Clang can prove falls into this bucket, giving it a zero false-positive rate.

    % nl x2.cc
         1  int f(int x_) {
         2    int x = x;
         3    return x;
         4  }
    % clang -fsyntax-only -Wall x2.cc
    x2.cc:2:11: warning: variable 'x' is uninitialized when used within its own initialization [-Wuninitialized]
      int x = x;
          ~   ^
    1 warning generated.

Clang requires the -Wall flag for this warning. The reason is that there is a non-trivial amount of code out there which has used (for good or ill) the code pattern we are warning about to intentionally produce an uninitialized value. Philosophically, I see no point in this, but many others disagree, and the reality of this difference in opinion is what drives the warning under the -Wall flag. It still has very high value and a very low false-positive rate, but on some codebases it is a non-starter.

    % nl x3.cc
         1  void g(int x);
         2  void f(int arr[], unsigned int size) {
         3    for (int i = 0; i < size; ++i)
         4      g(arr[i]);
         5  }
    % clang -fsyntax-only -Wextra x3.cc
    x3.cc:3:21: warning: comparison of integers of different signs: 'int' and 'unsigned int' [-Wsign-compare]
      for (int i = 0; i < size; ++i)
                      ~ ^ ~~~~
    1 warning generated.

This warning requires the -Wextra flag. The reason is that there are very large codebases where mis-matched sign on comparisons is extremely common. While this warning does find some bugs, the probability of the code being a bug when the user writes it is fairly low on average. The result is an extremely high false-positive rate. However, when there is a bug in a program due to the strange promotion rules, it is often extremely subtle, which makes this warning, when it does flag a bug, have relatively high value. As a consequence, Clang provides it and exposes it under a flag.

Typically, warnings don't live long outside of the -Wextra flag. Clang tries very hard to not implement warnings which do not see regular use and testing. The additional warnings turned on by -Weverything are usually warnings under active development or with active bugs. Either they will be fixed and placed under appropriate flags, or they should be removed.

Now that we have an understanding of how these things work with Clang, let's try to get back to the original question: what warnings should you turn on for your development? The answer is, unfortunately, that it depends. Consider the following questions to help determine what warnings work best for your situation.

- Do you have control over all of your code, or is some of it external?
- What are your goals? Catching bugs, or writing better code?
- What is your false-positive tolerance? Are you willing to write extra code to silence warnings on a regular basis?

First and foremost, if you don't control the code, don't try turning extra warnings on there. Be prepared to turn some off. There is a lot of bad code in the world, and you may not be able to fix all of it. That is OK. Work to find a way to focus your efforts on the code you control.

Next, figure out what you want out of your warnings. This is different for different people. Clang will try to warn without any options on egregious bugs, or code patterns for which we have long historical precedent indicating the bug rate is extremely high. By enabling -Wall you're going to get a much more aggressive set of warnings targeted at catching the most common mistakes that Clang developers have observed in C++ code. But with both of these the false-positive rate should remain quite low.

Finally, if you're perfectly willing to silence false positives at every turn, go for -Wextra. File bugs if you notice warnings which are catching a lot of real bugs, but which have silly or pointless false positives. We're constantly working to find ways to bring more and more of the bug-finding logic present in -Wextra into -Wall where we can avoid the false positives.

Many will find that none of these options is just right for them. At Google, we've turned some warnings in -Wall off due to a lot of existing code that violated the warning. We've also turned some warnings on explicitly, even though they aren't enabled by -Wall, because they have a particularly high value to us. Your mileage will vary, but will likely vary in similar ways. It can often be much better to enable a few key warnings rather than all of -Wextra.

I would encourage everyone to turn on -Wall for any non-legacy code. For new code, the warnings here are almost always valuable, and really make the experience of developing code better. Conversely, I would encourage everyone to not enable flags beyond -Wextra. If you find a Clang warning that -Wextra doesn't include but which proves at all valuable to you, simply file a bug and we can likely put it under -Wextra. Whether you explicitly enable some subset of the warnings in -Wextra will depend heavily on your code, your coding style, and whether maintaining that list is easier than fixing everything uncovered by -Wextra.

Of the OP's list of warnings (which included both -Wall and -Wextra), only the following warnings are not covered by those two groups (or turned on by default). The first group emphasizes why over-reliance on explicit warning flags can be bad: none of these are even implemented in Clang! They're accepted on the command line only for GCC compatibility.

    -Wbad-function-cast
    -Wdeclaration-after-statement
    -Wmissing-format-attribute
    -Wmissing-noreturn
    -Wnested-externs
    -Wnewline-eof
    -Wold-style-definition
    -Wredundant-decls
    -Wsequence-point
    -Wstrict-prototypes
    -Wswitch-default

The next bucket of unnecessary warnings in the original list are ones which are redundant with others in that list:

    -Wformat-nonliteral   -- subset of -Wformat=2
    -Wshorten-64-to-32    -- subset of -Wconversion
    -Wsign-conversion     -- subset of -Wconversion

There is also a selection of warnings which are more categorically different. These deal with language dialect variants rather than with buggy or non-buggy code. With the exception of -Wwrite-strings, these are all warnings for language extensions provided by Clang. Whether Clang warns about their use depends on the prevalence of the extension. Clang aims for GCC compatibility, and so in many cases it eases that with implicit language extensions that are in wide use. -Wwrite-strings, as commented on the OP, is a compatibility flag from GCC that actually changes the program semantics. I deeply regret this flag, but we have to support it due to the legacy it has now.

    -Wfour-char-constants
    -Wpointer-arith
    -Wwrite-strings

The remaining options which actually enable potentially interesting warnings are these:

    -Wcast-align
    -Wconversion
    -Wfloat-equal
    -Wformat=2
    -Wimplicit-atomic-properties
    -Wmissing-declarations
    -Wmissing-prototypes
    -Woverlength-strings
    -Wshadow
    -Wstrict-selector-match
    -Wundeclared-selector
    -Wunreachable-code

The reason that these aren't in -Wall or -Wextra isn't always clear. For many of these, they are actually based on GCC warnings (-Wconversion, -Wshadow, etc.) and as such Clang tries to mimic GCC's behavior. We're slowly breaking some of these down into more fine-grained and useful warnings. Those then have a higher probability of making it into one of the top-level warning groups. That said, to pick on one warning, -Wconversion is so broad that it will likely remain its own "top level" category for the foreseeable future. Some other warnings which GCC has, but which have low value and high false-positive rates, may be relegated to a similar no-man's-land.

Other reasons why these aren't in one of the larger buckets include simple bugs, very significant false-positive problems, and in-development warnings. I'm going to look into filing bugs for the ones I can identify. They should all eventually migrate into a proper large bucket flag or be removed from Clang.

I hope this clarifies the warning situation with Clang and provides some insight for those trying to pick a set of warnings for their own use, or for their company's use.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/122608", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/4204/" ] }
122,710
I'm trying to refactor my application into MVC, but I'm stuck on the M part. In a database-backed app, the model is implemented in the app code, right? But then, what is in the database -- is that not also the model? (I'm not using the database as a simple object store -- the data in the DB is an enterprise asset).
Yeah, both the model in the code and the database are the "Model". The model has to do with what your application "IS", and the controller is what it "does". Any code dealing with direct persistence to the database is considered the Model. Note: MVC is a pattern, so don't over-think it. It's easy to get all super into doing MVC the right way, but at the end of the day, it's just a mindset! It means keep your business logic out of the database and UI - that's it. Before MVC, people would put business logic all up in their webpages when it should be on the server, or they would have a bunch of scripts firing in the database doing business logic right along with the persistence code. MVC was brought about to get people to start thinking in a way that helps make their code reusable, so don't get caught up in the details too much.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/122710", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/-1/" ] }
122,740
Velocity Template Language's set directive requires a variable on the left-hand side. This doesn't work: #set ( $entries.add("d") ) Even though I have no use for the return value of add("d"), I have to assign it to a variable: #set ( $x = $entries.add("d") ) I want to communicate that the variable is necessary, but I have no intention of using it later. Is there a convention for naming variables that only exist to appease the compiler?
I tend to use dummy for this kind of situation (a variable that I must have though I don't need to use).
{ "source": [ "https://softwareengineering.stackexchange.com/questions/122740", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/4049/" ] }
122,788
So my professor was giving back some feedback on a project I've been working on. He docked a few marks for this code: if (comboVendor.SelectedIndex == 0) { createVendor cv = new createVendor(); cv.ShowDialog(); loadVendors(); } This is in a combobox "index changed" handler. It's used when the user wants to create a new vendor, my top option (index 0, that never changes) opens the "Create a new Vendor" dialog. So the content of my combo box ends up looking like this: Create New Vendor... Existing Vendor Existing Vendor 2 Existing Vendor 3 His problem is with the first line code: if (comboVendor.SelectedIndex == 0) He claims that the 0 should be a constant, and actually docked me marks because of that. He claims I shouldn't use literals in my code at all. The thing is, I don't understand why I would want to make that code in that situation a constant. That index will never change, nor is it something that you would need to tweak. It seems like a waste of memory to keep a single 0 in memory that's used for a very specific situation and never changes.
The actual correct way of doing this in C# is to not rely on the ordering of the ComboItems at all.

public partial class MyForm : Form
{
    private readonly object VENDOR_NEW = new object();

    public MyForm()
    {
        InitializeComponents();
        comboVendor.Items.Insert(0, VENDOR_NEW);
    }

    private void comboVendor_Format(object sender, ListControlConvertEventArgs e)
    {
        e.Value = (e.ListItem == VENDOR_NEW ? "Create New Vendor" : e.ListItem);
    }

    private void comboVendor_SelectedIndexChanged(object sender, EventArgs e)
    {
        if (comboVendor.SelectedItem == VENDOR_NEW)
        {
            // Special logic for selecting "create new vendor"
        }
        else
        {
            // Usual logic
        }
    }
}
{ "source": [ "https://softwareengineering.stackexchange.com/questions/122788", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/41989/" ] }
123,023
I heard about Domain Driven Development from a developer in the area. He talked it up like it was just about the silver bullet to changing requirements. I read the wiki . Still not too clear. What is "3D" in practical terms? Is it really that amazing that now UML class diagramming is just obsolete?
Well, first of all, I don't think that the Wikipedia article you refer to is very good, mostly because it references a bunch of things that are only ancillary to Domain Driven Design and does little to enlighten anyone about the practice. But, as someone who has taken Domain Driven Design to heart (which usually goes by DDD, rather than 3D, for what it's worth), I always felt the fundamentals of DDD are obvious, if you read so much as the first chapter of Eric Evans' book. But it is a set of patterns and practices, so it's not that easy to give a 3 sentence summary of what it is and what the advantages are without going into some detail. Which details resonate with any one person might be very different, too; it's probable that 10 years ago I wouldn't have seen the point at all, myself. DDD is not a silver bullet. When done sensibly, it's about taking a craftsman-like approach to building software, and recognizing the need to reduce the cognitive friction between development teams and the businesses they are building software for. One of the most important practices is to have a layer in which the domain vocabulary used by the software team and the business team matches as closely as possible. You build this layer iteratively as you come to understand the business problem that you are trying to solve. When business logic is sensibly encoded in this layer, isolated from all the convoluted dependencies that enterprise applications typically have by factoring interactions with those systems out to interfaces, the language used in the actual domain layer eventually becomes fairly concise, obvious, and readable. When you review your model with the business, they will often correct you, and you gradually refactor to a deeper understanding of the domain. Considering the shape that I've seen most enterprise software in, in practice, DDD may sound like a silver bullet, because most enterprise software has such poor separation of concerns that it's nearly untestable, and the software team lives in great fear of change because they have no idea what the side effects of ostensibly even trivial code changes might be, whereas a properly factored domain layer will be independently testable and verifiable. But actually, DDD acknowledges that systems rarely exist in isolation. DDD includes coping patterns for legacy systems (Anti-corruption layer, bounded contexts, to name a couple). If you practice object-oriented design, including the discipline of loose coupling, and you practice unit testing fairly religiously, and you mercilessly refactor code, and you work with domain experts while building your system, essentially you'll end up with a result that's basically what advocates of domain driven design are talking about. There are a few specific patterns described in Evans's book that apply mostly to enterprise software development, and some that are fairly universal principles, but essentially, DDD is a pragmatic approach to software development that can, over time, reduce the buildup of technical debt, and make your customers happier because you are able to speak the same language with each other, and deliver better-working solutions because of the advantages of understanding each other better.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/123023", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/11107/" ] }
123,071
I was reading Code Complete and in the chapter on layout and style, he was predicting that code editors would use some sort of rich text formatting. That means, instead of code looking like this:

Procedure ResolveCollisions
    { Performs a posteriori collision resolution through spatial partitioning algorithm }
    ( CurrentMap : SpriteContext, PotentialColliders: SpriteList )
var
    Collider : Sprite,
    Collidee : Sprite,
    Collision : SpriteCollision
begin
    DoStuff();
end.

it could look something like this:

Procedure ResolveCollisions
Performs a posteriori collision resolution through spatial partitioning algorithm

Parameters
    CurrentMap : SpriteContext
    PotentialColliders : SpriteList

Local Variables
    Collider : Sprite
    Collidee : Sprite
    Collision : SpriteCollision

    DoStuff();

I've seen syntax coloring and highlighting and even parentheses coloring, but nothing that looked like this in actual code. I was wondering if this sort of thing actually ever existed, or perhaps if it was decided that it didn't have enough benefit or that it was an entirely bad idea. Have any of you seen richly-formatted code like this before, or know if the idea was ever considered and eventually rejected?
There is no technical reason that you couldn't. If text editors can do syntax highlighting, they could just as easily change other aspects of the display to highlight code. However, it's one thing to have whatever is being typed change colors as the editor figures out what you are typing. Having the text suddenly change sizes and jump around while you are typing would get really obnoxious. However, for a 'static' code display, you could easily beautify source code. For example take any halfway decent source->html converter, and add whatever font sizes and styles you like to the stylesheets, and you'll have rich formatted code.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/123071", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/21698/" ] }
123,074
When I was originally learning about SQL I was always told, only use triggers if you really need to, and opt to use stored procedures instead if possible. Now unfortunately at the time (a good few years ago) I wasn't as curious and caring about fundamentals as I am now, so I never did ask the reason why. What's the community's opinion on this? Is it just someone's personal preference, or should triggers be avoided (just like cursors) unless there is a good reason for them?
The Wikipedia article on database triggers presents a good overview of what triggers are and when to use them in different databases. The following discussion is based on SQL Server only. Using triggers is quite valid when their use is justified. For example, they have good value in auditing (keeping history of data) without requiring explicit procedural code with every CRUD command on every table. Triggers give you control just before data is changed and just after the data is changed. This allows for:

- Auditing, as mentioned before
- Validation and business security checking, if so desired

Because of this type of control, you can do tasks such as column formatting before and after inserts into the database.

I was always told, only use triggers if you really need to and opt to use stored procedures instead if possible. Maybe some of the reasons for this are:

- Some functions that triggers used to do in the old days can now be performed in other ways, such as updating totals and automatic calculation on a column.
- You don't see where a trigger is invoked by examining the code alone; you have to know it exists. You see its effect when you see the data change, and it is sometimes puzzling to figure out why the change occurred unless you know there is one or more triggers acting on the table(s).
- If you use several database controls such as CHECK, RI, and triggers on several tables, the detailed flow of your transactions becomes complex to understand and maintain. You will need to know exactly what happens when. Again, you will need good documentation for this.

Some differences between triggers and non-trigger stored procedures are (amongst others):

- A non-trigger stored procedure is like a program that has to be invoked explicitly, either from code or from a scheduler or from a batch job, etc., to do its work, whereas a trigger is a special type of stored procedure that fires in response to an event rather than being directly executed by the user. The event may be a change of data in a data column, for example.
- Triggers have types: DDL triggers and DML triggers (of types INSTEAD OF, FOR, and AFTER).
- Non-trigger stored procedures can reference any type of object; however, to reference a view, you must use INSTEAD OF triggers.
- In SQL Server, you can have any number of non-trigger stored procedures but only 1 INSTEAD OF trigger per table.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/123074", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/42080/" ] }
123,121
I recently learned of a program called Total Commander. It's a Windows Explorer replacement and has its own routines to copy files. To check whether the files are identical, instead of calculating a CRC, it literally checks every single byte, one at a time, on both the original and the copy. My question is: Is this necessary? Can CRC or any other such technique go wrong? Should you, as a programmer, try and implement this perfect but slow system, or is it too extreme?
Calculating CRCs (or, better, sha1sums) on both files requires reading every byte anyway. If you do a byte-by-byte comparison, you can quit as soon as you see a mismatch -- and you don't have to worry about two different files that happen to have the same checksum (though that's vanishingly unlikely for sha1sum in the absence of deliberate collisions, which became feasible after I originally wrote this answer). So if you're doing the comparison locally, a byte-by-byte comparison will be at least as fast as a checksum comparison (unless you've already computed the checksums anyway). On the other hand, checksum comparisons are useful when you're comparing files that aren't on the same machine; the checksums can be computed locally, and you don't have to transfer the entire content over the network. Hybrid approaches are also possible. For example, you might compute and compare checksums for the two files a chunk at a time, which can avoid reading the whole files (if they differ) while also avoiding transmitting the whole file across the network. The rsync protocol does something like this. Note that using a simple CRC gives you a fair chance of a collision, as Dave Rager mentioned in his answer. Use at least sha1sum, or even something more recent. (Don't try to invent your own hashing algorithm; the folks who developed sha1sum know far more about this stuff than either of us.) As for the likelihood of collision, if you use a decent hash like sha1sum you pretty much don't have to worry about it, unless someone is deliberately and expensively constructing files whose sha1sums collide (generating such collisions was not feasible when I first wrote this, but progress is being made). Quoting Scott Chacon's "Pro Git", section 6.1: Here's an example to give you an idea of what it would take to get a SHA-1 collision. If all 6.5 billion humans on Earth were programming, and every second, each one was producing code that was the equivalent of the entire Linux kernel history (1 million Git objects) and pushing it into one enormous Git repository, it would take 5 years until that repository contained enough objects to have a 50% probability of a single SHA-1 object collision. A higher probability exists that every member of your programming team will be attacked and killed by wolves in unrelated incidents on the same night. Summary: Byte-by-byte comparison is good for local comparisons. sha1sum is good for remote comparison, and presents no significant chance of accidental false positives. And there are newer checksum algorithms that are (so far) less vulnerable to deliberate collisions than SHA-1 is. I won't be too specific to avoid having this answer become obsolete yet again in a few years.
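For illustration, here is a minimal C++ sketch of the early-exit property that makes a local byte-by-byte comparison at least as fast as hashing both files; the function name and chunk size are my own choices, not taken from any particular tool:

#include <algorithm>
#include <fstream>
#include <string>
#include <vector>

bool filesIdentical(const std::string& pathA, const std::string& pathB) {
    std::ifstream a(pathA, std::ios::binary), b(pathB, std::ios::binary);
    if (!a || !b) return false;                      // treat unreadable files as "not identical"
    const std::streamsize kChunk = 64 * 1024;
    std::vector<char> bufA(kChunk), bufB(kChunk);
    while (true) {
        a.read(bufA.data(), kChunk);
        b.read(bufB.data(), kChunk);
        const std::streamsize na = a.gcount(), nb = b.gcount();
        if (na != nb) return false;                  // lengths differ
        if (!std::equal(bufA.begin(), bufA.begin() + na, bufB.begin()))
            return false;                            // quit at the first mismatching chunk
        if (na == 0) return true;                    // both streams exhausted
    }
}

A checksum-based comparison, by contrast, has to read both files to the end before it can say anything, which is why it only pays off when the files live on different machines and only the digests travel over the network.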
{ "source": [ "https://softwareengineering.stackexchange.com/questions/123121", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/42097/" ] }
123,144
I'm developing a NES emulator as a hobby, in my free time. I use C++ because it is the language I use most, know best and like most. But now that I've made some progress on the project, I realize I'm hardly using any C++-specific features, and could have done it in plain C and gotten the same result. I don't use templates, operator overloading, polymorphism, inheritance. So what would you say: should I stay with C++ or rewrite it in C? I won't do this to gain performance - it could come as a side effect - but the idea is: why should I use C++ if I don't need it? The only feature of C++ I'm using is classes to encapsulate data and methods, but that can be done as well with structs and functions; I'm using new and delete, but could as well use malloc and free; and I'm using inheritance just for callbacks, which could be achieved with pointers to functions. Remember, it's a hobby project and I have no deadlines, so the extra time and work a rewrite would require are not a problem - it might even be fun. So, the question is: C or C++?
You aren't using it now, but the next time you leak memory or get a double delete, you'll be begging to come back to std::vector<T>, std::unique_ptr<T, Del> and std::shared_ptr<T>, which can solve those problems easily - almost trivially. That's what happens to everyone who uses C over C++, eventually, and the smarter ones just don't wait for the bugs to pop up before moving over. Code that uses new and delete directly doesn't really belong in C++; it belongs in that kind of halfway house that we refer to as "C with Classes". That's where the language was circa 1985. It's not particularly similar to C++, circa 2011. In all likelihood, wherever you learned C++ simply didn't teach it very well - something that is unfortunately rather common - and with a better education, you would find use for these features. Specifically, as I listed above, C++'s generic data structures and resource-managing classes are fundamentally superior to anything C has to offer. If you want a dynamically allocated array, then use std::vector<T>. That's a pretty common use case. If you aren't using them, then your code is at huge risk of error unnecessarily - especially where resource management is concerned. C++ can guarantee safety and re-use code in a way that C can never touch. However, I think that you also might be expecting too much. Writing templates and operator overloads is not common for library consumers. If your code uses std::vector<T>, you don't need to write a template to make that happen. If your code uses std::string, nobody is forcing you to overload your operators. You only have to do those things to write std::vector<T> and std::string - but you can still take full advantage of them. Polymorphism/inheritance also only has a specific use case. If your code happens to not require you to write any templates or use virtual functions, then it doesn't, and there are programs or segments of programs where you don't need to write your own templates. Also, there's no gain in performance in C over C++.
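As a sketch of what the standard library buys you in practice (the Sprite type and the function names are invented for illustration, not taken from the OP's emulator):

#include <cstddef>
#include <memory>
#include <vector>

struct Sprite { int x = 0, y = 0; };

void manualStyle(std::size_t n) {
    Sprite* sprites = new Sprite[n];          // you must remember delete[] on every exit path
    // ... use sprites; an early return or an exception here leaks ...
    delete[] sprites;
}

void raiiStyle(std::size_t n) {
    std::vector<Sprite> sprites(n);           // released automatically, even if something throws
    auto owner = std::make_unique<Sprite>();  // single object, no delete anywhere
    // ... use sprites and owner ...
}                                             // both freed here: no leaks, no double deletes

In optimized builds the second version typically costs nothing extra over the manual one; it just removes the places where the leak and double-delete bugs can happen.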
{ "source": [ "https://softwareengineering.stackexchange.com/questions/123144", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/16586/" ] }
123,218
I've been putting time into learning functional programming and I've come to the part where I want to start writing a project instead of just dabbling in tutorials/examples. While doing my research, I've found that Erlang seems to be pretty powerful when it comes to writing concurrent software (which is my goal), but resources and tools for development aren't as mature as Microsoft development products. F# can run on Linux (Mono) so that requirement is met, but while looking around on the internet I cannot find any comparisons of F# vs Erlang. Right now, I am leaning towards Erlang just because it seems to have the most press, but I am curious if there is really any performance difference between the two systems. Since I am used to developing in .NET, I can probably get up to speed with F# a lot faster than Erlang, but I cannot find any resource to convince me that F# is just as scalable as Erlang. I am most interested in simulation, which is going to be firing a lot of quickly processed messages to persistent nodes. If I have not done a good job explaining what I am trying to ask, please ask for clarification.
What do you mean by "viable?" "Having the most press" is not necessarily the best way to choose a language. Erlang's claim to fame is its capability of massive parallelization. That's why it's commonly used in Ericsson phone switches. Erlang is soft-realtime, so you can make certain performance guarantees about it. F# benefits from the optimization capabilities of the .NET Jitter. In addition, the language itself is designed to be a high-performing functional language (it being a variant of OCaml, widely used in the financial industry because of its speed). Ultimately, unless you plan on running millions of tiny agents at the same time (which is what Erlang is optimized for), F# should be up to the task. This page explains the appropriate use cases for Erlang.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/123218", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/34841/" ] }
123,293
For someone who doesn't have much real world experience yet, the notion of maintainable code is a bit vague, even though it follows from typical good practice rules. Intuitively I can tell that code can be well written, but not maintainable if for example it hard-wires information that is subject to constant change, but I still have a hard time looking at code and deciding if it's maintainable or not. So the question is: what hurts maintainability? What should I look for?
There are many aspects to maintainability, but IMO the most important are loose coupling and high cohesion. Essentially, when you're working with good code, you're able to make small changes here and there without having to keep the whole codebase in your head. With bad code, you would have to take more things into account: fix here, and it breaks elsewhere. When you have 100+ kLOC of code in front of you, this is crucial. Oft-mentioned "good practice" rules like code formatting, commenting, variable naming, etc. are superficial compared to coupling/cohesion issues. The trouble with coupling/cohesion is that it's not easy to measure or see quickly. There are some academic attempts to do that, and maybe some static code analysis tools, but nothing I know of that would immediately give a reliable estimate of how hard a time you're going to have with the code.
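A tiny illustration of the coupling point, sketched here in C++ with invented names (the idea is the same in any language): the loosely coupled version is the one you can change, or test, without re-reading everything that depends on it.

#include <string>

// Tightly coupled: every change to, or test of, the report writer drags in the concrete database.
class MySqlDatabase {
public:
    std::string query(const std::string& sql);     // defined elsewhere; declaration only for the sketch
};

class TightReportWriter {
    MySqlDatabase db_;                              // hard-wired dependency
public:
    std::string build() { return db_.query("SELECT ..."); }
};

// Loosely coupled: the writer only knows a small interface, so the storage can change
// (or be faked in a test) without touching this class at all.
class DataSource {
public:
    virtual ~DataSource() = default;
    virtual std::string query(const std::string& sql) = 0;
};

class LooseReportWriter {
    DataSource& source_;
public:
    explicit LooseReportWriter(DataSource& s) : source_(s) {}
    std::string build() { return source_.query("SELECT ..."); }
};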
{ "source": [ "https://softwareengineering.stackexchange.com/questions/123293", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/92/" ] }
123,305
Regarding source folder hierarchy, there are always some common features, such as the src, doc or test folders, which have rather easy-to-understand contents. However, I realized that big projects have both lib and vendor folders, while I had always thought they were the same, as their names hint at including "third-party libraries from external vendors". Seeing both in the same project, though, means there is a difference. I couldn't find any information, either on Google or in sources such as the Filesystem Hierarchy Standard, even though this is actually a somewhat common practice. Here is a more detailed example with Symfony: once you create a project, you get a lib folder at the root of your project. In this folder, the following structure is found:

lib
+--filter
+--form
+--…
+--vendor
   +--simpletest
   +--symfony

Here, the symfony folder contains all of Symfony's core.
When I see a lib or libraries directory, I think of:

- Libraries, not plugins, modules, etc.
- OOP instead of procedural, where that's applicable (i.e. PHP)

When I see a vendor directory, I think of:

- Libraries, plugins, modules, components, etc. Not just libraries, but anything that's provided by a third party. And stuff that's not code, like an icon set.

When I see lib and vendor directories, I think of a few distinctions:

- lib holds only libraries; vendor may hold anything, really.
- lib is where I should put my libraries; vendor is where I should put anything third party (including code by the original author).
- lib is where libraries by the original author of the project are located (if that's not me), whereas vendor is where the original author put third-party anything. You can safely assume that whatever is in lib is licensed under the same license as the rest of the project.

Whichever one of the above applies is reason enough to have different folders. AFAIK there is no generally accepted practice. Some communities have community-wide common practices, but that's just about it. As for the specific Symfony example: Symfony is a framework and I think what the developers are trying to say is that in a Symfony application the framework's core libraries are vendor code, i.e. coming from a third party and not from the original author of the application (you).
{ "source": [ "https://softwareengineering.stackexchange.com/questions/123305", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/38485/" ] }
123,317
I am starting a project with the following technical environment: .NET 4.0, Entity Framework 4.0, WPF with MVVM architecture. I saw lots of examples on the net, and some books, using this environment. In some of the examples the authors had this idea:

- The ViewModel holds an instance of the Model class (an Entity Framework entity, e.g. Person).
- Bind the WPF view controls to the properties of the Model.

While other authors did this:

- The ViewModel exposes all the properties of the Model.
- Bind the WPF view controls to the properties of the ViewModel rather than to the Model directly.

So is it a good idea to let the view bind to properties of the Model directly, rather than have the ViewModel expose its own? Which is preferred?
I think a lot of programmers first try to take the shortcut of binding directly to the model, but in my experience this has some major drawbacks. The primary problem is that if your entity model is persisted by NHibernate or similar, then as soon as the View updates a model property, NHibernate could persist those changes to the database. That doesn't work well for edit screens that have a Save/Cancel button. In actual fact, it may choose to wait and persist everything as a batch, but the idea is that when you change the model, you're committing your change. So, you could still get away with binding directly to model properties on read-only screens, but then you're going to have an inconsistency. Additionally, most models don't implement INotifyPropertyChanged, so they may not be suitable binding targets if the state of the screen changes after the initial display. Given the ease of auto-properties, I suggest always binding the View to the ViewModel, not to the Model. It's consistent, simple, and gives you the most flexibility to support changes in the future.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/123317", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/38347/" ] }
123,432
Recently, I was developing a set of coding standards for our company. (We're a new team branching out into a new language for the company.) On my first draft, I set the purpose of our coding standards as improving Readability, Maintainability, Reliability, and Performance. (I ignored writability, portability, cost, compatibility with previous standards, etc.) One of my goals while writing this document was to push through the idea of simplicity of code. The idea was that there should be only one function call or operation per line. My hope was that this would increase readability. It's an idea that I carried over from our previous language. However, I've questioned the assumption behind this push: Does simplicity always improve readability? Is there a case where writing simpler code decreases readability? It should be obvious, but by "simpler", I don't mean "easier to write", but less stuff going on per line.
"Simple" is an overused word. "Readable" can profitably be defined as "simple to understand", in which case increasing (this measure of) simplicity by definition increases readability, but I don't think this is what you mean. I've written about this elsewhere , but generally something can be called "simpler" either by being more abstract (in which case fewer concepts can express more phenomena) or by being more concrete (in which case a concept does not require as much background knowledge to understand in the first place). I'm arguing that, depending on perspective, a more abstract concept can reasonably be called simpler than a more concrete concept, or vice versa . This, even though "abstract" and "concrete" are antonyms. I'll use as an example some Haskell code I wrote a while ago. I asked a question on stackoverflow about using the List monad to calculate a counter in which each digit could have a different base. My eventual solution (not knowing much Haskell) looked like: count :: [Integer] -> [[Integer]] count [] = [[]] count (x:xs) = -- get all possible sequences for the remaining digits let remDigits :: [[Integer]] remDigits = count xs in -- pull out a possible sequence for the remaining digits do nextDigits <- remDigits -- pull out all possible values for the current digit y <- [0..x] -- record that "current digit" : "remaining digits" is -- a valid output. return (y:nextDigits) One of the answers reduced this to: count = mapM (enumFromTo 0) Which of these is "simpler" to understand (i.e. more readable) depends entirely on how comfortable the reader has become with (abstract) monadic operations (and, for that matter, point-free code). A reader who's very comfortable with these abstract concepts will prefer to read the (short) more abstract version, while one who is not comfortable with those operations will prefer to read the (long) more concrete version. There is no one answer about which version is more readable that will hold for everybody.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/123432", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/38547/" ] }
123,436
While I agree that catching ... without rethrowing is indeed wrong, I however believe that using constructs like this:

try
{
    // Stuff
}
catch (...)
{
    // Some cleanup
    throw;
}

is acceptable in cases where RAII is not applicable. (Please, don't ask... not everybody in my company likes object-oriented programming and RAII is often seen as "useless school stuff"...) My coworkers say that you should always know what exceptions are to be thrown and that you can always use constructs like:

try
{
    // Stuff
}
catch (exception_type1&)
{
    // Some cleanup
    throw;
}
catch (exception_type2&)
{
    // Some cleanup
    throw;
}
catch (exception_type3&)
{
    // Some cleanup
    throw;
}

Is there a widely accepted good practice regarding these situations?
My coworkers say that you should always know what exceptions are to be thrown [...] Your coworkers, I hate to say it, have obviously never worked on general-purpose libraries. How in the world can a class like std::vector even pretend to know what the copy constructors will throw, while still guaranteeing exception safety? If you always knew what the callee would do at compile-time, then polymorphism would be useless! Sometimes the entire goal is to abstract away what happens at a lower level, so you specifically don't want to know what's going on!
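To make that point concrete, here is a small illustrative sketch (the names and the guard class are invented, not the OP's code): a generic function simply cannot enumerate the exception types of an unknown T, so cleanup has to be written either with catch (...) or, preferably, with an RAII guard.

#include <cstdio>
#include <vector>

// A tiny RAII guard; its destructor runs on normal exit and during stack unwinding.
class FileGuard {
public:
    explicit FileGuard(const char* path) : f_(std::fopen(path, "a")) {}
    ~FileGuard() { if (f_) std::fclose(f_); }
    FileGuard(const FileGuard&) = delete;
    FileGuard& operator=(const FileGuard&) = delete;
private:
    std::FILE* f_;
};

template <typename T>
void logAndStore(std::vector<T>& out, const T& value) {
    FileGuard log("audit.log");   // closed no matter what the line below throws
    out.push_back(value);         // may throw std::bad_alloc, or whatever T's copy constructor throws
}

Writing a catch clause per exception type here is impossible, because T's exceptions are unknowable at this point; that is exactly the situation general-purpose code is in.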
{ "source": [ "https://softwareengineering.stackexchange.com/questions/123436", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/13384/" ] }
123,449
I have been developing Windows applications in C++ for like 10 years now. And recently I've started digging into some Linux projects, and I can't stand how unproductive I am... I'm a fast learner, and I've been using Linux as a primary platform for some time now. And I do feel very comfortable with the shell, OS principles and the GUI. But when it comes to development, it feels like I'm back in school. As soon as I open some larger project, I'm stuck. Most of them are makefile based, so basically when I try to navigate them with Qt or Code::Blocks, at best, I can use intellisense on a per-file basis. And most of the time variables leak from scope. Then there is the go-to-definition stuff, which seems nonexistent; try to join some larger project from SourceForge and you're stuck for days, because navigating to definitions is so hard... grep -r "this_def" . --include "*.cpp" --include "*.h" seems so slow and clumsy. And then there's the debugging: gdb does work, but no matter what I do, it seems like it's light years behind WinDbg or the Visual Studio debugger. And these things are making me desperate; I want to write code, but it just goes so slowly... I'm starting to think that Linux developers learn function definitions by heart and analyze code by eye, but I can't believe that's so. Has anyone gone through this? Is there something that I'm missing that could make me more productive?
Interestingly enough, I periodically have the same problem in the opposite direction. I'm primarily a UNIX coder, but I periodically have to port stuff to Windows. I can't tell you the number of times I've wanted to pull my hair out because I can't find the appropriate check box for a compiler option buried in one of 35 preference setting pages for a project. I'd rather just open up the proj file and add the XML myself. Moving in either direction, the secret is to have patience, and learn the tool set for the platform you are trying to work in. Of course you are going to be frustrated; it's new, and it's unfamiliar, and you are reduced to newbie status all over again. There is no way to avoid this. In your particular case there are some additional tools you should be aware of. The first is DDD, a GUI front end for gdb. It's not as slick as Visual Studio, but it will hold your hand. However, I'd really recommend biting the bullet and setting about learning the ins and outs of gdb. In truth, if you are a regular user, there isn't a lot of difference between memorizing which commands to type vs memorizing which dialog box you have to bring up to change a setting. You also need to know about tools like CScope and CTags. As much as you may resist, I would suggest learning VIM or EMACS. They integrate well with the tag tools I just mentioned. When in Rome, do as the Romans do. You can find extensions for VIM and EMACS that will do code completion for you. My own experience with tools that offer code completion is that yes, it does save some typing, but in general typing is easy. Thinking is what's hard. Your opinion may differ, particularly if you have carpal tunnel syndrome. As for make: Make is admittedly horrible, but you're probably just going to have to suck it up and learn it.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/123449", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/8029/" ] }
123,495
I am a web developer and have experience developing several web applications in PHP. I have an idea for developing a product for myself and decided to use an MVC-based framework, because I really like the idea of MVC and how one can easily manage and modify the application without any difficulty. I chose Zend Framework and it seems more difficult than learning a new programming language. There are so many things going on at one time, even to run a small application. Similarly, the idea of routing is very complex, as it is new for a core programmer. I know that the guys here read thousands of such questions like the one I am asking, but I am not looking to learn Zend Framework overnight. I am willing to give it as much time as it needs, but until now it's making no sense to me. There are thousands of classes in the Zend library, but how would a noob know where to use a specific class and how to use it? I am still finding it very difficult to understand the bootstrap of Zend Framework and its mapping. I read the manual, follow it, and things start working, but I don't know exactly how they are actually happening. I also still have no clue how models, views and controllers work together and how to plan an application in Zend Framework. When it comes to core PHP, I have an exact idea in my mind of what to do and can easily translate it into code, but in Zend Framework I don't know how to translate my idea.
Zend Framework is hard. It wasn't built as an entry-level framework; knowledge of the concepts involved is assumed 1 . That said, the first requirement for Zend Framework 2.0 is to make it a little bit easier:

Ease the learning curve
In late 2009, we did a survey of framework users to determine what they use, what environments they use, and what their needs are. The top issue, bar none, was the difficulty of learning the framework. Some of these issues include:
- Difficulty in the "first hour" with the framework.
- Uncertainty about the "next steps" following the quick start.
- Inconsistent APIs in the source code itself. One component may use "plugins," another "helpers," and yet another "filters."
- Uncertainty about where extension points exist, and how to program for them.
- Confusion over whether they can use Zend Framework only as an MVC stack or as individual components.

So it's not just you, it's hard for everyone - read the whole wiki page, there are quite a few things that are identified as unnecessarily complex. But even if the above requirement is fulfilled, it still won't become an entry-level framework, meaning that it's not a framework you should be learning on, but one that you should be using when you've actually understood the concepts involved. Since you are still learning, it would be a lot more valuable to build your own MVC architecture. Rasmus Lerdorf's notorious 2 "The no-framework PHP MVC framework" blog post gives a very simple and clean example of MVC through procedural PHP, without any framework or other third-party library involved. But if you really want to learn with a framework, you should consider a micro framework instead of a full-blown one. Slim has a very small, clean and thoroughly tested code base and it should be ideal for learning. I haven't played around with any other micro framework; you should do your own research and decide which one is better for you. And for a quick and dirty introduction to routing, see my answer to this question. It's not a very hard concept to grasp, but Zend Framework does make it look like a lot more than it actually is.

1 The best description I've read for ZF is that it's a framework-building framework, not an application framework. Its raw power and extreme list of features aren't suitable for small to medium websites. Unfortunately I can't really find where I read that.
2 Read the disclaimer at the top of the blog post.

Update, inspired by @Karpie's comment: A framework is not supposed to be hard; the whole point of a framework is to make things easier. It's possible that even with a firm grasp of the concepts involved, ZF is not a good fit for you. There are a lot of subjective factors involved when choosing a framework, and unless every other framework lacks functionality you absolutely need - and can't write on your own - you should avoid ZF and use a framework that feels more natural to you. If you know the concepts, the framework shouldn't be getting in the way.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/123495", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/42251/" ] }
123,515
If someone writes code so that an internal variable $_fields is accessible without using getter/setter methods, is there a proper term used to describe that? Something polite enough to use with management :)
Exposing one's private members is never a good thing in polite society... This practice reflects a lack of (or poor) encapsulation.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/123515", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/40473/" ] }
123,627
I’ve been hearing about the London style vs. Chicago style (sometimes called Detroit style) of Test Driven Development (TDD). Workshop of Utah Extreme Programming User's Group: Interaction-style TDD is also called mockist-style , or London-style after London's Extreme Tuesday club where it became popular. It is usually contrasted with Detroit-style or classic TDD which is more state-based. Jason Gorman's workshop : The workshop covers both the Chicago school of TDD (state-based behaviour testing and triangulation), and the London school , which focuses more on interaction testing, mocking and end-to-end TDD, with particular emphasis on Responsibility-Driven Design and the Tell, Don't Ask approach to OO recently re-popularized by Steve Freeman's and Nat Pryce's excellent Growing Object-Oriented Software Guided By Tests book. The post Classic TDD or "London School"? by Jason Gorman was helpful, but his examples confused me, because he uses two different examples instead of one example with both approaches. What are the differences? When do you use each style?
Suppose you have a class called "ledger" with a method called "calculate" that uses a "Calculator" to do different types of calculations depending on the arguments passed to "calculate", for example "multiply(x, y)" or "subtract(x, y)". Now, suppose you want to test what happens when you call ledger.calculate("5 * 7"). The London/Interaction school would have you assert whether Calculator.multiply(5,7) got called. The various mocking frameworks are useful for this, and it can be very useful if, for example, you don't have ownership of the "Calculator" object (suppose it is an external component or service that you cannot test directly, but you do know you have to call it in a particular way). The Chicago/State school would have you assert whether the result is 35. The jUnit/nUnit frameworks are generally geared towards doing this. Both are valid and important tests.
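Here is a compact, self-contained sketch of the two styles, written in C++ with a hand-rolled spy instead of a mocking framework; the class and method names follow the example above, but the implementation is invented purely for illustration:

#include <cassert>
#include <string>

struct Calculator {
    virtual ~Calculator() = default;
    virtual int multiply(int x, int y) { return x * y; }
};

// The London style needs a test double that records the interaction.
struct SpyCalculator : Calculator {
    bool multiplyCalled = false;
    int lastX = 0, lastY = 0;
    int multiply(int x, int y) override {
        multiplyCalled = true; lastX = x; lastY = y;
        return Calculator::multiply(x, y);
    }
};

class Ledger {
public:
    explicit Ledger(Calculator& calc) : calc_(calc) {}
    int calculate(const std::string& expr) {
        // Deliberately naive "a * b" parsing, just enough for the example.
        const auto star = expr.find('*');
        return calc_.multiply(std::stoi(expr.substr(0, star)),
                              std::stoi(expr.substr(star + 1)));
    }
private:
    Calculator& calc_;
};

int main() {
    // Chicago/state style: assert on the observable result.
    Calculator real;
    Ledger ledger(real);
    assert(ledger.calculate("5 * 7") == 35);

    // London/interaction style: assert that the collaborator was used as expected.
    SpyCalculator spy;
    Ledger spiedLedger(spy);
    spiedLedger.calculate("5 * 7");
    assert(spy.multiplyCalled && spy.lastX == 5 && spy.lastY == 7);
    return 0;
}

With a real mocking framework the spy class disappears and the interaction check becomes a one-line expectation, but the distinction between the two assertions stays exactly the same.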
{ "source": [ "https://softwareengineering.stackexchange.com/questions/123627", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/41817/" ] }
123,956
I am new to Java; through my studies, I read that reflection is used to invoke classes and methods, and to know which methods are implemented or not. When should I use reflection, and what is the difference between using reflection and instantiating objects and calling methods the traditional way?
Reflection is much slower than just calling methods by their name, because it has to inspect the metadata in the bytecode instead of just using precompiled addresses and constants. Reflection is also more powerful: you can retrieve the definition of a protected or final member, remove the protection and manipulate it as if it had been declared mutable! Obviously this subverts many of the guarantees the language normally makes for your programs and can be very, very dangerous. And this pretty much explains when to use it. Ordinarily, don't. If you want to call a method, just call it. If you want to mutate a member, just declare it mutable instead of going behind the compiler's back. One useful real-world use of reflection is when writing a framework that has to interoperate with user-defined classes, where the framework author doesn't know what the members (or even the classes) will be. Reflection allows them to deal with any class without knowing it in advance. For instance, I don't think it would be possible to write a complex aspect-oriented library without reflection. As another example, JUnit used to use a trivial bit of reflection: it enumerates all methods in your class, assumes that all those called testXXX are test methods, and executes only those. But this can now be done better with annotations, and in fact JUnit 4 has largely moved to using them instead.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/123956", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/42404/" ] }
123,961
We're teaming up with some non programmers (writers) who need to contribute to one of our projects. Now they just don't like the idea of using Git (or anything for that matter) for version controlling their work. I think this is because they just don't find it worthwhile to wrap their heads around the twisted concepts of version control. (when I first introduced them to branching and merging -- they looked like I was offending them.) Now, we're not in a position to educate them or convince them to use it. We're just trying to find alternatives so that we get all their work versioned (which is what we need) -- and they get easy workflow and concentrate on what they do. I have come up with some ideas... tell them to save their work as a separate file every time they make some non-trivial change, and then use a diff on our side to just track changes. write a program (in Python) that implements the "milestones" in CSSEdit in some way. About the project: It is a natural language processing system (written in C + Python). We've hired some writers to prepare inputs for the system in different languages. And as we evolve the software, we'd need those writers to make changes to their inputs (articles). Sometimes the changes are very small (a word or two), and other times big. The reason we need to version control those changes is because every small/big change in the input has the potential to change the system's output dramatically.
when I first introduced them to branching and merging -- they looked like I was offending them This is probably because branching and merging are advanced concepts, and infinitely less useful than to simply keep track of the changes. So why not explain just "commit" (save) and "update"? Two really simple concepts. I'm sure you can explain it in less than 10 minutes. If you really want to use separate branches and stuff like that, you can do that part yourself without involving them with it.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/123961", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/31560/" ] }
124,109
I am interested in learning design patterns and would like to know what are considered top tier books in learning this subject. Is there a book out there that's the de-facto standard for describing best practices, design methodologies, and other helpful information on design patterns? What about that book makes it special?
Design Patterns: Elements of Reusable Object-Oriented Software comes very close to my definition of a canonical book on design patterns. According to its wikipedia article (emphasis mine): The original publication date of the book was October 21, 1994 with a 1995 copyright, and as of July 2010, the book was in its 38th printing . The book was first made available to the public at OOPSLA meeting held in Portland, Oregon, in October 1994. It has been highly influential to the field of software engineering and is regarded as an important source for object-oriented design theory and practice. More than 500,000 copies have been sold in English and in 13 other languages . Ward Cunningham , a design patterns pioneer, maintains an online catalog of the book's patterns on WikiWikiWeb . And according to the Wikipedia article on design pattern (again, emphasis mine): Design patterns gained popularity in computer science after the book Design Patterns: Elements of Reusable Object-Oriented Software was published in 1994 by the so-called "Gang of Four" (Gamma et al.). There are quite a few other books referenced in the same article as notable in the genre: Pattern-Oriented Software Architecture Volume 1: A System of Patterns , by Douglas Schmidt, Michael Stal, Hans Rohnert, and Frank Buschmann, Patterns of Enterprise Application Architecture by Martin Fowler, Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions ., by Hohpe, Gregor and Bobby Woolf, and Head First Design Patterns , by Eric T. Freeman, Elisabeth Robson, Bert Bates, and Kathy Sierra. Of those I've read Fowler's book, it's highly influential and a good read. At certain points it's a little vague for my taste, but overall it's an enjoyable book. There's an online catalog of the patterns included in the book, with minimal descriptions. I've also skimmed through Head First Design Patterns, and if you have read any other book of the Head First series, it's of the same high quality and as enjoyable as most books in the series : Head First is a series of introductory instructional books to many topics, published by O'Reilly Media. It stresses an unorthodox, visually intensive, reader-involving combination of puzzles, jokes, nonstandard design and layout, and an engaging, conversational style to immerse the reader in a given topic. The term "design pattern" is somewhat vague, as every general reusable solution can be considered a design pattern. I've always noticed a tendency to apply the label on the solutions described in one of the notable books I've listed above, and more specifically the Gang of Four and Fowler books. Design patterns do not follow a unique development process, they are normal software solutions that happen to be immensely reusable and they are extremely hard to identify . But if you compare the online catalogs for both books with the contents of language specific books you'll notice that they are often used as templates. So I'd say that both books are very close to being canonical references, with the GoF book being the more important one from a historical perspective, even though both books are limited to object oriented programming .
{ "source": [ "https://softwareengineering.stackexchange.com/questions/124109", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/39465/" ] }
124,146
It's often said that the software industry is immature compared to manufacturing, specifically with regard to being process driven. Question: Can we as developers learn from the processes of the manufacturing industry? Can adopting their processes increase the success rate of software development? My take: In manufacturing the creation of a product is heavily process driven. You may have a factory where each person has a specific set of tasks they follow. A worker (or robot) may tighten a screw all day long. Then the next task in the process is performed by the next specialist. The workers (and robots) do not deviate from the process or make something up "on the fly". The parts churn through the process, and the output is a finished product. It works well and companies achieve 99.99966% defect-free products. Companies iron out inefficiencies over time. This is impressive and may very well be the sign of a mature industry. In manufacturing a defined process can literally create the finished product. I don't think this is the case in software. We may have processes for source control, code review, check-in sheets, requirements gathering, the SDLC, etc. But executing those processes does not in and of itself create a finished product. These processes may be beneficial, but they are orthogonal to the actual creation. Suppose your company is contracted to create software that will search millions of images to find the faces of criminals. Despite the heavily process-driven environment, the developer must engage in creating things "on the fly". Doing things on the fly is against the spirit of manufacturing. A good manufacturing process can be executed without thought by a robot. For the creation of complex algorithms which have yet to be fathomed in the mind of a human, it is a necessity to create things on the fly. Software development is not the following of a process, but the creation of a process to be executed by a computer. That is a fundamental difference. No matter how many orthogonal processes we put up around development, we will always resort to doing it "on the fly" when it comes to creation. Everyone I talk to seems to agree with the manufacturing mindset. Am I alone in my thoughts?
The fundamental difference between software development and manufacturing is that for software, the design phase is practically the entire thing. The actual production (assembly line part in traditional manufacturing) is a matter of copying a few files around. In software, the product design is the product. So yes, we can learn from manufacturing processes, but only if we keep in mind that we have to look at the design phase, not the production phase, and that many manufacturing processes are built to cope with the specific limitations of an expensive production chain, which simply doesn't apply to software. One example of a process model that works very well in software, but often fails horribly in product design, is iterative design - you start with a minimal prototype, and add features with each iteration. Building a prototype car to see what the new rear window knob design looks like is ridiculous, but in software, it makes sense (just hit F5 and wait a few seconds - voilà, ready to test drive). If we do look at product design processes, the best industries to look at fall into two categories: those where production can be realized at commodity rates; e.g. the record industry (1% or less of the price for a CD is baking and printing), graphical media, etc. those where quantities are so low that the design phase is the most prominent cost factor (luxury articles, highly customized products, niche markets...) It is a fundamental error to try and apply processes from physical manufacturing to software development: software development is not repetitive (if it is at your job, go find another job), it is only partially predictable, there are only very few physical limitations (hardware speed being one), and both the approach taken and the results can be highly personal. Applying assembly-line philosophies to what is basically a matter of analytic and creative thinking can (and often does) have devastating effects.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/124146", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/40538/" ] }
124,193
Continuous Delivery sounds good, but my years of software development experience suggest that in practice it can't work. (Edit: To make it clear, I always have lots of tests running automatically. My question is about how to get the confidence to deliver on each checkin, which I understand is the full form of CD. The alternative is not year-long cycles. It is iterations every week (which some might consider still CD if done correctly), two weeks, or month; including an old-fashioned QA at the end of each one, supplementing the automated tests.) Full test coverage is impossible. You have to put in lots of time -- and time is money -- for every little thing. This is valuable, but the time could be spent contributing to quality in other ways. Some things are hard to test automatically. E.g. GUI. Even Selenium won't tell you if your GUI is wonky. Database access is hard to test without bulky fixtures, and even that won't cover weird corner cases in your data storage. Likewise security and many other things. Only business-layer code is effectively unit-testable. Even in the business layer, most code out there is not simple functions whose arguments and return values can be easily isolated for test purposes. You can spend lots of time building mock objects, which might not correspond to the real implementations. Integration/functional tests supplement unit tests, but these take a lot of time to run because they usually involve reinitializing the entire system on each test. (If you don't reinitialize, the test environment is inconsistent.) Refactoring or any other changes break lots of tests. You spend lots of time fixing them. If it's a matter of validating meaningful spec changes, that's fine, but often tests break because of meaningless low-level implementation details, not stuff that really provides important information. Often the tweaking is focused on reworking the internals of the test, not on truly checking the functionality that is being tested. Field reports on bugs cannot easily be matched with the precise micro-version of the code.
my years of software development experience suggest that in practice it can't work. Have you tried it? Dave and I wrote the book based on many collective years of experience, both of ourselves and of other senior people in ThoughtWorks, actually doing the things we discuss. Nothing in the book is speculative. Everything we discuss has been tried and tested even on large, distributed projects. But we don't suggest you take it on faith. Of course you should try it yourself, and please write up what you find works and what doesn't, including the relevant context, so that others can learn from your experiences. Continuous Delivery has a big focus on automated testing. We spend about 1/3 of the book talking about it. We do this because the alternative - manual testing - is expensive and error-prone, and actually not a great way to build high quality software (as Deming said, "Cease dependence on mass inspection to achieve quality. Improve the process and build quality into the product in the first place") Full test coverage is impossible. You have to put in lots of time -- and time is money -- for every little thing. This is valuable, but the time could be spent contributing to quality in other ways. Of course full test coverage is impossible, but what's the alternative: zero test coverage? There is a trade-off. Somewhere in between is the correct answer for your project. We find that in general you should expect to spend about 50% of your time creating or maintaining automated tests. That might sound expensive until you consider the cost of comprehensive manual testing, and of fixing the bugs that get out to users. Some things are hard to test automatically. E.g. GUI. Even Selenium won't tell you if your GUI is wonky. Of course. Check out Brian Marick's test quadrant. You still need to perform exploratory testing and usability testing manually. But that's what you should be using your expensive and valuable human beings for - not regression testing. The key is that you need to put a deployment pipeline in place so that you only bother running expensive manual validations against builds that have passed a comprehensive suite of automated tests. Thus you both reduce the amount of money you spend on manual testing, and the number of bugs that ever make it to manual test or production (by which time they are very expensive to fix). Automated testing done right is much cheaper over the lifecycle of the product, but of course it's a capital expenditure that amortizes itself over time. Database access is hard to test without bulky fixtures, and even that won't cover weird corner cases in your data storage. Likewise security and many other things. Only business-layer code is effectively unit-testable. Database access is tested implicitly by your end-to-end scenario based functional acceptance tests. Security will require a combination of automated and manual testing - automated penetration testing and static analysis to find (e.g.) buffer overruns. Even in the business layer, most code out there is not simple functions whose arguments and return values can be easily isolated for test purposes. You can spend lots of time building mock objects, which might not correspond to the real implementations. Of course automated tests are expensive if you build your software and your tests badly. I highly recommend checking out the book "growing object-oriented software, guided by tests" to understand how to do it right so that your tests and code are maintainable over time. 
Integration/functional tests supplement unit tests, but these take a lot of time to run because they usually involve reinitializing the entire system on each test. (If you don't reinitialize, the test environment is inconsistent.) One of the products I used to work on has a suite of 3,500 end-to-end acceptance tests that takes 18h to run. We run it in parallel on a grid of 70 boxes and get feedback in 45m. Still longer than ideal really, which is why we run it as the second stage in the pipeline after the unit tests have run in a few minutes so we don't waste our resources on a build that we don't have some basic level of confidence in. Refactoring or any other changes break lots of tests. You spend lots of time fixing them. If it's a matter of validating meaningful spec changes, that's fine, but often tests break because of meaningless low-level implementation details, not stuff that really provides important information. Often the tweaking is focused on reworking the internals of the test, not on truly checking the functionality that is being tested. If your code and tests are well encapsulated and loosely coupled, refactoring will not break lots of tests. We describe in our book how to do the same thing for functional tests too. If your acceptance tests break, that's a sign that you're missing one or more unit tests, so part of CD involves constantly improving your test coverage to try and find bugs earlier in the delivery process where the tests are more fine-grained and the bugs are cheaper to fix. Field reports on bugs cannot easily be matched with the precise micro-version of the code. If you're testing and releasing more frequently (part of the point of CD) then it is relatively straightforward to identify the change that caused the bug. The whole point of CD is to optimize the feedback cycle so you can identify bugs as soon as possible after they are checked in to version control - and indeed, preferably before they're checked in (which is why we run the build and unit tests before check-in).
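To make the "cheap tests first, expensive tests only on builds that already passed" idea concrete, here is a minimal sketch of that gating logic in Python. It is not how any particular CI server or the book implements a deployment pipeline - the stage names and pytest commands are placeholders invented for illustration - but it shows the ordering the answer describes: fast feedback first, and a failing stage stops everything downstream.

    import subprocess
    import sys

    # Ordered stages: fast, cheap feedback first; slow, expensive checks later.
    # The commands are placeholders - substitute your real test runners.
    STAGES = [
        ("unit tests",       ["python", "-m", "pytest", "tests/unit", "-q"]),
        ("acceptance tests", ["python", "-m", "pytest", "tests/acceptance", "-q"]),
    ]

    def run_pipeline():
        for name, command in STAGES:
            print(f"running stage: {name}")
            result = subprocess.run(command)
            if result.returncode != 0:
                # A failing stage stops the pipeline, so the expensive later
                # stages (and the manual testers) never see a broken build.
                print(f"stage '{name}' failed - stopping the pipeline")
                sys.exit(result.returncode)
        print("all automated stages passed - candidate build for exploratory/usability testing")

    if __name__ == "__main__":
        run_pipeline()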
{ "source": [ "https://softwareengineering.stackexchange.com/questions/124193", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/14493/" ] }
124,233
I was trying to solve a hobby problem that required generating a million random numbers. But I quickly realized that it was becoming difficult to make them unique. I picked up The Algorithm Design Manual to read about random number generation. It has the following paragraph that I am not fully able to understand. Unfortunately, generating random numbers looks a lot easier than it really is. Indeed, it is fundamentally impossible to produce truly random numbers on any deterministic device. Von Neumann [Neu63] said it best: “Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin.” The best we can hope for are pseudo-random numbers, a stream of numbers that appear as if they were generated randomly. Why is it impossible to produce truly random numbers in any deterministic device? What does this sentence mean?
One should look for a cryptographically secure pseudo-random number generator. Most PRNGs are linear congruential generators (so the next number is a linear function of the previous number), so if you plot the next number vs the previous number you'll get a chart of parallel lines. A CSPRNG will not do that. The trade-off is that they're slow. I group random number generators into 3 categories: Good enough for homework. Good enough to bet your company on. Good enough to bet your country on. Why is it impossible to produce truly random numbers in any deterministic device? A deterministic device will always produce the same output when given the same starting conditions and inputs - that is what it means to be deterministic. "Truly random number" is more of a philosophical viewpoint, since what it means to be random is the crux of the philosophical navel gazing (folks aren't even certain if atomic decay is random or follows some pattern we just can't figure out yet). A cryptographically secure random number generator is going to take some external source of entropy to make the device non-deterministic.
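To make the "deterministic device" point concrete, here is a small Python sketch of a linear congruential generator (the constants are the classic Numerical Recipes ones, chosen only for illustration). Seeding it twice with the same value reproduces exactly the same "random" sequence - which is precisely why the output is only pseudo-random.

    class LCG:
        """Minimal linear congruential generator: next = (a*prev + c) mod m."""
        def __init__(self, seed, a=1664525, c=1013904223, m=2**32):
            self.state = seed
            self.a, self.c, self.m = a, c, m

        def next(self):
            self.state = (self.a * self.state + self.c) % self.m
            return self.state

    g1 = LCG(seed=42)
    g2 = LCG(seed=42)
    # Same seed, same deterministic rule => identical "random" sequences.
    assert [g1.next() for _ in range(5)] == [g2.next() for _ in range(5)]

    # For anything security-related, take entropy from the operating system instead:
    import secrets
    token = secrets.randbits(32)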
{ "source": [ "https://softwareengineering.stackexchange.com/questions/124233", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/17887/" ] }
124,254
During a discussion, one of my colleagues told me that he is having some difficulties with his current project while trying to solve bugs. "When I solve one bug, something else stops working elsewhere", he said. I started to think about how this could happen, but can't figure it out. I sometimes have similar problems when I am too tired/sleepy to do the work correctly and to have an overall view of the part of the code I am working on. Here, the problem seems to have lasted for a few days or weeks, and is not related to my colleague's focus. I can also imagine this problem arising on a very large, very badly managed project, where teammates don't have any idea of who does what, and what effect a change they are making can have on others' work. This is not the case here either: it's a rather small project with only one developer. It can also be an issue with an old, badly maintained and never documented codebase, where the only developers who could really imagine the consequences of a change left the company years ago. Here, the project has just started, and the developer doesn't use anyone else's codebase. So what can be the cause of such an issue in a fresh, small codebase written by a single developer who stays focused on his work? What may help? Unit tests (there are none)? Proper architecture (I'm pretty sure that the codebase has no architecture at all and was written with no preliminary thinking), requiring a complete refactoring? Pair programming? Something else?
It doesn't have much to do with focus, project size, documentation, or other process issues. Problems like that are usually a result of excessive coupling in the design, which makes it very difficult to isolate changes.
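A tiny illustration of the kind of coupling meant here, sketched in Python (the report and database names are invented for the example). In the first version the report reaches directly into a concrete database class, so touching either side can break the other; in the second, the dependency is passed in behind a narrow interface, so each piece can change or be tested in isolation.

    # Tightly coupled: the report constructs and uses a concrete class directly.
    class SqlDatabase:
        def query(self, sql):
            return [("alice", 3), ("bob", 5)]   # stand-in for real data access

    class ReportA:
        def render(self):
            rows = SqlDatabase().query("SELECT name, total FROM sales")
            return "\n".join(f"{name}: {total}" for name, total in rows)

    # Decoupled: the data source is injected; the report relies only on fetch_totals().
    class SalesSource:
        def __init__(self, db):
            self._db = db
        def fetch_totals(self):
            return self._db.query("SELECT name, total FROM sales")

    class ReportB:
        def __init__(self, source):
            self._source = source
        def render(self):
            return "\n".join(f"{name}: {total}" for name, total in self._source.fetch_totals())

    print(ReportB(SalesSource(SqlDatabase())).render())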
{ "source": [ "https://softwareengineering.stackexchange.com/questions/124254", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/6605/" ] }
124,341
I recently found out that Facebook had a programming challenge that, if completed correctly, gets you an automatic phone interview. There is a sample challenge that asks you to write an algorithm that can solve a Tower of Hanoi type problem. Given a number of pegs and discs, and an initial and final configuration, your algorithm must determine the fewest steps possible to get to the final configuration, and output the steps. This sample challenge gives you a 45 minute time limit, but allows you to still test your code to see if it passes once your time limit expires. I did not know of any cute math solution that could solve it, and I didn't want to look for one since I think that would be cheating. So I tried to solve the challenge the best I could on my own. I was able to make an algorithm that worked and passed. However, it took me over 4 hours to make, much longer than the 45 minute requirement. Since it took me so much longer than the allotted time, I have not attempted the actual challenge. This got me wondering, though: in reality, does it really matter that it took me that long? I mean, is this a sign that I will not be able to get a job at a place like this (not just Facebook, but Google, Fog Creek, etc.) and need to lower my aspirations, or should the fact that I actually passed on my first attempt, even though it took too long, be taken as a good sign?
In practice it does matter how long it takes you. One that can solve the problem in 45 minutes is - all else equal - five times more productive than one that takes 4 hours, and hence more attractive to an employer. That said, you do not say why you took four hours to solve this problem. Were you at your best (well-fed, not tired, fully concentrated)? Was the problem well specified, or did you need to do additional research on your own? Did you have to learn new things to do this? Were the tools familiar or not? etc. Any and all of these things might influence the time it takes you, and it is actually more important to be able to solve a problem when under pressure, without being told everything, and with the tools at hand, since that WILL happen during your career and it is usually at a point where it is very important to somebody whether you succeed or not.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/124341", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/1785/" ] }
124,500
I've just learnt how lazy evaluation works, and I was wondering: why isn't lazy evaluation applied in every piece of software currently produced? Why is eager evaluation still used?
Lazy evaluation requires book-keeping overhead - you have to know if it's been evaluated yet and such things. With eager evaluation the value is always computed, so you don't have to track anything. This is especially true in concurrent contexts. Secondly, it's trivial to convert eager evaluation into lazy evaluation by packaging it into a function object to be called later, if you so wish. Thirdly, lazy evaluation implies a loss of control. What if I lazily evaluated reading a file from a disk? Or getting the time? That's not acceptable. Eager evaluation can be more efficient and more controllable, and is trivially converted to lazy evaluation. Why would you want lazy evaluation?
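The "package it into a function object" point can be shown in a few lines of Python. This is only an illustrative sketch of a memoizing thunk, not a feature of any particular library: the expensive computation is delayed until the value is first requested, and cached afterwards - which is also exactly the book-keeping overhead mentioned above.

    class Lazy:
        """Wrap a zero-argument function; evaluate it at most once, on first use."""
        _UNSET = object()

        def __init__(self, compute):
            self._compute = compute
            self._value = Lazy._UNSET          # the book-keeping: "evaluated yet?"

        def value(self):
            if self._value is Lazy._UNSET:
                self._value = self._compute()  # evaluated only when actually needed
            return self._value

    def expensive():
        print("computing...")
        return sum(range(10_000_000))

    eager = expensive()                # eager: computed right now, unconditionally
    lazy = Lazy(expensive)             # lazy: nothing happens yet
    print(lazy.value())                # first access triggers the computation
    print(lazy.value())                # second access reuses the cached result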
{ "source": [ "https://softwareengineering.stackexchange.com/questions/124500", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/42742/" ] }
124,601
Our (mid-size) company (with 150 employees) has witnessed considerable growth since last year, and as a result we have been conducting lots of interviews to hire more and more engineers to set new projects in motion. To begin with, the following is our interview process: resume screening by HR; a telephone technical interview (conducted by a senior engineer); and a final technical + HR interview (conducted by a team lead/PM). After some analysis we found that the telephone interviews are not really effective, and as a result we are not able to send a good number of engineers to the final interview. My analysis revealed that engineers don't consider interviewing an important/serious activity, since it's internal and non-billable. I also see that engineers don't find it really prestigious to carry the label of "interviewer" (at least "telephone interviewer"). We do have a set of good, experienced engineers in the company who can conduct any interview very effectively, but since the number of telephone interviews is large, this pool is not sufficient. Hence we try to use other engineers as well. What can be done to inspire engineers to conduct telephone interviews seriously and positively (not treating this as a mundane job)? [Update-2011/12/28] Thanks for sharing your valuable opinions. It really helped me to look at the problem from a variety of different angles. Alongside this, I have been discussing the issue with a few colleagues as well, and I personally feel that there is no single silver bullet for such an issue. I will need to think of the recruitment process as a whole and plan to improve the different areas/stages in it.
<Angry_rant_against_the_Billable_Hours_mind_set> Truth be told, your company doesn’t deserve good developers. I’ve worked for many consulting firms. When management continually stresses the importance of billable hours at every meeting and on every year end review. When the top guy at the firm gets up in front of the entire company and says “THE ONLY THING THAT MATTERS IS BILLABLE HOURS”. What do you expect from your developers? Bringing in good competition for billable hours will most likely reduce the billable hours of the consultant conducting the interview (by crowding the pool of available hours). Guess what else, and this is the part I hate most about this mind set: it rewards inefficiency and punishes efficiency. I have been told by a manager, “Don’t you dare tell anyone you finished the project with only half the allotted hours; how can we bill for time not worked?”. (So yes, this was a time and material contract, where we basically stole from the client.) I say this with full confidence that the following is true: every single company that works with time and material contracts is a thief. They will never give up more than a few hours of the allocated budget, yet will kick and scream if they go over. (The only reason they give up a few hours is so they can go back to the client proclaiming “Great news! We are under budget!”) It’s thievery. I say it again: your company doesn’t deserve good developers. </Angry_rant_against_the_Billable_Hours_mind_set> To answer the question (per the comment below): he simply needs to find someone who cares more about the well-being of the firm than about his billable hours bonus to conduct the interview. Edit 12/17/11 - Saw this Dilbert and decided to add it here:
{ "source": [ "https://softwareengineering.stackexchange.com/questions/124601", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/42846/" ] }
124,627
I'm designing a small library and there is a strong temptation to provide synonyms for some of the exported functions.

    var foldl = function(){ ... };
    var reduce = foldl; // reduce is a synonym for foldl

I imagine that the multiple names might help the initial learnability of the API (since the user is more likely to stumble on the function he wants), but at the same time I worry the duplication will bring needless complexity. What rules should I keep in mind when thinking about adding a synonym for a function in my API? When should I choose a "there is more than one way to do it" API, and when should I incentivize "there is one way to do it" instead?
Don't provide multiple ways to do the same thing - that will just confuse the API users. Having several names for the same thing means you don't have a good name for it.
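One way to apply that advice while still easing migration, sketched in Python since the idea is language-neutral (the function and module names are invented for the example): document and export a single canonical name, and if an old synonym must survive for compatibility, make it an explicitly deprecated alias rather than a silent equal.

    import warnings

    def reduce_left(func, iterable, initial):
        """Canonical name: fold the iterable from the left."""
        acc = initial
        for item in iterable:
            acc = func(acc, item)
        return acc

    def foldl(func, iterable, initial):
        """Deprecated synonym kept only for backwards compatibility."""
        warnings.warn("foldl is deprecated; use reduce_left", DeprecationWarning, stacklevel=2)
        return reduce_left(func, iterable, initial)

    print(reduce_left(lambda a, b: a + b, [1, 2, 3, 4], 0))   # 10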
{ "source": [ "https://softwareengineering.stackexchange.com/questions/124627", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/25404/" ] }
124,630
After reading a question about books every programmer should read , I wonder if the following should be considered obsolete: Code complete : 1st edition in 1993; 2nd edition in May, 2004 Introduction to algorithms (Cormen, Leiserson, Rivest, Stein) : 1st edition in 1990; 2nd edition in September, 2001; 3rd edition in 2009 The Pragmatic Programmer : October, 1999 Structure and Interpretation of Computer Programs : 1st edition in 1984; 2nd edition in September, 1996 The C Programming Language : 1st edition in 1978; 2nd edition in April, 1988 All these books seem very old. Isn't there a difference between modern computing and what was current when those book were written? For example, my 61-year-old teacher explains things very nicely but forgets to take into account all that has been done between when he began to teach 25 years ago and now. Isn't the same true for those books? Aren't there any more modern books that teach principles and technologies that are closer to current practice? Or do you consider them to be useful and relevant even today?
These book are about the principles of development. These principles are, by nature, language-agnostic, and for some even paradigm-agnostic (OOP, functional programming, imperative languages). They explain the theory and good ways of the development because, in the end, software is always about getting data, processing it, then outputting it back. Facebook, Twitter, 3D, batch process of accountancy, railway traffic management, launching rockets, etc. Books that are about a language, like "How to learn XXXXX in YY days" , where XXXXX is a language and YY is a number that will eventually (and sometimes actually very quickly) become obsolete, because, by nature, they are about things that either evolve, or get replaced and become outdated. Code Complete , by the awesome Steve McConnell, is perhaps the book that made me realize this. And the Pragmatic Programmer totally changed the vision I had of software development. By reading such books, you realize that 95% of the problem you're facing everyday has already been solved, and that 95% of us are still reinventing the wheel. The so-called "Cloud" isn't the future of software development, it's a way to use developed software. Don't fall into the trap of hype/bullshit buzzwords, focus on how you can improve your software craftsman skills. Focus on learning from what other brilliant spirits have invented and learned before us, because it is the only way to become an accomplished developer.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/124630", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/5391/" ] }
124,687
Ever since I started programming I've seen a header at the top of most code files indicating some sort of copyright: e.g. /* Copyright (c) 1998 Innotech */ or /* Copyright (c) 1998-2008 Innotech */ Conceptually I get the idea... depending on your wants/needs it roughly translates somewhere between: Hey check it out! I made this! I'm awesome! to Don't copy/redistribute this! Our lawyers will come after you if you do! On the one hand I find the whole thing somewhat humorous as often this is a file that no one outside of in-house developers will ever see, but presuming this line may actually one day "mean something" I'm curious about the date part. Does a single date imply that the author claims copyright of the file from that date until eternity? If a date range is used, and not kept up to date has the developer invalidated his/her own copyright beyond the date range? Without some additional legal filing of some kind - does the copyright header provide any real legal strength of any kind? or are we all just fooling ourselves.
These answers were extracted from the book Patents, Copyright and Trademark, highly recommended. If you plan to buy one, notice that there's a newer edition than the one I have. Does a single date imply that the author claims copyright of the file from that date until eternity? "The copyright lasts for the life of the author plus 70 years. However, under the following circumstances, the copyright lasts between 95 and 120 years, depending on the date the work is published:" ... "The work belongs to the author's employer under work made for hire principles." ... (page 192) If a date range is used, and not kept up to date, has the developer invalidated his/her own copyright beyond the date range? No; the copyright will simply last from the last recorded date onwards. Without some additional legal filing of some kind - does the copyright header provide any real legal strength of any kind? Or are we all just fooling ourselves? "Contrary to popular belief, providing a copyright notice or registering the work with the USCO is not necessary to obtain basic copyright protections. But there are some steps that can be taken to enhance the creator's ability to sue or stop others from copying:" ... "Place a copyright notice on a published work. (...) Placing this notice on a published work (...) prevents others from claiming that they did not know that the work was covered by copyright. This can be important if the author is forced to file a lawsuit to enforce the copyright, since it is much easier to recover significant money damages from a deliberate (as opposed to innocent) copyright infringer." ... (Pages 190-191) For clarification on the issue of registration, mentioned in the comments, the book states, under the same bullet list as the previous excerpt: ... Register works with the USCO. Timely registration (...) makes it much easier to sue and recover from an infringer. Registration creates a legal presumption that the copyright is valid and, if accomplished prior to someone copying the work, allows the copyright owner to recover up to $150,000 (and possibly attorney fees) without proving any actual money harm. (...)
{ "source": [ "https://softwareengineering.stackexchange.com/questions/124687", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/3199/" ] }
124,783
I've read a number of questions here on P.SE in which beginner programmers are advised to pick Python as their first programming language. Don't get me wrong, I like Python. I like it a lot! But its philosophy revolves around "We are all consenting adults here". From an experience/knowledge point of view, a beginner programmer is not an adult. Which then kinda means it is easier to shoot yourself in the foot and pick up some bad habits even if you survive the wound. I'm thinking that in a "more static" language it would be harder to shoot yourself in the foot, as it will be more restrictive. Back to my question. Why is Python recommended as an entry-level programming language? What are the points that make it good for teaching programming? Or... is it just the personal preference of the adviser?
IMO, the most prominent points that speak for Python as an entry-level language are these: it has a shallow learning curve - going from nothing to "Hello world" is much faster than in most other languages it is intuitive - the syntax was designed to follow the principle of least surprise, and it is very consistent overall (unfortunately, the standard libraries don't always follow this consistency) it requires very little boilerplate : a typical "Hello world" is one line of code, and simple programs can be written without any additional background noise that needs to be explained (such as function declaration keywords, import statements, class constructs, preprocessor directives, etc.) there are excellent, straightforward tools to work with python code, especially the interactive interpreter; you don't need to learn a build system, IDE, special text editor, or anything else to start using python - a command prompt, the interactive editor, and a simple text editor, are all you need it uses dynamic typing , but unlike many other dynamically-typed languages, types are transparent, and type-related pitfalls are rare
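To illustrate the "very little boilerplate" and "interactive interpreter" points concretely: a complete beginner program is just the lines below, with no imports, class wrapper or main declaration required (the grades are arbitrary example data), and the same lines can be typed one at a time into the interactive interpreter for immediate feedback.

    # A complete program: no class, no main(), no type declarations needed.
    grades = [72, 85, 90, 66]
    average = sum(grades) / len(grades)
    print("average grade:", average)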
{ "source": [ "https://softwareengineering.stackexchange.com/questions/124783", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/38417/" ] }
124,835
If you ask programmers why they should write clean code, the number one answer you get is maintainability. While that's on my list, my main reason is more immediate and less altruistic: I can't tell if my new code is correct if it's too dirty. I find that I have focused on individual functions and lines of code so much that when I finish my first draft and step back to look at the big picture again, it sometimes doesn't fit together very neatly. Spending an hour or two refactoring for cleanliness frequently uncovers copy/paste errors or boundary conditions that were very difficult to detect in the rough draft. However, some people feel it's occasionally okay to intentionally check in dirty code in the interest of shipping software, with a plan to "clean it up later." Is there some practicable technique that gives them confidence in the correctness of their code when the readability is less than ideal? Is it a skill worth trying to develop? Or is a lack of confidence in the code something some people just find easier to accept?
They don't. I'm currently working on a code base created by "quick and dirty" programmers who would "clean it up later." They're long gone, and the code lives on, creaking its way toward oblivion. Cowboy coders , in general, simply don't understand all of the potential failure modes that their software may have, and don't understand the risks to which they are exposing the company (and clients).
{ "source": [ "https://softwareengineering.stackexchange.com/questions/124835", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/3965/" ] }
124,860
Some code is written to generate Excel spreadsheets (Office Interop). The code performs very poorly. A subsystem is designed to generate the files at night. Performance isn't a concern at night. A function is created to pick the correct file from the 100 different files available depending on a chosen set of parameters. Because physical files exist, an archival system is added to back up these files (there is no reason to archive; these files should be generated on the fly). This system doesn't include a configuration file; instead it has a hard-coded "server picker" function that simply reflects upon the server the code is running on. A scheduled task is necessary to support and run this service. This boils down to a single problem. The original code performs far too poorly to run in a production environment. Had the performance problem been resolved, the subsystem and subsequently the archiving system, the "file picker factory function", the hard-coded failure point, and the maintenance of the scheduled task and its added point of failure would have no need to exist. This is a "cascading failure", if you will. The original problem led to more bad code, more bad solutions and unnecessary overhead. Is there a formal anti-pattern or general term to describe it?
Lava flow? In computer programming jargon, lava flow is a problem in which computer code written under sub-optimal conditions is put into production and added to while still in a developmental state. From the Perl Design Wiki: Lava Flow is "when code ... spews forth and becomes permanent, it becomes an architectural feature of the archaeological variety. Things are built atop the structure without question and without hope of changing what is beneath them. The existing code is seen as an historical curiosity." Often, putting the system into production results in a need to maintain backward compatibility (as many additional components now depend on it) with the original, incomplete design. Lava flows are often exacerbated by changes in the development team working on a project. As workers cycle in and out of the project, knowledge of the purpose of aspects of the system can be lost, and rather than clean up these pieces, they are worked around, increasing the complexity and mess of the system. Lava flow is considered an anti-pattern, a commonly encountered phenomenon leading to poor design.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/124860", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/11107/" ] }
124,890
What I mean by that is, how do you go about developing on a code base you share with developers who have been working on it for years and are very familiar with it? I don't want to step on anyones toes, but I get not so subtle complaints about the way I do things, whether it be how I whitespace my code, or how frequently I checkin to SVN (too often). So while I can change those things easily -- I want to be a better team developer in general. I'm not sure what to do, other than ask, but maybe you guys have some thoughts I could put to practice. UPDATE There isn't any style guide to speak of -- it's just people aren't used to sharing the codebase. Everyone has their own little siloed code-world. This is a perl shop, but I'm sure these apply to any language UPDATE 2 The CTO who later became CEO was a complete megalomaniac and was the primary source of these complaints. If you didn't do things exactly how he liked, whether it was using a Mac, or Emacs, or 4 tab spaces instead of 2, or dress a certain way, you were inferior. It was a horrific situation that I tried to correct, but the only correct answer for me was leaving. I am convinced that this was an instance of bullying in a workplace, and subsequently, I'm more aware of what might be subtle bullying and inappropriate behavior in a work environment. To any developer looking for answers to a situation like this, leave immediately. You can't teamwork your way out of a bad team situation.
Ask. That is, ask the folks you work with. Do your best to stick to the established style of the existing code. Ask especially if there's a documented list of coding standards, and follow it. If there isn't one, write up a first draft based on what you observe in the code and then ask the other team members to critique it. You'll do the company (and new developers who come after you) a service by starting to document the accepted coding practices. The only risk is possibly getting caught in the middle if it turns out that the old-timers don't agree with each other on what is or isn't acceptable. Also, don't be afraid to be yourself. You might be the new guy, but you're a member of the team, and your opinions are valid. If you can think of better ways to do something, suggest it. Respect the other team members and the established way of doing things, but don't let them push you around. The company wouldn't have hired you if it didn't value your input. It'll help a lot if you can find someone on the team that seems friendly and particularly willing to answer questions. (If it's a good team, that should be everyone, but teams aren't always like that.) Your boss may have assigned someone to help get you started. Use that person as a resource. Write down questions as they occur to you, and then ask that helpful person to answer them from time to time. As for checking in code "too often," why not create your own branch for your periodic commits, and then merge back to trunk when your code is ready? There's no harm to anyone else in doing that, and when your coworkers see you getting benefits from SVN that they'd like, they might follow your lead.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/124890", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/34295/" ] }
124,911
I'm a recent college graduate (last May!). While I was still in school, I wanted to make sure that I had a job before I graduated, and very early (probably too early) in my job search I settled on one in a region I'd been hoping to move to after undergrad. However, I've been second guessing this decision for months now, for several reasons. One is that I'm not very challenged at work, and I feel like I haven't improved much at programming since starting here. I can always make time to work on open source (and have in the past) outside of my job, though, so I do have a venue to get around this disappointment. More importantly, I'm worried by the fact that my job is basically to work on a creaky old Perl web application (using Mason and a weird in-house ORM). Am I shooting myself in the foot here by working with a technology that's no longer popular, and won't really help me out getting a job in the future? I rarely see Perl jobs, and when I do, it's usually doing something I'm not interested in (front-end web development stuff). Systems programming, visualisation, network programming, or at least backend web development stuff are the sort of topics that I'd actually enjoy working in -- it doesn't seem like my current work experience is helping me towards positions doing any of these things.
First of all, stop thinking that your job is not bringing you further towards your dream job! Every job does! Everything is only up to you! This is your first job after your graduation and everyone can understand that you didn't have a good choice, or might have considered some other factors, like moving to the place where you'd like to stay. This is a valid "excuse"; you can mention it if directly asked during a job interview. Every programming job contributes to your experience as a developer. There are many language-agnostic things every developer has to learn on his own (implementing loosely coupled architectures, debugging and profiling the code, writing unit tests, etc.) that can be learnt in ANY language, and Perl is not the worst one to use. I used to practice all three in VBA developing for MS Access, and that was much fun. This attitude is not productive, after all: as long as you consider your current job boring and useless, you won't learn much from it. In many cases your future employer would be interested in your "learning curve" -- how quick you are at mastering new technologies, not in how boring your last job was! So, the ways that help you out are the following: Try to master the language you are currently working with as deeply as you can. Watch the perl tag on the SO site and try to answer the questions people ask there. Read papers on many developer resources, etc. Try to become a guru in this area! Perl is a multi-paradigm language, now supporting OOP and many other paradigms. Try to split those from the language and look at them individually. What type of inheritance does Perl have? What different types of access modifiers are available here for classes and class members, etc.? Is it strongly typed or not? Many languages function in a similar way; as long as you know how something works in general, you'll easily capture the differences in other languages. Acquire a deep understanding of your current system: why is it implemented in Perl? How are different aspects, like performance, security and reliability, solved here? What are the unsolved problems, caveats, potential breaches? How would you cope with them? Maybe some reasonable refactoring of the current code base is needed? And don't stay in this job for long if you are dissatisfied with it -- just long enough to learn basic skills, to show your willingness to learn and your will-power to overcome the dullness of your tasks! When you apply for your second job in the more or less near future you can still be treated as a junior developer! You should try to emphasize what you have learned on your first job, how you coped with the problems of maintaining legacy code and a brownfield system, how you managed to extend your horizons and what new cool features you have learned there. Never, never tell during the job interview that you are bored with your current job and that this is the reason why you're looking for something else. "Boring" is so subjective and often means that you are just not good enough to stand the challenge of learning the things in your current position and applying them accordingly. Show your willingness to learn, to expand your knowledge, and you'll get your dream job, I am sure.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/124911", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/42919/" ] }
124,996
So when I was at university I was educated on the benefits of UML and its future in code development. But from my industry experience, I've found that while we do use diagrams—ranging from ER diagrams, class diagrams and state diagrams to workflow diagrams—all of it is for communication purposes. That is, I've never generated code automatically from diagrams, and from a communication standpoint, I generally try to keep my diagrams as simple and easy to understand as possible. But when I look at Visio and Enterprise Architect, it seems they have many different types of graphs, shapes and property objects, most of which I don't use. Do people use UML to do more sophisticated things such as code or database generation?
Yes, UML CASE tools were one of the hot items of the 90s... and then failed to deliver. The fundamental reason for this is that UML (or most any other kind of) diagrams help to understand the problem and/or the program solving it only insofar as the diagram abstracts away the implementation details of the actual code. Thus (for any nontrivial piece of code), a diagram which is easy to understand is inherently useless for code generation, because it lacks the necessary details. And vice versa, a diagram which is directly usable for code generation doesn't help you much to understand the program any better than the code itself. If you have ever seen a UML class diagram reverse engineered from production code, you probably know what I mean. The only potential exception to this I know of is Entity-Relationship diagrams, which don't encompass code per se, only (as their name implies) pieces of data and their relationships. But I have never heard of a successful attempt to use any kind of UML diagram for real code generation [Update] - i.e. more than class skeletons and trivial code like getters/setters - except in special-purpose tools/areas like ORM, as testified by Doc Brown below [/Update], and I think this is no accident. I personally do not hate UML - I think that UML diagrams can be a great tool of communication - to show your intent and ideas during design discussions, or to visualize the architecture of your app. But it's best to keep them to this, and not try to use them for things they aren't good at.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/124996", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/3214/" ] }
125,143
I told the company I work for that I want to phase myself out, but that I would stick around for a couple months before applying anywhere to help in the recruitment of my replacement. I offered this because I am the sole web developer and I didn't want to leave them helpless. The problem is, they want to hire someone very under qualified to avoid having to pay a high salary. As far as I'm concerned, it's their company and they can run it how they want. However, when it comes to me helping find and train someone before I go, I'm in a position where I don't know what to do. To give a little perspective, I built them a medium sized e-commerce system using an MVC framework; there's more to it, but I'll leave it at that. The candidates they are finding me to review are people who have never worked as a programmer, have made a couple really crappy static websites using a WYSIWYG program and are calling themselves web designers. I know these people have no chance at success. I have tried to explain it to the company, but they don't want to hear it; they think one of these people can be trained and be up and running at my level in about a year. The reality is, I don't think their site will last a year if they go this route. I think maybe they think I am just trying to make myself look good and the new candidates look bad for some reason, which is not the case at all. I would like to leave what I have worked hard on in capable hands. So what is the ethical and professional thing to do here? Just keep telling them that these candidates are no good until they actually find a decent one, up until its time for me to leave, at which point I leave them with no one? Or just accept that they are going to destroy themselves and do the best I can to pick the best out of the candidates and teach him/her what I can before I go? I really just want to do the right thing here, so I can leave on good terms. And if a year down the road they fail, I can have a clean conscience.
Here's what you do: Let them know that the candidates they are sending you are all unqualified Give them your minimum qualifications Reject anyone who does not meet those qualifications. If they refuse to give you resumes of anyone who meets your qualifications, then you have done your part. Regarding ethics , you don't have a responsibility to replace yourself--that's the hiring manager's job. If you want to go beyond ethics into kindness, then offering to help out is great, but stick to your guns on what the job really requires. Finally, set a deadline for leaving the company , don't languish in this job. They could very well be sending you unqualified candidates so that you will stick around (although that is unlikely). But once you set that deadline, let them know so that the expectations are understood.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/125143", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/1785/" ] }
125,320
I am working on a fairly big project and was given the task of doing some translations for it. There were tons of labels that hadn't been translated, and while I was digging through the code I found this little piece of code:

    //TODO translations

This made me think about the point of these comments to yourself (and others?), because I get the feeling that most developers, after they get a certain piece of code done and it does what it's supposed to do, never look at it again until they have to maintain it or add new functionality. So this TODO will be lost for a long time. Does it make sense to write these comments, or should they be written on a whiteboard/paper/something else where they remain in the focus of the developers?
I tend to use // todo comments for things that have to happen, but that I can't do immediately. I also make sure that I chase up on them - I search for them (Visual Studio has a nice feature where it will list such comments for you) and ensure that things are done. But, as you say, not everyone is diligent about them, and like many comments, they tend to rot over time. I would say this is more of a personal preference - so long as you document what needs to be done and chase up on it, it doesn't matter if it is in // todo comments, post-it notes or a whiteboard (where they can also end up not being actioned).
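If your editor does not collect such markers for you the way Visual Studio's Task List does, chasing up on them can be automated with a few lines. A rough Python sketch (the file extensions and marker words are just example choices):

    import os

    MARKERS = ("TODO", "FIXME")
    EXTENSIONS = (".py", ".js", ".java", ".pl")

    def find_todos(root="."):
        for dirpath, _dirnames, filenames in os.walk(root):
            for filename in filenames:
                if not filename.endswith(EXTENSIONS):
                    continue
                path = os.path.join(dirpath, filename)
                with open(path, encoding="utf-8", errors="ignore") as handle:
                    for lineno, line in enumerate(handle, start=1):
                        if any(marker in line for marker in MARKERS):
                            print(f"{path}:{lineno}: {line.strip()}")

    if __name__ == "__main__":
        find_todos()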
{ "source": [ "https://softwareengineering.stackexchange.com/questions/125320", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/36984/" ] }
125,357
A few days ago, I was talking to a Software Engineering PhD candidate and at some point she said to me: "Keep your classes and methods as small as possible." And I wonder if this is always a good practice. I mean, for example, is it worthwhile to have a class with only 2 attributes in it? For instance, in some methods I need pairs of integers to work with. Should I write a "PairOfIntegers" class? Could this way of thinking "break" the code into too many pieces?
There is a grain of truth in that advice, but IMHO it is not very well expressed, thus easy to misunderstand and/or to take to a stupid extreme. At any rate, this should be a rule of thumb, rather than hard law. And you should always imply in such rules the clause "within reasonable limits" , or "but don't forget to use your common sense" :-) The grain of truth is that in practice, classes and methods always tend to grow by nature, not shrink . A bug fix here, a little feature extension there, handling a special case over there... and voila, your once neat and small class starts to bloat. Over time, your code almost inevitably tends to become a monstrous mess of spaghetti - unless you actively fight this tendency via refactoring . Refactoring almost always produces many smaller classes / methods from a few big ones. But of course, there is a sensible limit to miniaturizing. The point of refactoring is not to have smaller classes and methods per se, but to make your code cleaner, easier to understand and maintain . At a certain point, making your methods / classes smaller starts to decrease readability, rather than increasing it. Aim towards that optimum. It is a blurry and moving target area, so you don't need to hit it spot on. Just improve the code a little whenever you notice some problem with it .
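On the "PairOfIntegers" question specifically, a useful middle ground is to introduce a tiny type only when the pair has a meaning of its own. A hedged sketch in Python (the domain names are invented for the example): a bare pair is fine for throwaway values, while a named type earns its keep once the two numbers have roles and an invariant worth stating.

    from dataclasses import dataclass

    # Fine for a throwaway pair with no meaning beyond "two ints":
    low, high = 3, 17

    # Worth a tiny class once the pair has a name, roles and an invariant:
    @dataclass(frozen=True)
    class ScoreRange:
        low: int
        high: int

        def __post_init__(self):
            if self.low > self.high:
                raise ValueError("low must not exceed high")

        def contains(self, value: int) -> bool:
            return self.low <= value <= self.high

    print(ScoreRange(3, 17).contains(10))   # True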
{ "source": [ "https://softwareengineering.stackexchange.com/questions/125357", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/32342/" ] }
125,391
What is the best way to identify the best candidates for a new position (talking merely in terms of programming skills)? In my company we have had a lot of bad experiences with people who have good grades but do not have real programming skills. Their skills are merely those of code monkeys, without the ability to analyze the problems and find solutions. More things that I have to note: The education system in my country sucks--really sucks. The people that are good at this kind of job are good because they have a talent for it or really try to learn on their own. A university/graduate/post-grad degree doesn't necessarily mean that you know exactly how to do things. Certifications also mean nothing here, because the people in charge of the certification courses also don't have skills (or are in low-paying jobs). We really need to get good candidates who are flexible and don't think mechanically (because, in our experience, this type of person performs poorly). We are in a government institution and the candidates don't necessarily come from outside, but we have the possibility to accept or reject any candidate until we find the correct one.
Regarding candidate selection, I usually go with a three-strike plan: A regular test with FizzBuzz-like coding questions and many knowledge questions where they have to give coded examples. Depending on the position, it can cover OO principles, SQL design principles, etc. I increase the difficulty of the questions across the test to see how far they can go. The idea is not really to have all the questions answered (if they do, all the better), but rather to see if they can acknowledge when they don't know something. Trust is essential, and I don't want to have someone in my team lying to me. A review of the test with the candidate, and a discussion around the answers. Possibly an extension of the questions to reach the candidate's limits. This can be extensive, and the more extensive it is, the better. Last but not least, the code review. I ask the candidate to bring a piece of code (I generally space the previous test/discussion and this review by a few days, to let them write and polish one piece of code). Then we do a regular code review of it with two people: one person who will work directly with the candidate, and the person who reviewed the test with the candidate previously. Regarding the code review, you can read this article from JohnFX. At the end of all this, you should be able to decide if you want this candidate to be part of your team or not.
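For readers unfamiliar with the screening questions mentioned above, FizzBuzz is the canonical example: print the numbers 1 to 100, but multiples of 3 as "Fizz", multiples of 5 as "Buzz", and multiples of both as "FizzBuzz". A reference solution in Python, shown only to illustrate the expected level (any language the candidate knows is equally acceptable):

    for n in range(1, 101):
        if n % 15 == 0:
            print("FizzBuzz")
        elif n % 3 == 0:
            print("Fizz")
        elif n % 5 == 0:
            print("Buzz")
        else:
            print(n)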
{ "source": [ "https://softwareengineering.stackexchange.com/questions/125391", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/37526/" ] }
125,399
Could Design by Contract (DbC) be a way to program defensively? Is one way of programming better than the other in some cases?
Design by Contract and defensive programming are in some sense opposites of each other: in DbC, you define contracts between collaborators and you program under the assumption that the collaborators honor their contracts. In defensive programming, you program under the assumption that your collaborators violate their contracts. A real square root routine written in DbC style would state in its contract that you aren't allowed to pass in a negative number and then simply assume that it can never encounter a negative number. A real square root routine written defensively would assume that it is passed a negative number and take appropriate precautions. Note: it is of course possible that in DbC someone else will check the contract. In Eiffel, for example, the contract system would check for a negative number at runtime and throw an appropriate exception. In Spec#, the theorem prover would check for negative numbers at compile time and fail the build, if it can't prove that the routine will never get passed a negative number. The difference is that the programmer doesn't make this check.
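The square-root example can be sketched in Python to make the contrast concrete; this mimics the style of each approach rather than any particular Eiffel or Spec# mechanism:

    import math

    def sqrt_dbc(x: float) -> float:
        """Design by Contract style: the caller promises x >= 0.
        The assert documents (and, in debug runs, checks) the contract;
        the routine itself simply assumes it holds."""
        assert x >= 0, "precondition violated: x must be non-negative"
        return math.sqrt(x)

    def sqrt_defensive(x: float) -> float:
        """Defensive style: assume callers may break the rules and
        handle the bad input explicitly."""
        if x < 0:
            raise ValueError(f"cannot take the square root of negative number {x}")
        return math.sqrt(x)

    print(sqrt_dbc(9.0))         # 3.0
    print(sqrt_defensive(9.0))   # 3.0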
{ "source": [ "https://softwareengineering.stackexchange.com/questions/125399", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/43116/" ] }
125,414
Possible Duplicate: How do you deal with an information hoarder? Often, in IT teams knowledge equals power. This is fine as long as (IT) knowledge is equally accessible by all members of the team and company-specific know-how is well documented. Sometimes personnel in an IT department build up a tremendous amount of know-how without documenting it. By doing so they think that they ensure their position. Often these people like it when people specifically have to ask them how to do a job. How do you get these programmers, database administrators or other IT staff to DOCUMENT their work and make it accessible to the company they work for? EDIT: It's a relief that so many of you know this type of person, and that it is not just me bumping into them everywhere I do projects. On the other hand it makes me sad: such talented people, but in many ways behaving like children. I have seen the behavior in men and women, by the way. It is hard to pick one answer as the best and accept it. Will do after more re-reads.
In my experience, there are two types of people who keep information. I like to think of them as the knowledge gatekeepers and the information hoarders. Knowledge gatekeepers are people who have been with the company for a long while and have experience on many mission critical paths. I find that these people are super busy b/c of that experience. I find that these people need time. They don't document the information they know b/c they don't have the time to do it. They don't sit down and teach others b/c it takes them less time to do it themselves. If this is the case you need to work with them when they have the time. Make sure you write down everything and ask them to double check your understanding either via email or company wiki or something fast that they can look over and leave their feedback. If you are in a position of power, give them some time each week to start documenting things and training others. The information hoarders have a 'saving my job' type of mentality. Getting information from those people is like pulling teeth. Any information that I was able to get out of them, I documented. Any information I was able to figure out on my own, I documented. It's slow going, but in the end you are saving yourself and anyone after you from going through it again.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/125414", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/28570/" ] }
125,576
I understand that C++ is a very fast language, but isn't C just as fast, or faster in some cases? Then you might say that C++ has OOP, but the amount of OOP you need for most programming puzzles is not that big, and in my opinion C would be able to handle that. Here's why I am asking this: I am very interested in programming contests and competitions, and I am used to coding in C for those. However, I noticed that the vast majority of people use C++ (e.g., 17 out of 25 finalists in Google Code Jam 2011 used it, while no one used C), so I am wondering if I am at a disadvantage going with C. Apart from the object orientation, what makes C++ a more suitable language for programming competitions? What are the features of the language I should learn and use to perform better in the competitions? For background, I consider myself pretty proficient in C, but I am just starting to learn C++.
To start with, there will always be some problems that are better solved in one language than another. There will always be languages that solve specific problems "better" than any other language, for some definition of "better". However, a very very large number of problems have very similar needs (some I/O, some computation) and face similar requirements (reasonable reliability, reasonable performance). As you know C already, for the vast majority of problems out there, I state that C++ provides no significant downsides and a number of significant improvements. Bold? Some people seem to think so, but it's really the case. Let's start out by clearing up a few very common C++ misunderstandings: C++ is slower than C. Wrong! Many C programs are valid C++ programs as well - and such a C program should run at identical speed when compiled with either the C compiler or the C++ compiler. C++ specific features require overhead. Wrong! The so-called overhead introduced by certain C++ specific features (such as virtual function calls or exceptions) is comparable to the overhead you yourself would introduce should you implement a similar feature in C. C++ is object oriented. Wrong! The C++ language contains some language extensions that facilitate object oriented programming and generic programming. C++ does not force object oriented design anywhere - it merely allows it. C allows for object oriented programming as well, C++ only makes it simpler and less error-prone. So, if you believe me, we have established that "C++ is not significantly worse than C". Let's have a look at what makes C++ a better C: Stronger typing The type system in C++ is stronger than in C. This prevents many common programming errors - coupled with the next very important feature, the stronger type system even manages not to be an inconvenience. Parameterized types The template keyword allows the programmer to write generic (type-agnostic) implementations of algorithms. Where in C, one could write a generic list implementation with an element like: struct element_t { struct element_t *next, *prev; void *element; }; C++ allows one to write something like: template <typename T> struct element_t { element_t<T> *next, *prev; T element; }; Not only does the C++ implementation prevent common programmer errors (like putting an element of the wrong type on the list), it also allows better optimization by the compiler! For example, a generic sort implementation is available in both C and C++ - the C routine is defined as: void qsort(void *base, size_t nmemb, size_t size, int(*compar)(const void *, const void *)); whereas the C++ routine is defined as: template <class RandomAccessIterator> void sort(RandomAccessIterator first, RandomAccessIterator last); The difference being that sorting an array of integers, for example, would in the C case require a function call for every single compare, whereas the C++ implementation would allow the compiler to inline the integer comparison calls, as the actual sort routine is automatically instantiated at compile time by the compiler, with the correct types inserted in the template arguments. A bigger standard library C++ allows the full use of the C standard library. This is very important of course, as the C standard library is an invaluable resource when writing real world programs. However, C++ includes the Standard Template Library. The STL contains a number of useful templates, like the sort routine above. It includes useful common data structures such as lists, maps, sets, etc.
Like the sort routine, the other STL routines and data structures are "tailored" to the specific needs the programmer has - all the programmer has to do is fill in the types. Of course, the STL is no silver bullet - but it very often provides great help when solving general problems. How often have you implemented a list in C? How often would an RB-tree have been a better solution, if only you had had the time to do it? With the STL you do not need to make such compromises - use the tree if it's a better fit; it's as easy as using the list. Ok, so I've only been discussing the good parts. Are there any downsides? Of course there are. However, their number is shrinking day by day. Let me explain: There are no good C++ compilers It's been like this for a long time. But you must remember that the language was standardized in 1998 - it is a complex language, more complex than C. It has taken a long time for compilers to catch up to the standard. But as of this writing, there are good compilers available for the most widely used platforms out there; GCC in versions 3.X is generally very good, and it runs on GNU/Linux and most UNIX platforms. Intel has a compiler for Win32 - it is also pretty good, but unfortunately it still relies on the MS STL, which is sub-par. People don't know good C++ This is not an often-heard complaint, but it's something that I see a lot. C++ is a big and complex language - but it also used to be a language that was hyped a lot, especially back in the "OOP solves hunger, cures AIDS and cancer" days. The result seems to be that a lot of really poor C++ code, basically bad C with a few class declarations here and there, is out there and is being used as learning material. This means a lot of people who believe they know C++ actually write really crappy code. That's too bad, and it's a problem, but I think it's unfair to blame this on C++. So, the only two major problems with C++ are results of C++ being a young language. In time they will vanish. And for most problems out there, if you can get good programmers (or learn good C++ yourself), the problems are not really an issue today.
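To make the qsort/std::sort contrast above concrete, here is a minimal, illustrative sketch (not from the original answer; the array contents are arbitrary). The C version goes through a function pointer for every comparison, while the C++ version lets the compiler inline the comparison for the instantiated type.

```cpp
#include <algorithm>
#include <cstdlib>
#include <vector>

// C-style comparison callback: invoked through a function pointer on every compare.
static int compare_ints(const void *a, const void *b) {
    int lhs = *static_cast<const int *>(a);
    int rhs = *static_cast<const int *>(b);
    return (lhs > rhs) - (lhs < rhs);
}

int main() {
    int raw[] = {4, 1, 3, 2};
    std::qsort(raw, 4, sizeof(int), compare_ints);   // indirect call per comparison

    std::vector<int> v(raw, raw + 4);
    std::sort(v.begin(), v.end());                   // comparison can be inlined per instantiation
    return 0;
}
```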
{ "source": [ "https://softwareengineering.stackexchange.com/questions/125576", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/34340/" ] }
125,587
I'm not a native English speaker. In my native language I'm aware of some terms used to refer to the condition checked to stop a recursion, and to the condition checked for extreme, unlikely or super-simple cases. In English, I've encountered the terms "edge case", "corner case", "boundary case" and "base case", but I can't quite figure out the differences and which is used to refer to what; I'd love to get some summary of the differences between them. In particular, I would be very happy if someone could provide annotations for the lines in the following code sample: int transmogrify(int n) { 1. assert(n <= 1000000); 2. if (n < 0) return -1; 3. if (n == 1000000) return PRE_CALC; 4. if (n == 0) return n+1; // For stopping the recursion 5. if (n == 1251) return 3077; return transmogrify(n-1); } I think it's: Sanity check Input check Boundary case? Edge case? Corner case? Base case? Boundary case? Corner case? Edge case?
I'm not a native English speaker either. But according to Wikipedia: Edge case occurs at an extreme (maximum or minimum) operating parameter. Corner case occurs outside of normal operating parameters, specifically when multiple environmental variables or conditions are simultaneously at extreme levels, even though each parameter is within the specified range for that parameter. (The "outside normal operating parameters" obviously means something like "outside the typical combination of operating parameters", not strictly "outside allowed operating parameters". That is, you're still within the valid parameter space, but near its corner.) Boundary case occurs when one of the inputs is at or just beyond maximum or minimum limits. Base case is where recursion ends. So, the nomenclature seems to be totally confused, even though corner case seems to mean something a bit different (a combination of values) than edge and boundary cases, which are definitely synonyms. It's probably safe to say that edge, corner, and boundary cases are the same thing in common speech. Someone could mean a different thing by each of them, but there's hardly any common agreement. Your 1) and 2) are what you wrote, 3) is an edge/boundary case, 4) is a base case, and 5) is a special case.
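Applying those labels back to the code in the question, line by line (a sketch only; PRE_CALC is the asker's placeholder, given a dummy definition here so the snippet compiles):

```c
#include <assert.h>

#define PRE_CALC 0  /* stand-in for the asker's precomputed value */

int transmogrify(int n) {
    assert(n <= 1000000);               /* 1. sanity check                        */
    if (n < 0) return -1;               /* 2. input check / validation            */
    if (n == 1000000) return PRE_CALC;  /* 3. edge/boundary case (maximum value)  */
    if (n == 0) return n + 1;           /* 4. base case - stops the recursion     */
    if (n == 1251) return 3077;         /* 5. special case                        */
    return transmogrify(n - 1);
}
```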
{ "source": [ "https://softwareengineering.stackexchange.com/questions/125587", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/8331/" ] }
125,710
I have been assigned the task of implementing a Domain Specific Language for a tool that may become quite important for the company. The language is simple but not trivial; it already allows nested loops, string concatenation, etc., and it is practically certain that other constructs will be added as the project advances. I know by experience that writing a lexer/parser by hand - unless the grammar is trivial - is a time-consuming and error-prone process. So I was left with two options: a parser generator à la yacc or a combinator library like Parsec. The former was good as well, but I picked the latter for various reasons, and implemented the solution in a functional language. The result is pretty spectacular to my eyes: the code is very concise, elegant and readable/fluent. I concede it may look a bit weird if you never programmed in anything other than java/c#, but then this would be true of anything not written in java/c#. At some point however, I've been literally attacked by a co-worker. After a quick glance at my screen he declared that the code is incomprehensible and that I should not reinvent parsing but just use a stack and String.Split like everybody does. He made a lot of noise, and I could not convince him, partially because I was taken by surprise and had no clear explanation, partially because his opinion was immutable (no pun intended). I even offered to explain the language to him, but to no avail. I'm positive the discussion is going to re-surface in front of management, so I'm preparing some solid arguments. These are the first few reasons that come to my mind to avoid a String.Split-based solution: you need a lot of ifs to handle special cases and things quickly spiral out of control lots of hardcoded array indexes make maintenance painful extremely difficult to handle things like a function call as a method argument (e.g. add( (add a, b), c)) very difficult to provide meaningful error messages in case of syntax errors (very likely to happen) I'm all for simplicity, clarity and avoiding unnecessary smart-cryptic stuff, but I also believe it's a mistake to dumb down every part of the codebase so that even a burger flipper can understand it. It's the same argument I hear for not using interfaces, not adopting separation of concerns, copying-pasting code around, etc. A minimum of technical competence and willingness to learn is required to work on a software project after all. (I won't use this argument as it will probably sound offensive, and starting a war is not going to help anybody.) What are your favorite arguments against parsing the Cthulhu way?* *Of course, if you can convince me he's right I'll be perfectly happy as well.
The critical difference between the two approaches is that the one he considers to be the only correct way is imperative and yours is declarative. Your approach explicitly declares rules, i.e. the rules of the grammar are (almost) directly encoded in your code, and the parser library automatically transforms raw input into parsed output, while taking care of state and other things that are hard to handle. Your code is written within one single layer of abstraction, which coincides with the problem domain: parsing. It's reasonable to assume parsec's correctness, which means the only room for error here is that your grammar definition is wrong. But then again you have fully qualified rule objects and they are easily tested in isolation. Also it might be worth noting that mature parser libraries ship with one important feature: error reporting. Decent error recovery when parsing went wrong is not trivial. As proof, I invoke PHP's parse error, unexpected T_PAAMAYIM_NEKUDOTAYIM :D His approach manipulates strings, explicitly maintains state and lifts up the raw input manually to parsed input. You have to write everything yourself, including error reporting. And when something goes wrong, you are totally lost. The irony is that the correctness of a parser written with your approach is relatively easily proven. In his case, it is almost impossible. There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult. C. A. R. Hoare Your approach is the simpler one. All it requires is for him to broaden his horizons a bit. The result of his approach will always be convoluted, no matter how broad your horizon. To be honest, it sounds to me that the guy is just an ignorant fool who is suffering from the blub syndrome, arrogant enough to assume you are wrong and yell at you if he doesn't understand you. In the end however, the question is: who is going to have to maintain it? If it's you, then it's your call, no matter what anybody says. If it's going to be him, then there are only two possibilities: Find a way to make him understand the parser library or write an imperative parser for him. I suggest you generate it from your parser structure :D
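To make the declarative point concrete, here is a minimal, hypothetical grammar fragment in Haskell using Parsec - not the asker's actual code; the rule names and the toy call syntax (integers and nested calls like add(1,add(2,3)), no whitespace handling) are invented purely for illustration:

```haskell
import Text.Parsec
import Text.Parsec.String (Parser)

data Expr = Num Int | Call String [Expr] deriving Show

-- Each rule is a first-class value: composable, and testable in isolation.
number :: Parser Expr
number = Num . read <$> many1 digit

call :: Parser Expr
call = do
  name <- many1 letter
  args <- between (char '(') (char ')') (expr `sepBy` char ',')
  return (Call name args)

expr :: Parser Expr
expr = call <|> number

-- parse expr "" "add(1,add(2,3))"
--   ==> Right (Call "add" [Num 1, Call "add" [Num 2, Num 3]])
-- On bad input, the Left value carries a source position and a readable
-- error message - the error-reporting point made above.
```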
{ "source": [ "https://softwareengineering.stackexchange.com/questions/125710", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/43246/" ] }
125,712
C# seems to be popular these days. I heard that syntactically it is almost the same as Java. Java and C++ have existed for a longer time. For what reasons should I choose C# over Java and C++?
The question should be "Which language is better suited for modern, typical application development?". Edit: I addressed some of the comments below. A small remark: consider that when you have a lot of things natively, as idioms, it's a big difference compared to implementing or downloading and using them yourself every time. Almost everything can be implemented in any of these languages. The question is what the languages natively provide you with. So off the top of my head (some arguments apply to both languages)... C# is better than C++ in that: It has native garbage-collection. It allows you to treat class-methods' signatures as free functions (i.e. ignoring the statically typed this pointer argument), and hence create more dynamic and flexible relationships between classes. Edit: if you don't know what this means, then try assigning a member method returning void and accepting void to a void (*ptr)() variable. C# delegates carry the this pointer with them, but the user doesn't always have to care about that. They can just assign a void() method of any class to any other void() delegate. It has a huge standard library with so much useful stuff that's well-implemented and easy to use. It allows for both managed and native code blocks. Assembly versioning easily remedies DLL hell problems. You can set classes, methods and fields to be assembly-internal (which means they are accessible from anywhere within the DLL they're declared in, but not from other assemblies). C# is better than Java in that: Instead of a lot of noise (EJB, private static class implementations, etc.) you get elegant and friendly native constructs such as Properties and Events. You have real generics (not the bad casting joke that Java calls generics), and you can perform reflection on them. It supports native resource-management idioms (the using statement). Java 7 is also going to support this, but C# has had it for a way longer time. It doesn't have checked exceptions :) (debatable whether this is good or bad) It's deeply integrated with Windows, if that's what you want. It has Lambdas and LINQ, therefore supporting a small amount of functional programming. It allows for both generic covariance and contravariance explicitly. It has dynamic variables, if you want them. Better enumeration support, with the yield statement. It allows you to define new value (or non-reference) types. Edit - Addressing comments I didn't say C++ doesn't support native RAII. I said Java doesn't have it (you have to explicitly do a try/finally). C++ has auto pointers which are great for RAII, and (if you know what you're doing) can also substitute for garbage-collection. I didn't say anything about emulating free functions. But for example if you need to access a field by a this pointer, and bind the method that does it to a generic function pointer (i.e. not in the same class), then there's simply no native way to do it. In C#, you get this for free. You don't even have to know how it works. By "treating member methods as free functions" I meant that you can't, for example, natively bind a member method to a free function signature, because the member method "secretly" needs the this pointer. The using statement, obviously along with IDisposable wrappers, is a great example of RAII. See this link. Consider that you don't need RAII as much in C# as you do in C++, because you have the GC. For the specific times you do need it, you can explicitly use the using statement. Another little reminder: freeing memory is an expensive procedure.
A GC has its performance advantages in a lot of cases (especially when you have lots of memory). Memory won't get leaked, and you won't be spending a lot of time on deallocating. What's more, allocation is faster as well, since you don't allocate memory every time, only once in a while. Calling new is simply incrementing a last-object-pointer. "C# is worse in that it has garbage collection". This is indeed subjective, but as I stated at the top, for most modern, typical application development, garbage collection is one hell of an advantage. In C++, your choices are either to manually manage your memory using new and delete, which empirically always leads to errors here and there, or (with C++11) you can use smart pointers natively, but keep in mind that they add lots and lots of noise to the code. So GC still has an edge there. "Generics are way weaker than templates" - I just don't know where you got that from. Templates might have their advantages, but in my experience constraints, generic parameter type-checking, contravariance and covariance are much stronger and more elegant tools. The strength of templates is that they let you play with the language a bit, which might be cool, but also causes lots of headaches when you want to debug something. So all in all, templates have their nice features, but I find generics more practical and clean.
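A few of the C#-specific points above in code form - a minimal, illustrative sketch (the class names, the file name input.txt and all values are made up, not taken from the answer):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

class Logger
{
    public void Write(string message) { Console.WriteLine(message); }
}

class Sketch
{
    // A property instead of getter/setter boilerplate.
    public string Name { get; set; }

    static void Main()
    {
        // A member method bound to a delegate: the 'this' pointer travels with it.
        var logger = new Logger();
        Action<string> log = logger.Write;
        log("hello");

        // Lambdas and LINQ.
        var evens = new List<int> { 1, 2, 3, 4 }.Where(n => n % 2 == 0).ToList();
        log(string.Join(",", evens));

        // Deterministic cleanup with the using statement (RAII-style, via IDisposable).
        using (var reader = new StreamReader("input.txt"))
        {
            log(reader.ReadLine() ?? "");
        }

        // yield-based enumeration.
        foreach (var n in Countdown(3)) log(n.ToString());
    }

    static IEnumerable<int> Countdown(int from)
    {
        for (int i = from; i > 0; i--) yield return i;
    }
}
```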
{ "source": [ "https://softwareengineering.stackexchange.com/questions/125712", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/35740/" ] }
125,783
I have my mind set that I'm learning Java as my second language (I am a Perl developer). But just from reading a little of the online tutorials, I really can't find any similarities between the two. What will I have an easy time understanding when learning Java?
Perl is a high-level, general-purpose, multi-paradigm, interpreted, dynamic programming language. Java is a high-level, general-purpose, mostly single-paradigm, statically typed programming language. So, both are high-level: A high-level programming language is a programming language with strong abstraction from the details of the computer. and general-purpose: In computer software a general-purpose programming language (GPL) is a programming language designed to be used for writing software in a wide variety of application domains. In essence this means that everything you can do with Perl, you can also do with Java. And as @KyleHodgson mentions, both their syntaxes derive from C and C++ and the syntax for simple stuff like for loops and if statements is essentially the same - and as @DipanMehta notes, both are garbage collected. And of course both are extremely popular and have vibrant communities. But that's where the similarities stop. Perl is multi-paradigm, supporting a wide range of programming paradigms: Functional programming, Imperative programming, Class based object-oriented programming, Reflective programming, Procedural programming, and Generic programming Perl does not encourage one single paradigm; they are essentially equal and you are free to choose whichever you think best suits whatever you're building, without of course limiting you to a single paradigm - you can mix and match. On the other hand, Java is mostly a class based object oriented language. There is support for generic programming, but as a beginner you should think of Java as strictly a class based object oriented language. So Perl allows more than one way to structure your code 1, whereas Java allows just one. That's not a bad thing (or a good thing), it's just different. If you haven't written any object oriented Perl code, Java might seem a little bit alien at first. Don't be discouraged; object orientation is something that you will eventually have to learn if you are considering a career in software development, and learning Java is a good way of learning the basic concepts of object orientation - not a perfect way, but definitely a good way. And as you know, Perl is interpreted, whereas Java is... well... a different beast entirely. In Java you write your code as you would in Perl, and then you compile it. The result is not an executable, but Java bytecode. This intermediary format is executed (finally!) in the Java Virtual Machine, which is somewhat analogous to the Perl interpreter. A JVM must be installed beforehand for a Java program to run, similarly to how you need to install a Perl interpreter to execute a Perl script 2. Coming from a Perl background, the most important thing to remember is the compilation to bytecode step: Every time you make a change to a Java source file you need to re-compile it. It might sound crazy at first, but compilation has a very nice consequence: Your code is checked for a variety of errors at this stage, and the compiler refuses to finish the process if there are any, and sometimes will help you pinpoint the errors with helpful messages (There are always messages, but only sometimes they are helpful). Which brings us to the last major difference: Perl is dynamic 3: Dynamic programming language is a term used broadly in computer science to describe a class of high-level programming languages that execute at runtime many common behaviors that other languages might perform during compilation, if at all.
These behaviors could include extension of the program, by adding new code, by extending objects and definitions, or by modifying the type system, all during program execution. and dynamically typed 4: A programming language is said to be dynamically typed when the majority of its type checking is performed at run-time as opposed to at compile-time. and Java is statically typed: A programming language is said to use static typing when type checking is performed during compile-time as opposed to run-time. Which, to put it as simply as possible, means that in Java you have to declare the type of your variables and methods before using them. There are other differences, but I wouldn't want to spoil the fun of discovering them by yourself :) And, finally, there is one very important difference: Java is the sweetheart language of academia 5 and the corporate world, while you will rarely meet Perl in an academic setting anymore (where I first met her), and its career prospects are shrinking (still quite a few jobs, but nowhere near as many as Java, .Net languages or PHP). I will not comment on the reasons, I'm just stating the (sad) facts. Since you are still very young, by learning Java you will be a little bit more prepared for a Computer Science degree, if you choose to follow that path. Don't give up on Perl, of course, but do explore Java. The fact that they are more different than similar also means that you will learn quite different approaches and programming mentalities; it's a hard path but one that will ultimately make you a better programmer. 1 "Tim Toady" 2 The Perl community is actively exploring the possibility of a Perl virtual machine, via Parrot. 3 Dynamic doesn't always mean dynamically typed. 4 Perl is dynamically typed for user defined types, statically typed with respect to distinguishing arrays, hashes, scalars, and subroutines, and strongly typed via use strict, so essentially it's a variable type system language, but to keep some sense of sanity let's call it dynamically typed. 5 To the point of abuse, as Joel Spolsky writes in The Perils of JavaSchools.
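To make the static-typing and compile-step points concrete, here is a tiny sketch (the class and the strings are invented for illustration); you would compile it with javac Greeter.java and run it with java Greeter:

```java
// Greeter.java - every variable, parameter and return value carries a declared type.
public class Greeter {

    static String greet(String name, int times) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < times; i++) {
            sb.append("Hello, ").append(name).append("\n");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(greet("Perl hacker", 2));
        // greet(2, "Perl hacker");  // rejected at compile time: incompatible types
    }
}
```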
{ "source": [ "https://softwareengineering.stackexchange.com/questions/125783", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/34364/" ] }
125,789
There's an open-source project that I'm interested in and use regularly. It's licensed under the Apache License 2.0 and it has basically no activity any more. It's hosted on Google Code and I'm interested in continuing its development. I'm new to the open-source process and I'm trying to figure out the appropriate way to go about this. Can I just check it out and push it to GitHub so I can continue its development in the open there? Should I contact the project "owner" first? Also, do I leave all the author information at the top of the classes, etc. even though I'm going to be making changes... (I'm assuming the answer is yes)? Also, how do I practically adhere to the license requirement of "all modifications are clearly marked as being the work of the modifier"? Do I place a comment by every change I make? Any guidance on what's the normal course/standard here would be greatly appreciated.
Recently, I took over an open-source project. The steps that I followed are: Contact the original author Let him/her know my intentions Get acknowledged by him/her (you will either get the rights to the original repository or you will get to clone it) Retain original authorship (will be adding myself when I make further changes) By "Retain original authorship"... I mean to credit the original author above myself in all cases as it is originally his/her work.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/125789", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/24148/" ] }
125,796
I'd like to know about your experience with full-scale IDEs on Linux. I personally work mostly with vim; however, other programmers would like to see a real IDE. So I'd like to hear your personal opinion about different IDEs and a comparison between them, in the following context: C++ and maybe some C development (not Java, Python and other stuff) Server side programming (no need for GUI development) Working on Linux, not "cross-platform" development. Opinions needed regarding: How stable is it? If an IDE crashes I don't need it. Comfortable to use. Powerful for debugging. Integration with various build systems Scalability over huge projects (projects of hundreds of thousands of lines of code) I used to work with KDevelop and it was a very nice IDE, and it seems that KDevelop 4 is a huge improvement. It also seems that many rather use NetBeans and Eclipse.
Here is my personal experience with IDEs. I installed all IDEs I could find, and played with them all (that is what I would advise you to do): kdevelop I personally use it. The version I have installed crashes, but I downloaded the latest version from their site, and it works well. It is simple to configure and great to use. They support custom build systems through plug-ins. You might find some weird features (like parsing only directly included headers), but generally it works well for big projects. eclipse Super complex to configure, but it allows literally everything. If you have enough time to find a correct configuration that pleases everyone, then go for it. But trying to change anything is very annoying because it has so many options. anjuta and codeblocks I tried them briefly, and they weren't as good as the previous two. Codeblocks is good for short projects, but not for medium and big ones. netbeans Another good IDE, but since my home directory is on a network share, and the project I work on is fairly big, it was very slow. It parses all the time. qtcreator Simple to configure, but it is missing lots of options. For example, the strangest thing with it is that it cannot parse and auto-complete Qt classes. It supports custom build systems. To conclude: if you are patient enough (or if you find a good configuration), go with eclipse. It is really the best free IDE. If you want something simple to configure, go with kdevelop. Another option is to install both, and let your developers pick what suits them better.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/125796", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/35048/" ] }
125,797
Is it acceptable to have a dummy generic parameter in the parameters list in order to save method consumers from the need to specify type arguments? For example - public T Generate<T>(int paramForInstance) where T : MyClass { return MyClass.SomeFactoryMethod<T>(paramForInstance); } //calling code needs to specify type MyDerivedClass instance = Generate<MyDerivedClass>(5); public T Generate<T>(int paramForInstance, T dummy) where T : MyClass { return MyClass.SomeFactoryMethod<T>(paramForInstance); } //calling code needs to provide a "sample" to enable type inference MyDerivedClass instance = null; instance = Generate(5,instance); The assumption here is that if other junior devs on the team don't need to think about generics, they're less likely to kick up a fuss. A Little Background (feel free to skip this) A couple of weeks ago, a management-developer type in my team freaked out when he saw code (my code) that looked like this: //conversion methods, using generics private static TResult? IfNotDangerous<TResult> (SomeType firstThing, string name, Func<object, TResult?> conversion) where TResult : struct { return IfNotDangerous(firstThing, name, (any, dummy) => conversion(any), (TResult?)null); } private static TResult IfNotDangerous<TResult> (SomeType firstThing, string name, Func<object, TResult, TResult> conversion, TResult defaultValue) { if (firstThing == null) return defaultValue; if (!firstThing.Contains(name)) return defaultValue; if (firstThing[name] == DangerousValue) return defaultValue; return conversion(firstThing[name], defaultValue); } The complaint he had was, "Why are you using a struct?! That's so resources-heavy! And what does 'where' mean? Nobody on the senior development end knows what that does - I've asked them all!" After overcoming my self-pity, I found a way around that didn't use generics for this particular method. Appeasement was achieved. But I'd like to avoid trouble in the future, if possible. EDIT I appreciate all the encouragement, but the question was not "should I look elsewhere for work?" - it was "is this a usable workaround?"
Sad to see programmers that do not move along with the times. The fact is that generics have been part of C# and .NET since version 2 and LINQ since version 3. These are no longer cutting-edge techniques. Instead of "protecting" them you should be teaching them. Start a weekly tutorial (over lunch if you don't have buy-in) for all developers who are interested in staying current. The fact is that if they bring in a new senior, chances are they would also be using generics and LINQ. Keeping these "hidden" or not using them is a disservice to the other developers and the company. If you are told to not use generics and LINQ at all, I would sadly conclude it is time to look for a job where developers don't cripple themselves. Update: Since you don't seem to think the above answers the question "is this a usable workaround?" I will answer now: No, it isn't. It is an abuse of generics and has too much "magic" - it is even less understandable than the straight generic code, and if any of your colleagues have to delve into it, they will not be able to figure it out. The fact that it works and does what you want it to (hide the use of generics) is beside the point. It is opaque and even the usage is not idiomatic (giving a "prototype" to the function for it to work).
{ "source": [ "https://softwareengineering.stackexchange.com/questions/125797", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/40462/" ] }
125,822
I'm a university student and I've just started learning about design patterns, and I'm struggling to understand their purpose. I have tried researching them, but all the resources I have found seem to talk about them in an academic, not professional, way. What is their purpose, and are they important to learn?
Design patterns are great for communicating your intent very quickly - everyone knows what a Factory is. What's a really, really, really bad thing to do is to start trying to fit your code to patterns, or separate responsibilities according to patterns, or something like that. It's one thing to say "This object is a Factory" and another to say "This object should be exclusively a Factory".
{ "source": [ "https://softwareengineering.stackexchange.com/questions/125822", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/35017/" ] }
125,836
I've been looking for various licenses that I can use for an open-source project of mine, but all of the projects that I've seen, with all kinds of licenses, appear to have a giant, obnoxious (in my opinion) notice in each source file that states that the file is released under a certain license. I don't think that I've found a single open-source project that isn't public domain that doesn't have a notice like that. This just seems like a waste of time and file space. I plan on putting @license and @author tags in my projects, but I don't see why I need to list such a giant notice in each individual file if I don't want to make my code public domain. Is there any reason why I would want to include such a notice in my projects, or would simply including a notice in the README and a @license tag be good enough? Does this affect the "clearly stated" rule of most licenses, or is it just overkill so that people won't argue?
I've seen many projects which only mention the license in the README or in a LICENSE or COPYING file. Your software is automatically covered under copyright, as agreed in international law. (Unless you are working for the US government or some other organization for which copyright does not apply.) If someone uses your software then they must make sure to follow the license agreement, or follow the fair use restrictions on what they can do. Suppose that person wants to use one of the files in your code distribution, which of course requires a copy and hence copyright law applies. By default they do NOT have the right to use your software under copyright law. It's only when they know and follow the license restrictions that they are allowed to use it. So if they use a file without a software license then they are breaking copyright law. Since all the licenses say something like "The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software", they are obliged to put that license somewhere. That can be in the file itself, or, when I've used code as a library, I've put the relevant portions into their own directory and added a "README" or "LICENSE" into that subdirectory. In short, you don't need to put the license in each file. I think it's overkill. There's no extra legal protection in doing so. It does help a downstream user somewhat, but not by much. I think the tradition of lots of comment-based metadata (license, creation date of each function, changelog, etc.) is a very old one which exists because it is easy to do and which is more a talisman than useful. For example, the default Eclipse template adds what I think of as useless metadata before each function, which I think is much better captured by version control. But that practice is common in many shops.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/125836", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/36436/" ] }
125,838
A lot of bad practices were being committed at the beginning of a project, and I recognized them and fought against all of them. Since I didn't pick and choose my battles, my boss now assumes anything out of my mouth is an overcomplicated response, and spends a lot of time defending himself personally, instead of looking out for the best interest of the project. How do I push the team in the right direction, without waiting four months for everyone to come to the consensus I was trying to get in place all along, or earning a reputation on the team as a difficult know-it-all?
Change takes time. Udi Dahan has an excellent article that touches on what you are asking, and I think he does a better job with the answer than I would. Be enthusiastic, not bitter. Be prepared to carefully, cheerfully explain your position far more often than you'd like. Count your wins, and be prepared for others resisting your ideas. Always keep in mind, that other people's view points, while not your own, may in fact still be right. In time, you can achieve your goals if you are prepared to work with people. I wonder if you had a little bit of a twinge when you wrote "since I don't pick and choose my battles" ... it seems like a bit of a red flag to me. Getting a few early, easy wins can set you up not as "that jerk that thinks he's smarter than everyone else" but "that guy that had that great idea last month".
{ "source": [ "https://softwareengineering.stackexchange.com/questions/125838", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/40984/" ] }
125,857
I have a conceptual question about 'translating' between objects I have stored in Django (in Postgres) that I want to use on the front-end. So I have a user object in Python that holds basic things: an id, a username, a password. Let's say I want to output all of that information using JavaScript on the client side. Can I just give JS a Python object and it will make it into a JS object? I know JS is known for being 'fast and loose' with typing but that just seems absurd! I'm not looking for syntax, but just an overview of how this would work. Can I put Python objects in AJAX requests, then translate them client side, or is this going to be a basic break-down-things-on-server-side, ship them off, and recombine them on the client side type of thing?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/125857", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/43300/" ] }
125,903
I am from a country in which the use of the word "Engineer" or "Engineering" is strictly regulated by legislation. This means that you must hold a degree in Engineering AND have passed an exam to join the Engineers' professional association to use the title. I am a bit confused about the term "Software Engineer" as used in the industry, globally. I read the Wikipedia page on Software Engineer and the linked section of the page about the term usage, which offers an in-depth discussion. However, my question is somewhat more concrete. Knowing that I don't hold a degree in engineering (yet, I like to think I produce functional, well-designed software!), when I see an opening for a software engineer from an international company, would it be appropriate for me to apply? Should I somehow emphasise that I don't hold a degree in engineering?
You are not going to waste anyone's time. Go for it. And you don't even need to emphasize that you don't hold a degree in engineering. Your CV (Resume) will obviously state what degrees you hold, and by inference what you do not hold. Only avoid companies that specifically state that they are only looking for accredited etc etc. EDIT: The reason for this is that computer software development has been, still is, and will continue for a while to be an explosively growing, industry-led field, where 99% of "what the job is all about" is learned at the workplace, not at the University. The University is good for learning to specialize on a specific subject by means of a Master's or higher degree, and when a company is looking for a specialist they usually state this requirement. This comes from someone who holds a "Bachelor's Degree in Computer Science" and who nevertheless learned that what he is doing is in fact a science outside of the University. (Initially in highschool, when I learned what binary search is, and later at work, when I learned what OOP was. At the University they had not heard of OOP yet.)
{ "source": [ "https://softwareengineering.stackexchange.com/questions/125903", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/29074/" ] }
125,908
I am working on a very small project and my design would be using technologies such as JSP, servlets, and POJOs. I'm considering a workflow where a JSP page will receive input data, then submit it to a servlet. The servlet would then have a lot of helper POJOs in it that would do all the real work. One thing I've noticed with this approach very early on is that it grew into a huge number of if-then-else statements. Is there a better way to accomplish what I'm trying to do without having to resort to a framework?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/125908", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/39101/" ] }
125,926
I have to document my program for a school project and we have a section called "problem domain", but I have no idea what to discuss in this section. So the question is: What should be discussed in the problem domain?
I write embedded software for telecommunications equipment. My problem domain is ethernet, voice, and video protocols. In other words, all the stuff that has nothing to do with the language I'm programming in, but that I still must understand in order to write the software. If you're making a website for selling photography services, the problem domain is photography and ecommerce. If you write firmware for military aircraft, the problem domain is weapons, sensors, and control systems. Get the picture?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/125926", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/30709/" ] }
125,966
Let's say we're going from 1 to 5. The shortest route will be 1-4-3-5 (total: 60 km). We can use Dijkstra's algorithm to do that. Now the problem is, the shortest route is not always the fastest one, because of traffic jams or other factors. For example: 1-2 is known to have frequent traffic jams, so it should be avoided. Suddenly a car accident happens along 4-3, so it should be avoided too. Etc... So we can probably speed along the route 1-4-5, because there are no traffic jams/accidents, and so we will arrive at 5 faster. Well, that's the general idea, and I haven't thought about more details yet. Is there any algorithm to solve this problem?
Yes: Dijkstra. Dijkstra works just as well for this situation. You just use time rather than distance as the weight of each arc.
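A minimal sketch of that idea in Python (the road graph and the travel times in minutes are made up for illustration; a jammed or blocked road simply gets a large weight or is left out):

```python
import heapq

def fastest_route(graph, start, goal):
    """Plain Dijkstra; 'graph' maps node -> list of (neighbour, travel_time)."""
    queue = [(0, start, [start])]          # (elapsed time, node, path so far)
    best = {start: 0}
    while queue:
        time, node, path = heapq.heappop(queue)
        if node == goal:
            return time, path
        for neighbour, travel_time in graph.get(node, []):
            new_time = time + travel_time
            if new_time < best.get(neighbour, float("inf")):
                best[neighbour] = new_time
                heapq.heappush(queue, (new_time, neighbour, path + [neighbour]))
    return None

# Hypothetical travel times in minutes; the jammed 1-2 road just gets a big weight.
roads = {
    1: [(2, 90), (4, 20)],
    2: [(5, 15)],
    3: [(5, 10)],
    4: [(3, 45), (5, 25)],   # 4-3 slowed down by the accident
}
print(fastest_route(roads, 1, 5))   # -> (45, [1, 4, 5])
```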
{ "source": [ "https://softwareengineering.stackexchange.com/questions/125966", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/43337/" ] }
126,095
I'm a student, fresh into programming and loving it, from Java to C++ and down to C. I moved backwards to the bare bones and thought to go further down to Assembly. But, to my surprise, a lot of people said it's not as fast as C and there is no use for it. They suggested learning either how to program a kernel or how to write a C compiler. My dream is to learn to program in binary (machine code) or maybe program bare metal (program a micro-controller physically) or write a BIOS or boot loaders or something of that nature. The only possible thing I heard after so much research is that a hex editor is the closest thing to machine language I could find in this day and age. Are there other things I'm unaware of? Are there any resources to learn to program in machine code? Preferably on an 8-bit micro-controller/microprocessor. This question is similar to mine, but I'm interested in practical learning first and then understanding the theory.
People don't program in machine code (unless they are masochistic). They use (or develop) tools to generate machine code (a compiler or assembler, including cross-development tools), or perhaps libraries generating machine code (LLVM, libjit, GNU lightning, ...). So resources about machine code generation, compilation, optimizers, and micro-architectures are also relevant. And very often, a good optimizing compiler generates better machine code than you could write. You probably won't be able to write 200 lines of assembler code better than a good optimizer. If you want to understand machine code, learn assembly first. It is very close to machine code. Use it wisely, only for things you cannot code in C (or in some higher-level language, like OCaml, Haskell, Common Lisp, Scala). A good way is often to use asm instructions (notably GCC's extended assembly feature) inside a C function. Reading the assembly code (generated by gcc -S -O2 -fverbose-asm) can also be helpful. The Linux Assembly HowTo is a good thing to read. Current processors' instruction set architectures (i.e. the set of instructions understood by the chip) are quite complex. Common ones are x86 (a typical PC in 32-bit mode), x86-64 (a desktop PC in 64-bit mode), ARM (smartphones, ...), PowerPC etc. They are all quite complex (because of historical and economic reasons). Perhaps learning first a hypothetical instruction set like e.g. Knuth's MMIX is simpler.
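As an illustration of the "asm inside a C function" suggestion, here is a hedged sketch for x86/x86-64 with GCC - purely didactic, since the compiler would emit the same or better code for a plain a + b:

```c
#include <stdio.h>

/* GCC extended asm: one add instruction, with register constraints,
   wrapped in an ordinary C function. */
static int add_asm(int a, int b)
{
    int result = a;
    __asm__ ("addl %1, %0"      /* AT&T syntax: result += b */
             : "+r" (result)    /* "+r": read-write operand in any register */
             : "r" (b));        /* "r" : input operand in any register */
    return result;
}

int main(void)
{
    printf("%d\n", add_asm(40, 2));   /* prints 42 */
    return 0;
}
```

Compiling this with gcc -O2 and then looking at the gcc -S -O2 -fverbose-asm output for the same file is a good way to compare your hand-written instruction against what the optimizer produces on its own.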
{ "source": [ "https://softwareengineering.stackexchange.com/questions/126095", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/43388/" ] }
126,136
I got through the first phone interview at Amazon last week, and they asked some basic technical questions during that interview. Today, I had my second phone interview and I prepared myself well, expecting another technical interview, but it was not technical at all. After we introduced ourselves, he asked questions like: Why do you like to do programming? What do you not like about programming? What do you expect from a new job? In the last project you worked on, how did you make sure the project was implemented to the client's requirements? Was there any project where, during development, you felt the design was bad, and if so, what did you do to fix the issue? As an engineer, how would you keep up with new technologies? Additionally, when I explained to him about a recent project I worked on, he asked me why we decided to make a specific design decision and whether it was my decision. I was wondering why he asked me these kinds of questions; since English is not my native language, I assume that one of the reasons was to test my knowledge of the language. Most of the answers I gave him were very short, and since I felt like he was expecting more, I tried to come up with things to say and ended up just blabbering. Overall I think the interview went really badly because I couldn't clearly convey my points to him. Why was I asked these types of questions, and what kind of answers do interviewers expect?
These are open-ended questions. They are tailored to see how easily you can describe your views on your practice. The main objective of those questions is to make you talk, not to test your English skills (even if communication skills can be tested this way), but to see if you are passionate (Why do you like to do programming? Details about your latest project?) about what you are doing, and if you feel invested in your practice. They are also asked to see if you can take a step back and judge yourself in your practice, and that you know your weak points (What do you not like about programming?). There are also some questions that can be considered BS-detecting questions (Details about your latest project?). That's because the last thing someone wants in a team is someone lying, so you have to get into the details of what you claim you've done. Then, there are also the questions about your evolution as a competent programmer (How do you keep up with new technologies?), and your capacity to constantly evolve without being constantly asked to. Overall, those questions are generally asked to make a connection, and see if you are a good fit for the company and its culture. It's totally subjective. The goal is to see if communication is easy, and ideas can be easily shared. If you feel that you did badly because that connection didn't happen, well, maybe it's better that you move on to the next company.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/126136", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/43401/" ] }
126,175
I've been running a company for nearly 10 years, and all this time it's been just myself, another programmer (who's a great friend and cofounder), and a salesman (who's also a friend). Together, we've managed to do decent business and we've all managed to make a living, but we've been trying to expand for a long time now. Unfortunately, there are a few problems: The technology we use is not obsolete, but it's also not as popular as other web development options like PHP We work in a competitive market, competing against multi-million dollar companies We can't afford to pay new programmers or salesmen very well. We make enough money for ourselves, but unless we got a significant number of new customers we wouldn't be able to pay much. Because we can't pay much, use a slightly rarer technology, and compete against large companies, we find it difficult to find new programmers or salesmen. We desperately need to expand, but when we try to get more customers, we can't support them with so few people (or their demands grow outside of our range of expertise), and when we try to hire new programmers and salesmen, we usually don't get high-quality people and they usually don't provide a major benefit for our company. Does anyone have some suggestions or tips on how we could expand?
I'll start with the hard truth: If your business model only works as long as you can get an expensive resource (developer talent) for a price lower than the market price, then you don't have a business model. The fact that you're competing against larger companies isn't an excuse. In the development field, larger organizations typically have higher costs per "development unit" than smaller ones (diseconomy of scale). So you should be able to offer your programmers a higher salary than those larger companies, where every developer has to "pull" for one or two managers, secretaries, HR people and the like. That said, I think the best thing you can do in the short run is to hire programmers with little or no experience. Think of a high-school graduate who likes to play around with Python in his free time. The implicit deal would be: they work for a low salary and in turn you teach them professional programming, good practices, how to deal with customers and so on.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/126175", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/39255/" ] }
126,208
I am not sure whether to specify my datatype as datetime or timestamp; I think I will need both of them, but for different events. My website sells products and services worldwide and also has an account system for users to log in. Please clarify the following: The current date and time a customer buys products from my website: datetime? The delivery date and time based on their location and the date/time purchased, calculated by our system: datetime? The last time they logged in to their account: timestamp? What is the code in PHP to get the purchase date (the current date) and then store it in the database? Please elaborate on how to use each of them; after reading plenty of explanations on the internet, I still don't quite get it.
MySQL timestamps: Are stored in UTC They are converted to UTC on storage and converted back to your time zone on retrieval. If you change time zone settings, the retrieved values also change. Can be automatically initialised and updated You can set their default value and/or auto update value to CURRENT_TIMESTAMP Have a range of 1970-01-01 00:00:01 UTC to 2038-01-19 03:14:07 UTC Whereas MySQL datetime: What you store is what you get™. Have a range of 1000-01-01 00:00:00 to 9999-12-31 23:59:59 Values outside the range may work - but only values within the range are guaranteed to work. You can store dates where day or month is zero. This is the MySQL way of storing birthdays! You can't do that with a TIMESTAMP, with the only exception being the zero value of 0000-00-00. You can store invalid dates, if you need them, in ALLOW_INVALID_DATES mode. You can set the default value to NOW() in some instances, but the preferable and more natural way to get automatically initialised and updated dates is TIMESTAMP. And of course there are also DATE and TIME, which work just like DATETIME, but of course DATE doesn't care for time and TIME doesn't care for date. All four data types work perfectly with the wide array of date and time functions, but when you are mixing data types you should be aware of conversion effects. Now, to your question: You should use DATETIME everywhere. Not for technical reasons, but since you are still unclear on how MySQL works, DATETIME is the simpler choice. This will mean that you will have to calculate the current timestamp in PHP before storing; that's as easy as: $mysqldate = date("Y-m-d H:i:s"); PHP's date function works like: string date ( string $format [, int $timestamp = time() ] ) The "Y-m-d H:i:s" format is the one compatible with MySQL, and by leaving the second parameter empty, date() calls time() to get the current UNIX timestamp. Using a TIMESTAMP instead of a DATETIME has the added value of MySQL taking over the responsibility of deciding on the current timestamp, and you can skip the field when you are inserting/updating. But since you are already sending a query, and the code to get the current timestamp in PHP is minimal, you can safely go with DATETIME for everything. As for the actual code to store the PHP timestamp into the database, you should look at PHP Data Objects, and if you are still unclear, ask on StackOverflow instead. But there are almost 1.5k related questions already; make sure you go through them before asking. Just a hint: prepared statements are how the cool kids do it.
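A hedged sketch of the "DATETIME everywhere, set it from PHP, use PDO prepared statements" advice - the table, column names, credentials and values below are invented for illustration:

```sql
-- Hypothetical table: plain DATETIME columns, filled in by the application.
CREATE TABLE purchases (
    id           INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    customer_id  INT UNSIGNED NOT NULL,
    purchased_at DATETIME NOT NULL,
    deliver_at   DATETIME NULL
);
```

```php
<?php
// Hypothetical connection details and values - adjust to your own schema.
$pdo = new PDO('mysql:host=localhost;dbname=shop;charset=utf8', 'user', 'password');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$customerId  = 42;                       // made-up customer id
$purchasedAt = date('Y-m-d H:i:s');      // current date/time, MySQL-compatible format
$deliverAt   = '2012-01-05 12:00:00';    // calculated by your own delivery logic

$stmt = $pdo->prepare(
    'INSERT INTO purchases (customer_id, purchased_at, deliver_at) VALUES (?, ?, ?)'
);
$stmt->execute(array($customerId, $purchasedAt, $deliverAt));
```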
{ "source": [ "https://softwareengineering.stackexchange.com/questions/126208", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/39652/" ] }
126,513
We got a request from one of our customers, and since I've never encountered such a request before, I don't even know where to start looking. Our customer is a network of colleges, and we're building them a site. On that site, among other things, will be a form which potential applicants can fill out in order to get more info about the studies in one of the colleges. After a user fills out this form - including giving their email and/or phone number - the relevant college then contacts them with the relevant info. Now, for legal reasons, the client is asking that this form also have a checkbox that the potential applicant checks to indicate that they agree to get promotional material from these colleges. That's of course no problem. But here comes the weird part of the request: The details of each filled-out form have to be saved in a reliable manner. Apparently, saving the form values in a database - a column indicating if the user agreed or didn't agree - isn't enough, because a DB could be changed after the user has sent the form. Our client claims that other college portals create a screenshot of the filled-out form, and save it somewhere in a dedicated folder, in a manner that is easily found, such as giving the file a name that includes the user's name, and the date and time. My question is as follows: have you heard of using screenshots as a method of proving a user has actually filled out a form? Are there other methods that are considered reliable?
I have never heard of something like that, and it would be ridiculous, because a fake screenshot can be produced just as easily as a fake value in a database. EDIT Besides, I mean, WTF? since you cannot get a screenshot of someone's screen over the web, you will obviously have to reconstruct the page on the server and take a screenshot of that, and then who's to say you did not doctor it?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/126513", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2964/" ] }
126,520
List.ForEach(Console.WriteLine); List.ForEach(s => Console.WriteLine(s)); To me, the difference is purely cosmetic, but are there any subtle reasons why one might be preferred over the other?
Looking at the compiled code through ILSpy, there actually is a difference in the two references. For a simplistic program like this: namespace ScratchLambda { using System; using System.Collections.Generic; using System.Linq; using System.Text; internal class Program { private static void Main(string[] args) { var list = Enumerable.Range(1, 10).ToList(); ExplicitLambda(list); ImplicitLambda(list); } private static void ImplicitLambda(List<int> list) { list.ForEach(Console.WriteLine); } private static void ExplicitLambda(List<int> list) { list.ForEach(s => Console.WriteLine(s)); } } } ILSpy decompiles it as: using System; using System.Collections.Generic; using System.Linq; namespace ScratchLambda { internal class Program { private static void Main(string[] args) { List<int> list = Enumerable.Range(1, 10).ToList<int>(); Program.ExplicitLambda(list); Program.ImplicitLambda(list); } private static void ImplicitLambda(List<int> list) { list.ForEach(new Action<int>(Console.WriteLine)); } private static void ExplicitLambda(List<int> list) { list.ForEach(delegate(int s) { Console.WriteLine(s); } ); } } } If you look at the IL call stack for both, the Explicit implementation has a lot more calls (and creates a generated method): .method private hidebysig static void ExplicitLambda ( class [mscorlib]System.Collections.Generic.List`1<int32> list ) cil managed { // Method begins at RVA 0x2093 // Code size 36 (0x24) .maxstack 8 IL_0000: ldarg.0 IL_0001: ldsfld class [mscorlib]System.Action`1<int32> ScratchLambda.Program::'CS$<>9__CachedAnonymousMethodDelegate1' IL_0006: brtrue.s IL_0019 IL_0008: ldnull IL_0009: ldftn void ScratchLambda.Program::'<ExplicitLambda>b__0'(int32) IL_000f: newobj instance void class [mscorlib]System.Action`1<int32>::.ctor(object, native int) IL_0014: stsfld class [mscorlib]System.Action`1<int32> ScratchLambda.Program::'CS$<>9__CachedAnonymousMethodDelegate1' IL_0019: ldsfld class [mscorlib]System.Action`1<int32> ScratchLambda.Program::'CS$<>9__CachedAnonymousMethodDelegate1' IL_001e: callvirt instance void class [mscorlib]System.Collections.Generic.List`1<int32>::ForEach(class [mscorlib]System.Action`1<!0>) IL_0023: ret } // end of method Program::ExplicitLambda .method private hidebysig static void '<ExplicitLambda>b__0' ( int32 s ) cil managed { .custom instance void [mscorlib]System.Runtime.CompilerServices.CompilerGeneratedAttribute::.ctor() = ( 01 00 00 00 ) // Method begins at RVA 0x208b // Code size 7 (0x7) .maxstack 8 IL_0000: ldarg.0 IL_0001: call void [mscorlib]System.Console::WriteLine(int32) IL_0006: ret } // end of method Program::'<ExplicitLambda>b__0' while the Implicit implementation is more concise: .method private hidebysig static void ImplicitLambda ( class [mscorlib]System.Collections.Generic.List`1<int32> list ) cil managed { // Method begins at RVA 0x2077 // Code size 19 (0x13) .maxstack 8 IL_0000: ldarg.0 IL_0001: ldnull IL_0002: ldftn void [mscorlib]System.Console::WriteLine(int32) IL_0008: newobj instance void class [mscorlib]System.Action`1<int32>::.ctor(object, native int) IL_000d: callvirt instance void class [mscorlib]System.Collections.Generic.List`1<int32>::ForEach(class [mscorlib]System.Action`1<!0>) IL_0012: ret } // end of method Program::ImplicitLambda
{ "source": [ "https://softwareengineering.stackexchange.com/questions/126520", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/4285/" ] }
126,545
I am quite a beginner in code testing, and was an assert whore before. One thing worrying me about unit testing is that it often requires you to make public (or at least internal ) fields that would otherwise have been private, to un- readonly them, to make private methods protected virtual instead, etc... I recently discovered that you can avoid this by using things like the PrivateObject class to access anything in an object via reflection. But this makes your tests less maintainable (things will fail at execution rather than at compile time, they'll be broken by a simple rename, they're harder to debug...). What is your opinion on this? What are the best practices in unit testing concerning access restriction? Edit: consider for instance that you have a class with a cache in a file on disk, and in your tests you want to write to memory instead.
You should never-ever have to make public (or at least internal ) fields that would have been private otherwise, to un- readonly them, make private methods protected virtual instead Making a private member non-private turns the object into a Leaky Abstraction ( wikipedia ) which is the cause of much weeping and wailing and gnashing of teeth . ( wiktionary ) Un-readonlying a field can make an immutable object mutable, which is nothing short of a disaster. Your test should consider your object-under-test as a Black Box ( Wikipedia ), meaning that it should only be concerned with the public interface of the object-under-test, not with details of its implementation. If an object cannot be sufficiently tested via its public interface, then you need to find ways to provide formal and useful extensions to its interface that would facilitate testing. For example, a registration system which only exposes a pair of Register() / Deregister() methods is what I call a Black-Hole Interface because information can only enter it but never leave. To fix this: The solution is most definitely NOT to expose its internal collection of registrants, so that the test can go peeking into that collection. The solution is to add an IsRegistered() method, even it will never be invoked by the application at hand; still, it is a formal and useful extension to the interface, it prevents the interface from being a black-hole interface, and incidentally, it accommodates Black-Box Testing. The important thing is that changing the implementation of the object should not require you to change the unit test. In principle you should be able to completely rewrite the module under test, and reuse the existing test to make sure that the new version works exactly as the old version. This is only possible if the test is strictly a black-box test. Amendment 2022-11-02 for further reading: michael.gr - White-Box vs. Black-Box Testing michael.gr - Incremental Integration Testing
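To make the registration example above concrete, here is a minimal C# sketch; the class, its members, and the MSTest attributes are illustrative assumptions, not code taken from the question:

    using System.Collections.Generic;
    using Microsoft.VisualStudio.TestTools.UnitTesting; // assuming MSTest, as in the question's context

    public class RegistrationSystem
    {
        // Internal state stays private; no test ever needs to peek at this collection.
        private readonly HashSet<string> registrants = new HashSet<string>();

        public void Register(string id) { registrants.Add(id); }
        public void Deregister(string id) { registrants.Remove(id); }

        // A formal, useful extension of the interface that also enables black-box testing.
        public bool IsRegistered(string id) { return registrants.Contains(id); }
    }

    [TestClass]
    public class RegistrationSystemTests
    {
        [TestMethod]
        public void RegisterThenDeregister_IsObservableThroughPublicInterfaceOnly()
        {
            var system = new RegistrationSystem();
            system.Register("42");
            Assert.IsTrue(system.IsRegistered("42"));
            system.Deregister("42");
            Assert.IsFalse(system.IsRegistered("42"));
        }
    }

Note that the test exercises only Register(), Deregister(), and IsRegistered(); the internal HashSet could be swapped for a database table without touching the test.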
{ "source": [ "https://softwareengineering.stackexchange.com/questions/126545", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/43136/" ] }
126,550
As far as I know, the terminology comes from how the inheritance hierarchy is traditionally displayed, with the extending types at the bottom and the parent types at the top. This is a bit arbitrary: draw the same hierarchy sideways and you might as well call the conversions leftcasting and rightcasting. (The original question illustrated this with a diagram of a hierarchy laid out horizontally.) I am not looking for opinions on why the terminology is as it is, though they are more than welcome as comments. I am looking for references on who first introduced upcasting and downcasting, and why they decided on those names.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/126550", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/15464/" ] }
126,601
Assuming a comment won't fit (or can't go) on the line it applies to, should one write the comment before the code or after? Well, wherever future readers will best understand the comment's scope. In other words, wherever most programmers/scripters put such comments. So where do most programmers/scripters put a comment: before or after its code? If your answer applies only to specific languages, please indicate which. And if you can cite an accepted spec or guide that supports your answer, so much the better.
I prefer comments to be above the code they refer to. It makes more sense to tell the person reading the code what is coming up than to make them look back at earlier code to explain that some messy lines fixed a tricky bug and shouldn't be touched.
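As a tiny, made-up C# illustration of the difference (the vendor quirk here is hypothetical):

    using System;

    static class Reports
    {
        // Comment above: the reader learns the intent before reading the code.
        // Hypothetical vendor API quirk: timestamps arrive in local time, so normalize
        // to UTC before comparing, otherwise later filtering silently drops rows.
        public static DateTime NormalizeCutoff(DateTime vendorTimestamp)
        {
            return vendorTimestamp.ToUniversalTime();
        }
    }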
{ "source": [ "https://softwareengineering.stackexchange.com/questions/126601", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/30490/" ] }
126,652
Should the expected results of a unit test be hardcoded, or can they be dependant on initialised variables? Do hardcoded or calculated results increase the risk of introducing errors in the unit test? Are there other factors I haven't considered? For instance, which of these two is a more reliable format? [TestMethod] public void GetPath_Hardcoded() { MyClass target = new MyClass("fields", "that later", "determine", "a folder"); string expected = "C:\\Output Folder\\fields\\that later\\determine\\a folder"; string actual = target.GetPath(); Assert.AreEqual(expected, actual, "GetPath should return a full directory path based on its fields."); } [TestMethod] public void GetPath_Softcoded() { MyClass target = new MyClass("fields", "that later", "determine", "a folder"); string expected = "C:\\Output Folder\\" + string.Join("\\", target.Field1, target.Field2, target.Field3, target.Field4); string actual = target.GetPath(); Assert.AreEqual(expected, actual, "GetPath should return a full directory path based on its fields."); } EDIT 1: In response to DXM's answer, is option 3 a preferred solution? [TestMethod] public void GetPath_Option3() { string field1 = "fields"; string field2 = "that later"; string field3 = "determine"; string field4 = "a folder"; MyClass target = new MyClass(field1, field2, field3, field4); string expected = "C:\\Output Folder\\" + string.Join("\\", field1, field2, field3, field4); string actual = target.GetPath(); Assert.AreEqual(expected, actual, "GetPath should return a full directory path based on its fields."); }
I think a calculated expected value results in more robust and flexible test cases. Also, by using good variable names in the expression that calculates the expected result, it is much clearer where the expected result came from in the first place. Having said that, in your specific example I would NOT trust the "Softcoded" method, because it uses your SUT (system under test) as the input for your calculations. If there's a bug in MyClass where fields are not properly stored, your test will actually pass, because your expected-value calculation will use the wrong string just like target.GetPath() does. My suggestion would be to calculate the expected value where it makes sense, but make sure that the calculation doesn't depend on any code from the SUT itself. In response to the OP's update: yes, based on my knowledge but somewhat limited experience in doing TDD, I would choose option #3.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/126652", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/33490/" ] }
126,671
So many times on this site I see people trying to do things like this: <script type="text/javascript"> $(document).ready(function(){ $('<?php echo $divID ?>').click(function(){ alert('do something'); }); }); </script> I don't think that this is some sort of pattern that people naturally fall into. There must be some tutorial or learning material out there showing this, otherwise we wouldn't see it so much. What I'm asking is, am I making too big a deal of this, or is this a really bad practice? EDIT: I was speaking to a friend of mine about this, who often puts Ruby in his JavaScript, and he brought up this point. Is it OK to dynamically place application-wide constants in your JavaScript so you don't have to edit two files? For example... MYAPP.constants = <?php echo json_encode($constants) ?>; Also, is it OK to directly encode data you plan to use in a library, e.g. ChartLibrary.datapoints = <?php echo json_encode($chartData) ?>; or should we make an AJAX call every time?
Typically, it is bad practice to use language X to generate code in language Y. Try decoupling the two languages by making data their only interface -- don't mingle the code. In your example, you could improve the code by using PHP to populate a cfg structure that's available to JavaScript: <script type="text/javascript"> var cfg = { theId: "<?php echo $divID ?>", ... }; $(document).ready(function(){ $("#" + cfg.theId).click(function(){ alert('do something'); }); }); </script> This way, PHP only cares about populating the data structure and JavaScript only cares about consuming it. This decoupling also paves the way to loading the data asynchronously (as JSON) in the future. Update: To answer the additional questions you asked with your update: yes, it would be good practice to apply the DRY principle and let PHP and JavaScript share the same configuration object: <script type="text/javascript"> var cfg = <?php echo json_encode($cfg) ?>; ... There is no harm in inserting the JSON representation of your configuration directly into your page like this. You don't necessarily have to fetch it via XHR.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/126671", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/28607/" ] }
126,721
My coworkers tell me there should be as little logic as possible in getters and setters. Yet, I am convinced that a lot of stuff can be hidden in getters and setters to shield users/programmers from implementation details. An example of what I do: public List<Stuff> getStuff() { if (stuff == null || cacheInvalid()) { stuff = getStuffFromDatabase(); } return stuff; } An example of how work tells me to do things (they quote 'Clean Code' from Uncle Bob): public List<Stuff> getStuff() { return stuff; } public void loadStuff() { stuff = getStuffFromDatabase(); } How much logic is appropriate in a setter/getter? What's the use of empty getters and setters except a violation of data hiding?
The way work tells you to do things is lame. As a rule of thumb, the way I do things is as follows: if getting the stuff is computationally cheap (or if it is most likely to be found in the cache already), then your style of getStuff() is fine. If getting the stuff is known to be computationally expensive, so expensive that advertising its expensiveness at the interface is necessary, then I would not call it getStuff(); I would call it calculateStuff() or something like that, to indicate that there will be some work to do. In either case, the way work tells you to do things is lame, because getStuff() will blow up if loadStuff() has not been called in advance, so they essentially want you to complicate your interface by introducing order-of-operations complexity to it. Order-of-operations is pretty much the worst kind of complexity that I can think of.
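A short C# sketch of that naming rule (the original question is in Java, and the cache and database calls below are placeholders):

    using System.Collections.Generic;

    public class Stuff { }

    public class StuffProvider
    {
        private List<Stuff> stuff;

        // Cheap, or usually served from the cache: the plain "get" name is fine.
        public List<Stuff> GetStuff()
        {
            if (stuff == null || CacheInvalid())
                stuff = LoadStuffFromDatabase();
            return stuff;
        }

        // Known to be expensive every time: advertise that in the name instead.
        public List<Stuff> CalculateStuff()
        {
            return LoadStuffFromDatabase();
        }

        private bool CacheInvalid() { return false; }                             // placeholder
        private List<Stuff> LoadStuffFromDatabase() { return new List<Stuff>(); } // placeholder
    }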
{ "source": [ "https://softwareengineering.stackexchange.com/questions/126721", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/36776/" ] }
126,807
I've dipped my toes into C++ programming, but I haven't actually delved into it. I want to know if I actually need to learn it (or any other language) before I go to college for Computer Engineering, or am I just going to learn it at school anyway?
Yes, for several reasons. The sooner you start getting experience with the field, the sooner you'll be able to figure out if this is the field you should be majoring in. If you don't like programming, then Computer Engineering is probably not a good field to major in. Even if you are mostly interested in designing CPUs, you are going to be doing a LOT of programming. Most folks find that the level and amount of work expected from them in college greatly exceeds what they are used to in secondary school. Unless you are exceptionally talented, you are going to find yourself hammered with work. Do yourself a favor: find out which language is used for teaching the first year course and start learning it now. Worst case, you'll be a little bored in the class, but you'll be able to get the work done faster and use the time saved for your other classes. Many classes are graded on a curve. The downside of this is that you will be competing with your fellow classmates for grades. Many of your fellow computer engineering students will have already done a lot of programming, so you may already be behind the curve. This is a good time to start catching up. The only way to get good at programming is to do a lot of it. The more time you spend programming in the next few years, the better you will be at it. The more experience you have, the better the chance you have at landing internships and jobs.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/126807", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/43672/" ] }
126,846
I'm doing the same. When there is something "to-do" in my code, I write //TODO ... . But I'm curious to know when this started and if there is a reason for writing "to-dos" in all capital letters?
Also, it's great to have a way to easily search for code sections you glossed over and want to get back to. You can do a case-sensitive search for "TODO" to immediately find what you skipped before. "todo" (lower-case) could potentially be part of a larger word/function/variable, but "TODO" (upper-case) is probably not going to be.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/126846", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/24256/" ] }
126,959
Periodically, I wonder about this: wouldn't the short-circuiting OR always return the same value that the non-short-circuiting OR operator would? I expect that the short-circuiting OR would always evaluate at least as quickly. So, was the non-short-circuiting OR operator included in the C# language merely for consistency? What have I missed?
Both operators were meant for different things, a legacy of C, which didn't have a boolean type. The short-circuiting version || only works with booleans, while the non-short-circuiting version | also works with integral types, performing a bitwise OR. For booleans it happens to behave as a non-short-circuiting logical OR, which makes sense if you think of a boolean as a single bit holding 0 or 1. http://en.wikibooks.org/wiki/C_Sharp_Programming/Operators#Logical
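A small C# example of where the two behave differently in practice (the names are illustrative):

    using System;

    class OrDemo
    {
        static bool Check(string label, bool value)
        {
            Console.WriteLine("evaluating " + label);
            return value;
        }

        static void Main()
        {
            // || short-circuits: when the left operand is true, the right one is never evaluated.
            bool a = Check("left", true) || Check("right", true);  // prints "evaluating left" only

            // | always evaluates both operands...
            bool b = Check("left", true) | Check("right", true);   // prints both lines

            // ...and also works on integral types as a bitwise OR.
            int flags = 0x01 | 0x04;                               // 0x05

            Console.WriteLine(a + " " + b + " " + flags);
        }
    }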
{ "source": [ "https://softwareengineering.stackexchange.com/questions/126959", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/17438/" ] }
127,023
I was reading another thread where a guy asked about C++ books for beginners, and one of the programmers answering wrote this: Some warnings: avoid all books that present a "hello world" starting with #include <iostream.h> I opened my C++ book and, sure enough, it included the iostream header like the example above. Why is that bad? What other pointers should I keep in mind while learning C++? Background: I am proficient with C and I'll start learning C++ next semester.
The header iostream.h is a non-standard header and does not exist on all platforms. As a matter of fact it does not exist on my system (using g++ and the GNU libstdc++). So any code using it would simply not compile on my system. The iostream.h header used to be common before C++ was first standardized in 1998. But since the 98 standard used <iostream> instead of <iostream.h> , the latter has fallen out of favor (being non-standard and all) and is no longer supported on all platforms. Code that uses it should be considered non-standard legacy code and is not portable. Books that teach it should be considered outdated and avoided.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/127023", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/34340/" ] }
127,106
I'm going to be developing an Intranet site for my specific plant, and our company standard for web development is IIS + ASP.Net + VB.Net + Microsoft SQL Server (note that we have about 10+ plants). The Intranet site will only be used by my plant, and I'm the only one who will support it. I'm much more proficient with a LAMP setup, and I could do development and problem solving much more rapidly with PHP than I could with ASP.Net. Even though the company "standard" is ASP.Net/VB.Net, most of what the company does as a whole is to purchase third-party software (which is usually Java based), and very, and I do mean very, few people in the company even know VB6, let alone ASP.Net/VB.Net. That being said, is it better to violate the company standard and go with the setup that I can support better, or is it better to go with the setup that the company can support better if I were ever to leave, even though no one currently in the company can support their own standard anyway? Some additional factors to consider in my personal case: Again, this is only for my plant, and I'm the only one who will ever be supporting it unless I leave the company, and then my replacement would be supporting it. Not someone else already in the company. The company does very little development with their standard anyway. Hardly any of the company's existing software uses their standard. If I choose the company standard, then I have to use the Express version of Microsoft SQL and a Windows 7 OS. From my reading, the Express version is okay for business use, but the database size is limited.
Again, this is only for my plant, and I'm the only one who will ever be supporting it unless I leave the company, and then my replacement would be supporting it. Not someone else already in the company. 1 -- Don't assume you're the only one who's going to support this. You do like your sick time and vacation, right? What if you need to take extended maternity/paternity leave or something? Who's going to support your app then? Also, what if you'd like to talk to someone about technical problems specific to your company? What if you'd like to have code reviews? Or need help with a tricky bug? In all these cases it helps to be amongst others with insight into the technology you're using -- specifically how it can be applied to solve your company's specific problems. The company does very little development with their standard anyway. 2 -- Just because some document exists doesn't mean it's really the standard or means anything. It may simply mean that there's a group of politically influential individuals advocating this approach, and it may turn out there are other subgroups taking different approaches. Your problem may simply be that the "standard" has evolved to some de-facto, undocumented state that conflicts with the documented "standard". Or there may be many little unofficial groupings of people using different "standards" -- with one of those groups having managed to get their "standard" made official. You probably need to ask around to figure out what others in your group -- who might also support you and your app -- consider best practices. Ask what they feel comfortable with you using. Map out the real technology landscape of your company and figure out where the know-how is. Just like any good piece of evolving tribal knowledge, the only way you're going to know how to proceed is to talk to people. 3 -- Don't pass up professional opportunities to learn new things. You have to guard against being pigeonholed in this industry. Be nimble. You may have an opportunity to gain some breadth and learn a new way to solve a problem. Not to mention you're gaining new skills for your resume. It can mostly only help you to step outside your comfort zone and do something new. That being said, if the different/new thing is so extremely niche that you don't think you or any future employers will get any value from those skills, then maybe this isn't such a great opportunity. But getting a chance to be both an ASP.NET and LAMP expert will certainly open your eyes and can only help your career. There's nothing like a real project with a deadline to force you to really learn something. So my advice: don't go it alone. Figure out where people really stand and decide where you can best fit in. If you need to step out of your comfort zone, use this as an opportunity to grow professionally.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/127106", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/43735/" ] }
127,118
Is it a good rule of thumb to always write code with the intent of re-using it somewhere down the road? Or, depending on the size of the component you are writing, is it better practice to design it for re-use only when it makes sense with regard to the time spent on it? What is a good rule of thumb for spending extra time on analysis and design of project components that have "some probability" of being needed later down the road by other things that may or may not need this part? For example, if project X needs to do things A and B: A definitely needs to be written for re-use because it just makes sense to do so. B is very project-specific at the moment, and I can hack it all together in a couple of days to finish the project on time and give everyone kudos for being a great team, etc. Or do we say, let's spend a whole friggin' 2 weeks figuring out what project Y/Z might need this thing for and spend a load of extra time on part B, because someday we might need to use it on project Y/Z (where the savings would be realized). I'd imagine a perfect-world situation would be a nicely crafted combination of project-specific and reuse-architected components, depending on the project. However, some code shops might feel it would be a great idea to write everything with the intention of using it at some point down the road.
Obligatory XKCD #974: In my experience, trying to always make everything reusable may lead to over-engineering. However, a certain non-excessive amount of effort towards reusability is always worth putting in (when things are amenable to it), because a reusable component is a general-purpose component, which means that its interface is abstracted, which in turn means that it is easier to understand on its own -- to figure out what it does, how it works and why -- whether you look at it today or 6 months down the road, when you don't even remember ever having written it. And these benefits are of course in addition to the fact that it is reusable. So, to cut a long story short, I believe the rule of thumb should be: make it reusable if doing so will make its interface and/or its implementation easier to understand.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/127118", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/32455/" ] }
127,178
Just browsing the google maps source code. In their header, they have 2 divs with id="search" one contains the other, and also has jstrack="1" attribute. There is a form separating them like so: <div id="search" jstrack="1"> <form action="/maps" id="...rest isn't important"> ... <div id="search">... Since this is google, I'm assuming it's not a mistake. So how bad can it really be to violate this rule? As long as you are careful in your css and dom selection, why not reuse id's like classes? Does anyone do this on purpose, and if so, why?
The specification says UNIQUE: the HTML 4.01 specification says an ID must be document-wide unique. The HTML 5 specification says the same thing in other words: ID attributes must be unique amongst all the IDs in the element's tree, which is basically the document if we read the definition of it. Avoid duplication: since HTML renderers are very forgiving, they tolerate duplicate IDs, but this should be avoided if at all possible and strictly avoided when programmatically accessing elements by ID in JavaScript. I'm not sure what the getElementById function should return when several matching elements are found. Should it: return an error? return the first matching element? return the last matching element? return a set of matching elements? return nothing? Even if browsers happen to behave reliably today, nobody can guarantee that behavior in the future, since it is against the specification. That's why I recommend you never duplicate IDs within the same document.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/127178", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/42738/" ] }
127,245
Let's say that I want to show a form that represents 10 different objects in a combobox. For example, I want the user to pick one hamburger from 10 different ones that contain tomatoes. Since I want to separate UI and logic, I'd have to pass the form a string representation of the hamburgers in order to display them in the combobox. Otherwise, the UI would have to dig into the objects' fields. Then the user would pick a hamburger from the combobox and submit it back to the controller. Now the controller would have to find said hamburger again based on the string representation used by the form (maybe an ID?). Isn't that incredibly inefficient? You already had the objects you wanted to pick one from. If you submitted the whole objects to the form, and it then returned a specific object, you wouldn't have to re-find it later, since the form already returned a reference to that object. Moreover, if I'm wrong and you actually should send the whole object to the form, how can I isolate the UI from the logic?
First of all, the example you provided is not incredibly inefficient; it's only slightly inefficient; its inefficiency is below the perceptible level. But, in any case, let's move on with the question. The way I understand it, when we speak of separation of UI and logic, we mean avoidance of close coupling. Close coupling refers to the situation in which the UI knows (and invokes) the logic, and the logic knows (and invokes) the UI. To avoid close coupling one does not need to resort to abolishing coupling altogether. (That's what you seem to be aiming at by demolishing the interface between them down to a least-common-denominator string interface.) All one needs to do is to employ loose coupling. Loose coupling means that A knows B, but B does not know A. In other words, the two parties involved play distinct client and server roles, where the client knows the server, but the server does not know the client. In the case of UI and logic, the best way of arranging this, in my opinion, is by seeing the logic as a server and the UI as a client. So, the UI is built for the logic, has knowledge of the logic, and invokes the logic, while the logic knows nothing about the UI and simply responds to the requests that it receives. (And these requests happen to come from the UI, but the logic does not know that.) To put it in more practical terms, nowhere within the source code files of the logic should you find any include/import/using statements that refer to UI files, while the source code files of the UI will be full of include/import/using statements that refer to logic files. So, to come back to your case, there is absolutely nothing wrong with the fact that the UI code which populates the combo box knows about the hamburger class. There would be a problem if the hamburger class knew anything about combo boxes. Incidentally, this design allows another thing which you should expect from such a system: it should be possible to plug as many different UIs as you wish into the logic, and the whole thing should still work.
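A bare-bones C# sketch of that arrangement, with invented names (Hamburger, Menu) and a WinForms-style combo box shown only in comments:

    using System.Collections.Generic;
    using System.Linq;

    // Logic layer: knows nothing about any UI.
    public class Hamburger
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public bool HasTomatoes { get; set; }
    }

    public class Menu
    {
        private readonly List<Hamburger> burgers = new List<Hamburger>();
        public void Add(Hamburger burger) { burgers.Add(burger); }
        public IEnumerable<Hamburger> WithTomatoes()
        {
            return burgers.Where(b => b.HasTomatoes);
        }
    }

    // UI layer: knows and uses the logic, so it can bind whole objects to the
    // combo box and read the selection back without any string round-trip:
    //
    //   comboBox.DataSource    = menu.WithTomatoes().ToList();
    //   comboBox.DisplayMember = "Name";
    //   var chosen = (Hamburger)comboBox.SelectedItem;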
{ "source": [ "https://softwareengineering.stackexchange.com/questions/127245", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/43881/" ] }
127,268
I have a GitHub repository with toy programs that I write when I learn something. For example, when I read about an algorithm or data structure, I write up a quick implementation of it to make sure that it works and that I understand it. I sometimes solve algorithm and data structure puzzles, and those get pushed into the repository too. Would this repository be worth linking on my résumé, or would it actually be a detriment to my chances of getting hired?
I once heard a résumé described as "a balance sheet that shows only your assets but not your liabilities". Based on this definition, you want to include projects that will be an asset to you in getting the job while leaving out those that might be a liability. This means they should be relevant to the job you are applying for and show off your best work. Even if you consider your code "toy programs", that doesn't mean they can't be well structured. Hence, don't include throw-away code or dirty hacks; keep those in a private repository. And of course you should be able to talk about your programs, the design decisions that went into them, etc. I once had a candidate who claimed to have done an awesome project a year before, but then couldn't tell me anything about it. Not so good.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/127268", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/17887/" ] }
127,359
I'm a 16-year-old high school student with a passion for computer programming. I'm in grade 11, and I've been learning it as part of the school syllabus for about 8 months. I've gone beyond what's being taught at school and written a few (reasonably good) applications. The language that I program in is C++, on the Windows platform. Eventually I'd like to major in CS at a good college and then work as a programmer. The specific questions that I have are: What is a good place for me to get my work noticed? Are there any journals or publications specifically for young programmers? If not publications, then are there any good blogs, or is it just better to start your own? Is there anything else that would help me get noticed as a programmer? I've tried to be as objective as possible. If all this was tl;dr: what's the best way to get noticed as a young programmer? Edit: I am not looking to get hired straight out of high school. The aim is not to impress the community as a professional programmer. This is with an eye towards college applications, where having your work seen and critiqued by other people will improve your application. I am not looking to earn money from what I've written (so far).
I'd like to give you some warnings and some suggestions. Warnings: Don't over-estimate your knowledge: right now I can assume you know enough to write a simple application, and more than what is actually taught in class. But that doesn't make you a "professional programmer"; it can make you a "freelancer" at most. Don't under-estimate the value of what is taught in school. Even if something seems obvious to you, study it as well: you'll see "new aspects" (more on that below) as you proceed. Suggestions: Professional applications have a typical life-cycle of 3-5 years and require thousands (up to millions) of man-hours of work. They cannot be developed by a single developer alone. Professional programmers have to work with others. It's not just a matter of good knowledge of tools (languages, IDEs, etc.) but also of techniques, methods and idioms. While tools can be taught by formal samples and exercises, techniques and idioms can only be "described"; to "learn" them you have to experience them and share the experience with others. They are continuously invented and improved. Companies, when hiring from school, check your understanding of tools and your ability with basic techniques, but - most importantly - they test your capability to rapidly learn new things and "capture the work" as it is needed. When hiring experienced people, they look at how many things those people have done and what experience they got from those things. Moral: If you want to be valued more, learn to work with others by participating in other people's problems (like on Stack Overflow) or in open projects (like on SourceForge). Also, don't be too quick to ask for money; split your "code production" into "something to share" and "something to sell". What you can share can be used by others, but it can also attract the participation of others to expand the initial project. What you can sell is what makes your app "unique" with respect to other similar projects, letting it become a real commercial product. To share code with others, you can look at sites such as CodeProject or SourceForge. Their ratings also give an idea of how interesting what you did is to other people.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/127359", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/40992/" ] }
127,466
Commenting nowadays is easier than ever. In Java, there are some nice techniques for linking comments to classes, and Java IDEs are good at generating comment shells for you. Languages like Clojure even allow you to add a description of a function inside the function code itself, as an argument. However, we still live in an age where obsolete or poor comments are often written by good developers - I'm interested in improving the robustness and usefulness of my comments. In particular I'm interested in Java/Clojure/Python here, but answers don't need to be language-specific. Are there any emerging techniques that validate comments and automatically detect either "flimsy" comments (for example, comments with magic numbers, incomplete sentences, etc.) or incorrect comments (for example, detecting misspelled variable names or the like)? And more importantly: are there accepted "commenting policies" or strategies out there? There is plenty of advice out there on how to code - but what about "how to comment"?
Names/documentation should tell you what you are doing. Implementation should tell you how you are doing it. Comments should tell you why you do it the way you do.
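For instance, in a made-up C# snippet:

    using System;

    static class Retry
    {
        /// <summary>Returns the delay before the given retry attempt.</summary>  // what
        public static TimeSpan DelayFor(int attempt)
        {
            // Why: the (hypothetical) upstream service throttles clients that retry at a
            // fixed rate, so we back off exponentially and cap the wait at one minute.
            double seconds = Math.Min(Math.Pow(2, attempt), 60);  // how: the code itself
            return TimeSpan.FromSeconds(seconds);
        }
    }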
{ "source": [ "https://softwareengineering.stackexchange.com/questions/127466", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/39477/" ] }
127,579
I am trying to make my code more robust and I have been reading about unit testing, but I find it very hard to find an actually useful use for it. For instance, the Wikipedia example: public class TestAdder { public void testSum() { Adder adder = new AdderImpl(); assert(adder.add(1, 1) == 2); assert(adder.add(1, 2) == 3); assert(adder.add(2, 2) == 4); assert(adder.add(0, 0) == 0); assert(adder.add(-1, -2) == -3); assert(adder.add(-1, 1) == 0); assert(adder.add(1234, 988) == 2222); } } I feel that this test is totally useless, because you are required to manually compute the wanted result and test against it. I feel like a better unit test here would be assert(adder.add(a, b) == (a+b)); but then this is just coding the function itself in the test. Can someone provide me with an example where unit testing is actually useful? FYI, I am currently coding mostly "procedural" functions that take ~10 booleans and a few ints and give me an int result based on them; I feel like the only unit testing I could do would be to simply re-code the algorithm in the test. Edit: I should also have mentioned that this is while porting (possibly badly designed) Ruby code (that I didn't write).
Unit tests, if you're testing small enough units, are always asserting the blindingly obvious. The reason that add(x, y) even gets a mention of a unit test is that sometime later somebody will go into add and put in special tax-handling logic, not realizing that add is used everywhere. Unit tests are very much about the transitive principle: if A does B, and B does C, then A does C. "A does C" is a higher-level test. For instance, consider the following, completely legitimate business code: public void LoginUser (string username, string password) { var user = db.FetchUser (username); if (user.Password != password) throw new Exception ("invalid password"); var roles = db.FetchRoles (user); if (! roles.Contains ("member")) throw new Exception ("not a member"); Session["user"] = user; } At first glance this looks like an awesome method to unit test, because it has a very clear purpose. However, it does about 5 different things. Each thing has a valid and an invalid case, which makes for a huge number of unit-test permutations. Ideally this is broken down further: public void LoginUser (string username, string password) { var user = _userRepo.FetchValidUser (username, password); _rolesRepo.CheckUserForRole (user, "member"); _localStorage.StoreValue ("user", user); } Now we're down to units. One unit test does not care what _userRepo considers valid behavior for FetchValidUser, only that it's called. You can use another test to ensure exactly what constitutes a valid user. Similarly for CheckUserForRole... you've decoupled your test from knowing what the Role structure looks like. You've also decoupled your entire program from being tied strictly to Session. I imagine all the missing pieces here would look like: class UserRepository : IUserRepository { public User FetchValidUser (string username, string password) { var user = db.FetchUser (username); if (user.Password != password) throw new Exception ("invalid password"); return user; } } class RoleRepository : IRoleRepository { public void CheckUserForRole (User user, string role) { var roles = db.FetchRoles (user); if (! roles.Contains (role)) throw new Exception ("not a member"); } } class SessionStorage : ILocalStorage { public void StoreValue (string key, object value) { Session[key] = value; } } Hope this helps :)
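To show what those unit tests might look like, here is a sketch against the refactored version; the LoginService class with constructor injection, the hand-rolled fakes, and the MSTest attributes are all assumptions layered on top of the code above:

    using System.Collections.Generic;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    // Assumed shape of the class hosting the refactored LoginUser:
    //   public LoginService(IUserRepository users, IRoleRepository roles, ILocalStorage storage)

    class FakeUserRepo : IUserRepository
    {
        public User UserToReturn;
        public User FetchValidUser(string username, string password) { return UserToReturn; }
    }

    class FakeRoleRepo : IRoleRepository
    {
        public string CheckedRole;
        public void CheckUserForRole(User user, string role) { CheckedRole = role; }
    }

    class FakeStorage : ILocalStorage
    {
        public Dictionary<string, object> Stored = new Dictionary<string, object>();
        public void StoreValue(string key, object value) { Stored[key] = value; }
    }

    [TestClass]
    public class LoginServiceTests
    {
        [TestMethod]
        public void LoginUser_WithValidMember_StoresUserInSession()
        {
            var user = new User();
            var users = new FakeUserRepo { UserToReturn = user };
            var roles = new FakeRoleRepo();
            var storage = new FakeStorage();

            new LoginService(users, roles, storage).LoginUser("bob", "secret");

            Assert.AreEqual("member", roles.CheckedRole);
            Assert.AreSame(user, storage.Stored["user"]);
        }
    }

Each fake records just enough to let the test verify one responsibility at a time, which is the payoff of the decoupling described above.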
{ "source": [ "https://softwareengineering.stackexchange.com/questions/127579", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/44000/" ] }
127,624
As a serious programmer, how do you answer the question What is MVC? In my mind, MVC is sort of a nebulous topic — and because of that, if your audience is a learner, then you're free to describe it in general terms that are unlikely to be controversial. However, if you are speaking to a knowledgeable audience, especially an interviewer, I have a hard time thinking of a direction to take that doesn't risk a reaction of "well that's not right!...". We all have different real-world experience, and I haven't truly met the same MVC implementation pattern twice. Specifically, there seem to be disagreements regarding strictness, component definition, separation of parts (what piece fits where), etc. So, how should I explain MVC in a way that is correct, concise, and uncontroversial?
MVC is a software architecture - the structure of the system - that separates the domain/application/business (whatever you prefer) logic from the rest of the user interface. It does this by dividing the application into three parts: the model, the view, and the controller. The model manages the fundamental behaviors and data of the application. It can respond to requests for information, respond to instructions to change the state of its information, and even notify observers in event-driven systems when information changes. It could be backed by a database, or any number of data structures or storage systems. In short, it is the data and data management of the application. The view effectively provides the user-interface element of the application. It renders data from the model into a form that is suitable for presentation to the user. The controller receives user input and makes calls to model objects and the view to perform the appropriate actions. All in all, these three parts work together to provide the separation of concerns that defines MVC.
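As a bare-bones C# illustration of the three roles (the names are invented and no particular framework is implied):

    using System;
    using System.Collections.Generic;

    // Model: owns the data and the rules for changing it, and notifies observers.
    class TodoModel
    {
        private readonly List<string> items = new List<string>();
        public event Action Changed = delegate { };
        public IReadOnlyList<string> Items { get { return items; } }
        public void Add(string item) { items.Add(item); Changed(); }
    }

    // View: renders model data in a form suitable for the user.
    class ConsoleView
    {
        public void Render(TodoModel model)
        {
            foreach (var item in model.Items) Console.WriteLine("- " + item);
        }
    }

    // Controller: receives user input and coordinates the model and the view.
    class TodoController
    {
        private readonly TodoModel model;
        private readonly ConsoleView view;

        public TodoController(TodoModel model, ConsoleView view)
        {
            this.model = model;
            this.view = view;
            this.model.Changed += () => this.view.Render(this.model);
        }

        public void UserTyped(string input) { model.Add(input); }
    }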
{ "source": [ "https://softwareengineering.stackexchange.com/questions/127624", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2314/" ] }