source_id | question | response | metadata
---|---|---|---|
127,639 | I've noticed that many numerical sorting methods seem to sort by 1, 10, 2, 3... rather than the expected 1, 2, 3, 10... I'm having trouble coming up with a scenario where I would need the first method and, as a user, I get frustrated whenever I see it in practice. Are there legitimate use cases for the first style over the second? If so, what are they? If not, how did the first sort style ever come into being? What are the official names for each sort method? | That is lexicographic sorting, which basically means the language treats the values as strings and compares them character by character ("200" is greater than "19999" because '2' is greater than '1'). To fix this you can ensure that the values are treated as integers, or prepend '0' to the strings so they all have equal lengths (only viable when you know the maximum value) - this is why you'll see episode numberings on media files (S1E01) with a prepended 0, so a lexicographic sort doesn't mess things up and programs can simply play/display in alphabetical order - or make a custom comparator that first compares the lengths of the strings (shorter strings being smaller integers) and, when they are equal, compares them lexicographically (careful about leading '0's). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/127639",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/73/"
]
} |
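A minimal Python sketch of the two orderings described in the answer above - plain lexicographic sorting versus the "length first, then lexicographic" comparator idea for digit strings (the sample values are invented for illustration):

```python
values = ["1", "10", "2", "3", "19999", "200"]

# Lexicographic order: strings are compared character by character.
print(sorted(values))                             # ['1', '10', '19999', '2', '200', '3']

# Numeric order via the length-then-lexicographic key (assumes no leading zeros),
# equivalent here to converting each value to an integer.
print(sorted(values, key=lambda s: (len(s), s)))  # ['1', '2', '3', '10', '200', '19999']
print(sorted(values, key=int))                    # same result
```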
127,658 | Working as a freelancer, I receive many weird, invalid or incomplete requests from actual or potential customers. The most frequent case is this one: Hi, I need a website where people can register and there are also postings and ratings. How much will it cost to me? Thank you. The request sucks, but it doesn't mean that a customer like this is not worth it. This person doesn't know how to make a request correctly, but with a bit of effort and a bit of learning and advice, this person may become a valuable customer who will not waste my time. For a while, I just replied by asking them to provide details. They never do. Recently, I decided to reply in a more detailed way, explaining why it is impossible to give a price (except telling that it would be somewhere between $500 and $50 000). First I just made a simple explanation, telling them that their description of the project is too sparse. Then I added further info, metaphors, etc., or made comparisons with other domains which are better known to people with no technical background. For example: “Imagine you want to build a two-storey house. Do you believe it's possible to determine the cost of building a house just by knowing the number of storeys? You probably need to provide many more details: is it built with rock or wood? Are there solar panels on the roof? Is there a swimming pool in the backyard? A large Victorian-style house using the newest technologies, with two garages, a large terrace, etc. will cost much more than a tiny, modest two-storey house for a family who really don't have too much money to spend.” It's still not working: those potential customers never respond. I also tried the "let me gather the project requirements for you from scratch and do the specification and architecture, but don't forget to pay me for this" technique, but it looks like a scam¹. In all cases, in my country (France), this never works with new customers for several reasons. Some hints show me that some of those people actually succeed in finding a developer and succeed with their project. It means that my approach with those potential customers fails, while there is an approach used by someone else which works well. How do I reply to such price requests, considering that those people don't know me, don't trust me yet, don't want to spend days writing a detailed document describing every functionality of the project, and sometimes don't even know precisely what they want, but are not ready to pay you thousands of dollars just for the requirements, specification and architecture steps? ¹ Most projects are small enough and have tiny funding; most customers don't bother to know that the source code is clean and maintainable, that it was regularly refactored, and that you have unit and integration tests. They want to pay less, now, no matter how expensive it would be later to maintain the codebase. In this context, talking about functional and non-functional requirements, architecture, etc. is perceived as an attempt either to waste half of the customer's money on marketing jibber-jabber instead of writing code, or to scam them by making them pay for something they neither need nor understand and then disappear with their money when it comes to actually writing source code. They don't know that you are a professional, and they don't even care. | There are many ways to answer such queries - Answer 1: It will cost you X Euros per hour to define the system, after which I can give you a fixed price for a set of agreed-upon features.
Answer 2: send them a video clip of sharks in a feeding frenzy, and ask them how long it will take them to count the fish (not just the sharks), given that they can't see the whole scene. Answer 3: Politely and briefly explain that there are many factors involved in creating a good web site, and ask them what their budget is. Answer 1 is logical...to developers. And sadly almost no one else. I recently had a conversation with a construction firm along the lines of "we flat-bid complex construction projects every day, why can't you put a price on a simple application?" To which I politely replied "the blueprints for your application are blank. Do you flat-bid undefined jobs?" Answer 2 is illustrative, and satisfying, but probably won't get you clients. Though it may make a good blog post for me later... Answer 3 is preferred, for two reasons: it implies, but does not state directly, that you don't do analysis and design work for free, and it establishes whether or not the potential client has a budget in mind or in hand yet. The latter is critical - no budget = no project. If you've got time to kill, or some reason to believe that the "potential" client could eventually become an "actual" client with a little coaching, by all means offer to educate them. But don't be surprised if they don't appreciate it. The effort required to create quality custom software is not a widely understood phenomenon. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/127658",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6605/"
]
} |
127,668 | I know the 32 bit registers were named like the 16 bit registers with an 'E' prefix to mean extended. I've always assumed that meant extended from 16 to 32 bits although I've never seen that explicitly stated. I was trying to find out what the 'R' stands for but my google skills have failed me. Anyone know? | It means register , and it isn't all for historical reasons. The historical part is that Intel got itself into the habit of enumerating registers with letters with the 8008 (A through E plus H and L). That scheme was more than adequate at the time because microprocessors had very few registers and weren't likely to get more, and most designs did it. The prevailing sentiment then was that software would be rewritten for new CPUs as they appeared, so changing the register naming scheme between models wouldn't have been a big deal. Nobody foresaw the 8088 evolving into a "family" after being incorporated into the IBM PC, and the yoke of backward compatibility pretty much forced Intel into having to adopt schemes like the "E" on 32-bit registers to maintain it. The non-historical part is all practical. Using letters for general-purpose registers limits you to 26, fewer if you weed out those that might cause confusion with the names of special-purpose registers like the program counter, flags or the stack pointer. I don't have a source to confirm it, but I suspect the choice of R as a prefix and the introduction of R8 through R15 on 64-bit CPUs signals a transition to numbered registers, which have been the norm among 32-bit-and-larger architectures not derived from the 8008 for almost half a century. IBM did it in the 1960s with the 360 and has been followed by the PowerPC, DEC Alpha, MIPS, SPARC, ARM, Intel's i860 and i960 and a bunch of others that are long-forgotten. You'll note that the existing registers would fit nicely into R0 through R7 if they existed, and it wouldn't surprise me a bit if they're treated that way internally. The existing long registers (RAX/EAX/AX/AL, RBX/EBX/BX/BL, etc.) will probably stay around until the sun burns out. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/127668",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5424/"
]
} |
127,669 | What are some characteristics of Python that make it unique as its own language? I'm looking for any sort of characteristics ranging from good to bad, useful to hindrance, syntax to real-world usage, but non-obscure observations would be the most useful for the average developer. I'm a newb here, so intuitive things may need to be explained..... | You'll have a hard time finding features which are absolutely unique. Most language features in existence have been adopted in more than one language since their inception. Some may be rarer, mostly because they're either new and still in obscurity, or died out for good reason. Nevertheless, even then you'd be better off looking at combinations of features. That said, several features of Python should make for a relatively unique combination. At least I'm not aware of any languages remotely as popular (and practical) with a mostly overlapping feature set. As noted in comments, Ruby is pretty close, but there are nevertheless numerous differences. Metaclass-based metaprogramming. Basically, running arbitrary code on class creation. Makes for very nice class customization with very little work on the receiving end - e.g. for an Object-relational Mapping (ORM), client classes can be written as usual with a few extra lines like attr = SomeDataType() and a ton of code is generated automatically. An example of this is Django's "models". You're encouraged to use iterators for everything. This is especially apparent in 3.x, where most list-based alternatives with an iterator-based equivalent have been abolished in favour of the latter. Iterators also serve as a nigh-universal interface for collections (both those you actually have in memory and those you only need once and thus create with the features below). Collection-agnostic, space-efficient (O(1) space for intermediate results often follows naturally; very few tasks actually need all items in memory at once), composable data crunching has never been easier. Generator expressions, related to the above. Many will have heard of list comprehensions (creating a list from another iterable, filtering and mapping in the process, with very convenient syntax). Forget about them, they're syntactic sugar, a special case. Generator expressions are very close syntactically and ultimately result in the very same sequence of items, but they produce results lazily (and thus take O(1) space unless you explicitly keep the results around). yield, which mainly makes writing iterators (called generators here) far nicer. They're the big brother of the above, supporting all kinds of control flow. C# has something similar, with the same keyword. But yield is also overloaded to support a limited kind of coroutine (Lua for instance has more elaborate support), which has nevertheless been put to good use by clever people working on hard problems. Two examples off the top of my head: recursive descent parsing with backtracking and no stack limit, and asynchronous I/O (with convenient syntax). Multi-target assignment and iterable unpacking. Assignment on steroids. Not only can you assign to multiple values at once (even for swapping values and when iterating - for key, value in mapping.items()), you can unpack any iterable of known length (honestly, mostly tuples) into multiple variables. Since 3.x it's even practical for collections of unknown length, as you can specify a few variables taking single items and one taking whatever remains: first, *everything_in_between, last = values.
Descriptors, probably the most powerful among the various ways to customize attribute access. There are properties (as in C#, but without special language support), static methods, class methods, etc., all implemented as descriptors. They're first-class objects as well. Just a week ago, I was faced with repetitive and tricky code in properties - so I wrote a small function generating the repetitive part and wrapping it up in a property object. The offside rule (indentation for delimiting blocks). I put this last intentionally. While it does distinguish Python, it doesn't really stand out in everyday programming once you're used to it (or at least that's my experience). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/127669",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/44052/"
]
} |
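A short, self-contained sketch of a few of the features listed in the answer above (generator expressions, yield, multi-target assignment and star-unpacking); the variable names are made up for illustration:

```python
# Generator expression: same items as a list comprehension, but produced lazily (O(1) space).
even_squares = (n * n for n in range(10 ** 6) if n % 2 == 0)
print(next(even_squares), next(even_squares))    # 0 4  -- nothing else is computed yet

# yield: writing an iterator ("generator") without a helper class.
def countdown(n):
    while n > 0:
        yield n
        n -= 1
print(list(countdown(3)))                        # [3, 2, 1]

# Multi-target assignment, swapping, and iterable unpacking (3.x star syntax).
a, b = 1, 2
a, b = b, a
first, *everything_in_between, last = [10, 20, 30, 40, 50]
print(a, b, first, everything_in_between, last)  # 2 1 10 [20, 30, 40] 50
```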
127,672 | Is Javascript a functional language? I know it has objects & you can do OOP with it also, but is it also a functional language, can it be used in that way? You know how OOP became/seems like the next evolution in programming, does that mean that 'Functional Programming' is the next evolution? (Note: this is NOT a prompt for opinion BUT a prompt for a factual evidence-based answer, & this note is more for the moderators than the contributors ;) ). I learn best through examples, maybe someone could show performing the same task in an OOP way & then in a Functional Programming way for myself to understand & compare what functional programming does/is. I don't really completely understand 'Functional Programming' to be honest :P So comparing Javascript to functional programming may be totally incorrect. To put Functional Programming in layman's terms: is it simply the benefit of abstraction THROUGH using anonymous functions? Or is that way too simple? In a simple way, OOP is the benefit of abstraction through objects, but I believe that's being a little too simplistic to describe OOP. Is this a good example of functional programming?... Javascript OOP Example: // sum some numbers
function Number( v )
{
this.val = v;
}
Number.prototype.add = function( /*Number*/ n2 )
{
this.val += n2.val;
} Functional programming example: function forEach(array, action)
{
for (var i = 0; i < array.length; i++)
action(array[i]);
}
function add(array)
{
var i=0;
forEach(array, function(n)
{
i += n;
});
return i;
}
var res = add([1,9]); | Is Javascript a functional language? I know it has objects & you can do OOP with it also, but is it also a functional language, can it be used in that way? Sometimes, people will say functional programming, when what they mean is imperative programming or procedural programming. Strictly speaking, functional programming is: In computer science, functional programming is a programming paradigm that treats computation as the evaluation of mathematical functions and avoids state and mutable data. It emphasizes the application of functions, in contrast to the imperative programming style, which emphasizes changes in state. Functional programming has its roots in lambda calculus, a formal system developed in the 1930s to investigate function definition, function application, and recursion. Many functional programming languages can be viewed as elaborations on the lambda calculus. Although Javascript is not widely known or used as a functional language, it does have some functional elements: JavaScript has much in common with Scheme. It is a dynamic language. It has a flexible datatype (arrays) that can easily simulate s-expressions. And most importantly, functions are lambdas. Scheme is a dialect of Lisp, and probably one of the languages most programmers think of when they discuss functional programming. When it comes to object orientation, Javascript is an object oriented language. But its object orientation is prototype based: Prototype-based programming is a style of object-oriented programming in which classes are not present, and behavior reuse (known as inheritance in class-based languages) is performed via a process of cloning existing objects that serve as prototypes. This model can also be known as classless, prototype-oriented or instance-based programming. Delegation is the language feature that supports prototype-based programming. So although Javascript is object oriented, it doesn't follow the more common class based model, as do languages such as C++, C#, Java and PHP (and quite a few others). And of course it's also an imperative language, which leads to the confusion with functional programming I described above. You know how OOP became/seems like the next evolution in programming, does that mean that 'Functional Programming' is the next evolution Object orientation and functional programming are just two of the many different programming paradigms, they are different styles of programming with different concepts and abstractions. The key word is "different". There isn't a single paradigm that's better than others or more evolved than others, each and every one fits some scenarios better than the others. Some may be quite older in origin than others, but in evolutionary terms that makes them better, as they have survived longer. But that's not a very smart way of looking at it. Javascript, as I described above and as quite a few other languages, is multi-paradigm. It allows you to write code in imperative, prototype based object oriented and functional style. It's up to you to choose which one best fits whatever you are building. There are also several single paradigm languages, the canonical example being Java, which only allows for class based object oriented programming 1 . You should really resist any urge to treat languages & paradigms as fashion statements. There's an abundance of crap out there, mostly written by fanboys / fangirls or marketing people, with little (if any) knowledge and understanding of programming.
Terms like "better", "more evolved" etc, simply don't apply. I learn best through examples, maybe someone could show performing the same task in a OOP way & then in a Functional Programming way for myself to understand & compare what functional programming does/is. That would be a terrible way to learn. Functional and object orientation are quite different styles, and any example other than terribly simple ones would not fit one or the other style. 1 But lately tries to expand its scope to generic programming, let's see how that goes. In conclusion: Concentrate on learning Javascript, it's a beautiful and extremly useful language. Learn the language, not the hype. Quite a few different paradigms, all equally useful. Up to you to choose which one you prefer and which one fits best whatever you're building. If you want to learn functional programming, choose a more suited language, like Scheme or Clojure . But you'll first need to understand the mathematical concepts involved. Do some research before you ask. Most of your questions are answered by the relevant Wikipedia articles. Knowing how to research and how to ask is an extremely important skill for any programmer. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/127672",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/44053/"
]
} |
127,706 | This is something that bothered me a lot at school. Five years ago, when I learned SQL, I always wondered why we first specify the fields we want and then where we want them from. According to my idea, we should write: From Employee e
Select e.Name So why does the norm say the following? Select e.Name -- Eeeeek, what does e mean?
From Employee e -- Ok, now I know what e is It took me weeks to understand SQL, and I know that a lot of that time was consumed by the wrong order of elements. It is like writing in C#: string name = employee.Name;
var employee = this.GetEmployee(); So, I assume that it has a historical reason. Why? | Originally the SQL language was called SEQUEL, standing for Structured English Query Language, with the emphasis on English, assuming it to be close in spelling to natural language. Now, spell these two statements as you'd spell English sentences: "From Employee table e Select column e.Name" "Select column e.Name From Employee table e" The second sounds closer to natural English, which is why it was set as the norm. BTW, the same reasoning goes for Where etc. - SQL statements were intentionally designed to sound close to natural language. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/127706",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/10050/"
]
} |
127,735 | I am a C, C++ developer. I am interested in mobile development. I want to know how I can develop Android apps using C and C++; I have read that they provide a kit for C and C++ developers, but it does not have all the functions of the Java kit. Should I go for the C/C++ development kit, or is it better to learn Java, as they may not provide all the functionality in the future? | Short version: working with C++ on Android is possible and easier with each Android SDK/NDK version, but it's harder than working with Java. Long version: With each version, Google adds more functionality to the Android Native Development Kit and makes it more and more independent of the Java code. Read http://developer.android.com/sdk/ndk/overview.html for more details: Write a native activity, which allows you to implement the lifecycle
callbacks in native code. The Android SDK provides the NativeActivity
class, which is a convenience class that notifies your native code of
any activity lifecycle callbacks (onCreate(), onPause(), onResume(),
etc). You can implement the callbacks in your native code to handle
these events when they occur. Applications that use native activities
must be run on Android 2.3 (API Level 9) or later. You cannot access
features such as Services and Content Providers natively, so if you
want to use them or any other framework API, you can still write JNI
code to do so. The problem is just that if you use the most recent NDK, you won't be able to deploy on a lot of older Android versions. Anyway, even with previous NDK versions, you can have minimal Java code (for interacting with the OS) and the full application code in C++ or anything native. There are also efforts to help native developers work fully in C or C++ via IDE plugins, like Vs-Android, a plugin for Visual Studio 201x that hides all the compilation and generation process from you: http://code.google.com/p/vs-android/ Also, if you plan to port your application to other OSes, going with C++ for the core of your application (maybe with a scripting language on top) is a good idea. It's just more expensive in development time than other alternatives - for reasons specific to C++ and its available dev tool implementations; for example, long compilation times can kill your effective productivity. That being said, it is not the easiest way to work on mobile apps. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/127735",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
127,763 | I started my programming career with BASIC, during 9th grade. I learned a bit of BASIC by writing simple programs to add, subtract and to print. Then I went to the university and took Computer Information and Systems Engineering. In the first year I was taught C, and I have good command over it. Next I learned C++ in the second year. It just taught me some knowledge of OOP. Now I am doing PHP (along with HTML). I have not mastered C++, BASIC or PHP. I am now planning to move to mobile development. But I feel that I have not covered everything in the languages I learned. Does it really matter? | We're all just learning bits of programming languages. I would only consider the language implementers to be those who are a 10 out of 10 in the knowledge of a language. Learning multiple languages, and paradigms, is the only way to develop a "taste" for what you like and don't like. If you only learned one language, you wouldn't even be able to really decide whether you even like it or not. You're actually doing it the correct way. You will be able to reuse the most important fundamentals you learn in each while getting exposure to different syntax, libraries, and frameworks. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/127763",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
127,876 | Serious help needed here. I love programming. I've been reading a bunch of books (such as K&R) and articles/forums online for the C language lately. I even tried looking into Linux code (although I was lost about where to start, but peeking into small libraries helped). I started as a Java programmer, and in Java it's pretty cut and dried; if a program gets too big, slice it into classes and then further into functions. Guidelines like: keep the code readable and add comments, use information hiding and OOP techniques. Some of which still apply to C. I've been coding in C now and so far I get programs to work one way or the other. A lot of people talk about performance/efficiency, algorithm/design, optimization, and maintainability. Some people stress one more than the others, but for non-professional software engineers you often hear something like e.g.: Linux kernel devs won't just take any code. My question is this: I plan on writing code for an 8-bit microcontroller without wasting any resources. Know that I'm coming from a Java background, so things are not the same anymore... resources/books/links/tips will be much appreciated. Performance and size now matter. Resources/tricks on efficient (within best practices) C code for 8-bit microcontrollers? Also, inline assembly plays a vital role, as well as sticking close to the microcontroller's standard. But are there general rules of thumb for efficiency that apply to all? For example: register unsigned int variable_name; is preferred over char anytime. Or using uint8_t if you don't need big numbers. EDIT: Thank you so much for all the answers and suggestions. I appreciate everyone's effort in sharing knowledge. | I have 20+ years of embedded systems experience, mostly 8- and 16-bit micros. The short answer to your question is the same as for any other software development - don't optimise till you know you need to, and then don't optimise till you know what you need to optimise. Write your code so it's reliable, readable and maintainable first. Premature optimisation is as much of a problem, if not more of one, in embedded systems. When you program "without wasting any resources", do you consider your time a resource? If not, who is paying you for your time, and if no one, do you have anything better to do with it? One choice any embedded system designer has to make is the cost of hardware vs the cost of engineering time. If you will be shipping 100 units, use a bigger micro; at 100,000 units, a $1.00 saving per unit is the same as 1 man-year of software development (ignoring time to market, opportunity cost, etc.); at 1 million units, you start getting ROI for being obsessive about resource usage, but be careful, because many an embedded project never made the 1 million mark because it was designed to sell 1 million (high initial investment with low production cost) and went bust before it got there. That said, here are things you need to consider and be aware of with (small) embedded systems, because these will stop it working, in unexpected ways, not just make it go slow. a) Stack - you usually have only a small stack size and often limited stack frame sizes. You must be aware of what your stack utilisation is at all times. Be warned, stack problems cause some of the most insidious defects. b) Heap - again, small heap sizes, so be careful about unwarranted memory allocation. Fragmentation becomes an issue. With these two, you need to know what you do when you run out - it does not happen on a large system, due to OS-provided paging. i.e. When malloc returns NULL, do you check for it and what do you do?
Every malloc needs a check and a handler - code bloat? As a guide - don't use it if there's an alternative. Most small systems do not use dynamic memory for these reasons. c) Hardware interrupts - you need to know how to handle these in a safe and timely manner. You also need to know how to write safe re-entrant code. For instance, the C standard libs are generally not re-entrant, so they should not be used inside interrupt handlers. d) Assembly - almost always premature optimisation. At most a small amount (inlined) is needed to achieve something that C just cannot do. As an exercise, write a small method in hand-crafted assembly (from scratch). Do the same in C. Measure the performance. I bet the C will be faster, and I know it will be more readable, maintainable and extendable. Now for part 2 of the exercise - write a useful program in assembly and in C. As another exercise, have a look at how much of the Linux kernel is assembler, then read the paragraph below about the Linux kernel. It is worth knowing how to do it; it might even be worth being proficient in the assembly languages for one or two common micros. e) "register unsigned int variable_name": "register" is, and always has been, a hint to the compiler, not an instruction. Back in the early 70's (40 years ago), it made sense. In 2012, it's a waste of keystrokes, as the compilers are so smart and micro instruction sets so complex. Back to your Linux comment - the problem you have here is that we are not talking a mere 1 million units, we are talking hundreds of millions, with a lifetime of forever. The engineering time and cost to get it as optimal as humanly possible is worthwhile. Although it is a good example of the very best engineering practise, it would be commercial suicide for most embedded systems developers to be as pedantic as the Linux kernel requires. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/127876",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/43388/"
]
} |
128,082 | Are there any specific reasons for a developer that deals with web applications (let's say writing html and js) to download a browser's source code (like Chromium) and learn how the engine works (renderer, javascript vm, network processing, etc.)? | It is more important to understand HTTP, client server, web standards and specifications (HTML 4, XHTML, HTML 5, CSS 2.0, CSS 3.0, Javascript) and the differences between the different browsers and browser versions. Understanding the inner workings of a single browser engine can be useful in the same way that understanding how an engine works will help a driver get the most out of his car, but some of the knowledge will not transferable to other browsers. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/128082",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4701/"
]
} |
128,176 | I'm aware of some general best practices when designing a database for an application, but what about redesigning? I'm on a team tasked with re-designing an internal business application, though despite me saying "internal," I'm unfortunately many, many layers of people away from contact with the actual users of the system. The current program is in Oracle Forms, scattered across a bunch of non-normalized tables, sometimes with multiple near-duplicate tables holding slight variants on each others' data. The constraints are often in the form of poorly-enforced stored procedures. Even the types don't seem to be stored right. I've encountered all sorts of bad data that Oracle seems to ignore but gave fits (and rightly so) to SQL Server's Import/Export Wizard. (For example, two digit integers do not constitute a complete datetime!) The original program probably goes back twenty years, and all of the original developers have retired so long ago that even the older people here have no idea who they were. As a result, there also aren't really any clean requirements to go off of--we're just supposed to duplicate the existing application's functionality and keep its existing data. The end result of the rewrite is going to be a web-based version running on ASP.NET with MS SQL Server for the back end. My other two developer teammates are much, much older than me, both with business/MIS backgrounds whereas mine is CS. The senior member's experience has been almost exclusively Oracle forms and the other member has mostly done business applications work in Visual Basic. Although my database background has been limited to designing new databases for projects in MySQL or SQLite, mostly for my undergrad classes, I seem to be the only one with experience actually designing databases at all. I've already written a little program in C# that reads in all the existing data to a neutral format, ready to be re-cast and placed into a new database. I plan to write the load-in code after the destination database is designed, so that data can be properly split across the new normalized tables, added in the correct order to follow new constraints, etc. The same program could then be run again later to copy the production data to the real newly deployed finished redesign. This leaves the actual redesign of the database as the main thing to figure out. So the heart of my question: what are some best practices for doing a redesign from the database level up of an existing application? | I think you already know how to normalize a database. What you need are strategies for minimizing the risk when moving all of the software to the new database. What I'm suggesting is more work as a trade-off for less risk. Normalize the database, and create a process to populate the normalized database with data from the original database. The original database will be the database for inserts, updates, and deletes. The normalized database will be the query database only during the conversion. Your populate process will have to run as often as the need for query data. If day old data is acceptable, you can run a nightly populate process. If you need more current data, you have to run a continuous populate process. Build the query portion of your new ASP.NET system, pointing to the new normalized database. The query results from your new system should compare with the query results from the original system. You could stop at this point. That's a business decision, not a technical decision. 
At your leisure, you create new insert / update / delete functionality in your new ASP.NET system. As you create the new functionality, you turn off the parts of the original system that correspond. At some point, nothing of the original system remains. The advantages of converting in this manner are reducing risk by building the query portion first. Generally the query functions are simple compared to the business logic embedded in insert / update / delete functionality. You convert the insert / update / delete functionality one process at a time. If there's a problem with misunderstanding the business logic, it can be fixed while your users are using the original system. It should go without saying that your populate process better be absolutely consistent. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/128176",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/30150/"
]
} |
128,263 | I'm currently in my placement year and working for a great software development company. It was always my intention of getting to this stage through university, getting enough academic experience as well as the year’s placement and then try to get a full time programming job without the need to finish my degree. I decided this from an early stage as I have never really liked the whole university environment. I was so unhappy at university and I’m so happy now I’m on my placement year, I really don’t know if I can go back. My question is, do you think companies will take me on if I apply for other jobs after my placement year and not penalize me for not finishing my degree? I guess at the end of the day I don't want to look back on my life and think "god, why didn't I just spend one more year being unhappy to have a job I love" but I know that even if I get a degree I could still end up without a programming job and this worries me more than anything. | Every company and every hiring manager is different. Some will value hands-on experience more than a degree, but many will not look past the lack of a degree, especially in large companies where hiring is done by a HR department. Basically lack of a degree: will be seen neutral to slightly positive in most small startups will matter little when you get a job via personal recommendation will get your resume thrown out at a very early stage when applying through the regular channels at most large companies (and many smaller ones) Overall, I'd say it's a considerable (but not unsurmountable) obstacle to getting a job. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/128263",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/44351/"
]
} |
128,389 | Is it considered bad practice to throw NotImplementedException for code you haven't written yet? Possibly TODO comments would be considered safer? | I believe NotImplementedException is actually a good practice. Indeed, if you forget to implement a method, and you use it later on in your project (and believe me, it happens), you might spend a long time debugging looking for what went wrong step by step. If you have the exception, the program will stop directly, prompting the exception (if you catch the exception, you'll find it quickly by looking what exception you caught). I would recommend using the NotImplementedException combined with TODO comments, that way you combine GUI help (with the tasks in VS) and program safety. For the release version, it's even more important in my opinion, as in most cases you would prefer your program to crash rather than having a program apparently working properly but producing erroneous results. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/128389",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/29270/"
]
} |
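The question above is about .NET, but the idea carries over directly to Python, whose built-in analogue is NotImplementedError. A hypothetical stub combining the exception with a TODO comment, as the answer recommends:

```python
class ReportExporter:
    def export_pdf(self, report):
        # TODO: implement PDF export (tracked in the task list).
        # Failing loudly here is deliberate: a silent stub could let the program
        # appear to work while quietly producing no output at all.
        raise NotImplementedError("ReportExporter.export_pdf is not implemented yet")
```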
128,492 | I'm a growing programmer who's finally putting unit testing into practice for a library that I'm storing on GitHub. It occurred to me that I might include the test suites in the repo, but as I look around at other projects, the inclusion of tests seems hit-or-miss. Is this considered bad form? Is the idea that users are only interested in the working code and that they'll test in their own framework anyway? | You definitely should put your tests into the repository. Tests are in my opinion part of the code and can help others immensely to understand it (if well written). Besides, they can help others when changing or contributing to your codebase. Good tests can give you the confidence that your changes do not inadvertently break anything. The test code should be well separated from production code, though. Maven for example achieves this by putting production and test code into different folders. The question "is this file part of production or of the test code" should never arise. I personally do not write unit tests for used libraries in my own code. I expect them to be working (at least when I use a release version, though bugs obviously can appear). It gets some test coverage in integration tests, but that's not enough. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/128492",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12206/"
]
} |
128,512 | I'm putting together a spec for a REST service, part of which will incorporate the ability to throttle users service-wide and on groups of, or on individual, resources. Equally, time-outs for these would be configurable per resource/group/service. I'm just looking through the HTTP 1.1 spec and trying to decide how I will communicate to a client that a request will not be fulfilled because they've reached their limit. Initially I figured that client code 403 - Forbidden was the one, but this, from the spec: Authorization will not help and the request SHOULD NOT be repeated bothered me. It actually appears that 503 - Service Unavailable is a better one to use - since it allows for the communication of a retry time through the use of the Retry-After header. It's possible that in the future I might look to support 'purchasing' more requests via eCommerce (in which case it would be nice if client code 402 - Payment Required had been finalized!) - but I figure that this could equally be squeezed into a 503 response too. Which do you think I should use? Or is there another I've not considered? | 429 Too Many Requests The user has sent too many requests in a given amount of time. Intended for use with rate limiting schemes. This code has been accepted in RFC 6585 Additional HTTP Status Codes . The 429 status code indicates that the user has sent too many
requests in a given amount of time ("rate limiting"). The response representations SHOULD include details explaining the
condition, and MAY include a Retry-After header indicating how long
to wait before making a new request... Note that this specification does not define how the origin server
identifies the user, nor how it counts requests. For example, an
origin server that is limiting request rates can do so based upon
counts of requests on a per-resource basis, across the entire server,
or even among a set of servers. Likewise, it might identify the user
by its authentication credentials, or a stateful cookie. Responses with the 429 status code MUST NOT be stored by a cache... | {
"source": [
"https://softwareengineering.stackexchange.com/questions/128512",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/33721/"
]
} |
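A rough sketch of a 429 response with a Retry-After header, using only Python's standard library; the limit, window and handler name are invented for illustration and are not part of RFC 6585:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import time

WINDOW_SECONDS = 60            # example values only
MAX_REQUESTS_PER_WINDOW = 5
hits = {}                      # client address -> recent request timestamps

class ThrottledHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        now = time.time()
        client = self.client_address[0]
        recent = [t for t in hits.get(client, []) if now - t < WINDOW_SECONDS]
        if len(recent) >= MAX_REQUESTS_PER_WINDOW:
            self.send_response(429)                       # Too Many Requests
            self.send_header("Retry-After", str(WINDOW_SECONDS))
            self.end_headers()
            self.wfile.write(b"Rate limit exceeded, retry later\n")
            return
        hits[client] = recent + [now]
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"OK\n")

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), ThrottledHandler).serve_forever()
```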
128,520 | A linked list can be used when you want cheap insertion and deletion of elements and when it doesn't matter that the elements aren't next to each other in memory. This is very abstract and I would like a concrete explanation of why a linked list should be used rather than an array. I'm not very experienced with programming, so I haven't got much (if any) real-world experience. | Here is something part way between an example and an analogy. You have some errands to do, so you grab a piece of paper and write: bank groceries drop off drycleaning Then you remember that you also need to buy stamps. Because of the geography of your town, you need to do that after the bank. You could copy your whole list onto a new piece of paper: bank stamps groceries drop off drycleaning or you could scribble on the one you had: bank ....... STAMPS groceries drop off drycleaning As you thought of other errands, you might write them at the bottom of the list, but with arrows reminding yourself what order to do them in. This is a linked list. It's quicker and easier than copying the whole list around every time you add something. Then your cell phone rings while you're at the bank "hey, I got the stamps, don't pick up any more". You just cross STAMPS off the list, you don't rewrite a whole new one without STAMPS in it. Now you could actually implement an errands list in code (maybe an app that puts your errands in order based on your geography) and there's a reasonable chance you would actually use a linked list for that in code. You want to add and remove lots of items, order matters, but you don't want to recopy the whole list after each insertion or deletion. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/128520",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
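A tiny Python sketch of the errand list as a singly linked list, to make the "insert without rewriting everything" point from the answer above concrete (a toy illustration, not production code):

```python
class Node:
    def __init__(self, errand, next=None):
        self.errand, self.next = errand, next

# bank -> groceries -> drop off drycleaning
head = Node("bank", Node("groceries", Node("drop off drycleaning")))

# Remembering stamps after the bank: one new node and one pointer update,
# no copying of the rest of the list (unlike inserting into an array).
head.next = Node("stamps", head.next)

# "Already got the stamps": unlink the node instead of rewriting the list.
head.next = head.next.next

node = head
while node:                    # prints: bank, groceries, drop off drycleaning
    print(node.errand)
    node = node.next
```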
128,712 | It's unclear to me what the format is, whether there are systematic/command-line requirements for creating it, etc. Basically, I just need to know the specs, and whether there are technical steps for generating the README file. | Markdown is a simple syntax for providing semantic info and representing common formatting in plain text. Daring Fireball has an awesome syntax guide for standard markdown. GitHub then uses a variant of this that they call GitHub Flavored Markdown. To set up your readme, just create a plain text file, name it README (or README.md / README.markdown) and commit it to the root of your repo. GitHub will then pick this up as the project readme. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/128712",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4942/"
]
} |
128,749 | I'm currently reviewing a system built by some developers who previously worked at my job. The system works pretty well from a user's point of view, but when delving into code review it's an utter mess. I'm more than convinced that the way the application is built won't hold up for future updates, let alone high increases in usage. The problem is that I know how bad it is, but my superiors don't. How can I prove it to my manager so he actually sees the problem and can be convinced to do minimal triage on the current codebase, and in the near future start a new line of development for the next version of the application? | 'But it works now' is the standard management response to the legitimate frustrations of software engineers. The first thing I would do would be to compile the documentation (if any) and use that to demonstrate contradictions between the code and the documentation. If you can, put together a comprehensive suite of unit tests. Run these with every change so you can document regressions which can be blamed on the existing codebase. Lastly, if you can pull in a developer from another department whose work you trust, get a second pair of eyes on the code. One developer saying 'this is crap' is easier to dismiss than when a senior developer who has been around a while vouches for him and says, 'No, Jim, he's right. This is crap on a crap cracker.' Of course, it all depends on your environment, company size, etc. I always recommend taking a look at The Pragmatic Programmer if you haven't read it. Not only should it be required reading for every software professional, but it has some good suggestions for dealing with management, co-workers, users, etc. who don't view software engineering as a craft. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/128749",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2478/"
]
} |
128,860 | How does one prevent users from creating erroneous input sets, when there is no practical way to vet the input? The scene I modify a small ERP package written in Visual FoxPro. One part of the package concerns itself with printing truck manifests and invoices to be sent with the drivers on their delivery routes. The print routine, when fed nothing as an input, will attempt to print everything, resulting in reams and reams of printer paper being wasted on a high-speed printer. I am not in a position to re-write any of the GUI interface elements, nor can I adapt any frameworks, tool kits, or other outside code to be used in this situation. The reasons are related to office politics, please do not suggest that I can override the existing ERP framework, as it is not an option for me. The issue The users are in a high-pressure, time-critical environment. Each process is measured in minutes or even seconds, which means that I have to minimize processing time as much as possible. Because of this environment, and possible distractions, users frequently ignore the dialogs, pressing the [Enter] key which causes the focus to rapidly move through the form and eventually landing on the action button for the input dialog, resulting in them triggering an automatic printout. The input consists of a date range, route range, and sales order range. The input for date range cannot be auto-set to "today's date", as frequent back-printing is required. Also, the end-users work during midnight, i.e. date rollover makes this impractical without rigging a routine that auto-detects the change, etc. The input for routes cannot be hard-coded, nor can it be deduced from routes that have already been shipped, because re-prints are required (see above). The input for sales orders only has meaning when printing single orders or specific ranges. So, frankly, there is no practical way to validate input . The action button that triggers printing cannot be blocked. Any suggestions that a blocking dialog be placed in front of the user will be ignored. I am not at liberty to discuss why this is not an option, other than the concept has been already discussed elsewhere on the site (from a different vantage point) and was rejected. Blocking printouts when all inputs are empty was rejected as a design decision as the software must accommodate this as a feature. The users The users have been repeatedly asked not to do this. They frequently ignore this advice. Triggering this unfortunate event is not something that their foremen/managers will address, so there is no pressure to end the behavior. The organization I do not have a say in the workflow involved, only the modification of existing software components affected by that workflow. The vendor The vendor second-sources the package as a custom installation from the original software vendor. The vendor requires that all code changes be sent back to them for integration into their codebase. Significant changes to architecture will result in increased future costs during version migrations due to the extensive customization involved; in some cases, the programmers have even told me that they will completely ignore such large changes and will do as they please. The software I have no say in the selection or installation of the software, so changing the platform is out of the question. Regarding the environment of the software, each invoice printed is a single call. 
There isn't a batch printing facility, and because of how the print facility is integrated into the system (and some language quirks as well) it isn't feasible to make a batch wrapper around that API. Topping this off, this part of the program calls another program that does the invoice print, which in turn calls the print-a-report API, which prints a single invoice. Horrid design, I know. Input forms are a weird combination of a form header that is devoid of input boxes, but can contain other GUI elements. Input boxes are defined at runtime. The objective The software will prevent the users from erroneously printing all paperwork. How would you solve this issue? | This is easy: if they leave everything blank, you prompt that this will print everything; however, the DEFAULT selection in that prompt MUST be Cancel. If they enter values, print whatever they asked for. This way, they won't accidentally blaze through the form and print everything. They'll blaze through and print nothing. They would need to pause and change the selection at the prompt from Cancel to OK in order to print everything. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/128860",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8784/"
]
} |
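A rough, GUI-agnostic Python sketch of the rule in the answer above - an empty filter set triggers a confirmation whose default is Cancel (the function names and message text are invented):

```python
def should_print(date_range, routes, sales_orders, confirm):
    """confirm(message) -> bool, and must return False (Cancel) on a bare Enter."""
    if date_range or routes or sales_orders:
        return True   # explicit criteria: print exactly what was asked for
    # Everything blank: this would print all paperwork, so make the user opt in.
    return confirm("No criteria entered - this will print ALL manifests and invoices. "
                   "Type YES to continue [default: Cancel]: ")

# Example confirm() for a console: only an explicit 'yes' proceeds.
console_confirm = lambda msg: input(msg).strip().lower() == "yes"
```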
129,123 | I am reading the book The Elements of Computing Systems: Building a Modern Computer from First Principles , which contains projects encompassing the build of a computer from boolean gates all the way to high level applications (in that order). The current project I'm working on is writing an assembler using a high level language of my choice, to translate from Hack assembly code to Hack machine code (Hack is the name of the hardware platform built in the previous chapters). Although the hardware has all been built in a simulator, I have tried to pretend that I am really constructing each level using only the tools available to me at that point in the real process. That said, it got me thinking. Using a high level language to write my assembler is certainly convenient, but for the very first assembler ever written (i.e. in history), wouldn't it need to be written in machine code, since that's all that existed at the time? And a correlated question... how about today? If a brand new CPU architecture comes out, with a brand new instruction set, and a brand new assembly syntax, how would the assembler be constructed? I'm assuming you could still use an existing high level language to generate binaries for the assembler program, since if you know the syntax of both the assembly and machine languages for your new platform, then the task of writing the assembler is really just a text analysis task and is not inherently related to that platform (i.e. needing to be written in that platform's machine language)... which is the very reason I am able to "cheat" while writing my Hack assembler in 2012, and use some preexisting high level language to help me out. | for the very first assembler ever written (i.e. in history), wouldn't it need to be written in machine code Not necessarily. Of course the very first version v0.00 of the assembler must have been written in machine code, but it would not be sufficiently powerful to be called an assembler. It would not support even half the features of a "real" assembler, but it would be sufficient to write the next version of itself. Then you could re-write v0.00 in the subset of the assembly language, call it v0.01, use it to build the next feature set of your assembler v0.02, then use v0.02 to build v0.03, and so on, until you get to v1.00. As the result, only the first version will be in machine code; the first released version will be in the assembly language. I have bootstrapped development of a template language compiler using this trick. My initial version was using printf statements, but the first version that I put to use in my company was using the very template processor that it was processing. The bootstrapping phase lasted less than four hours: as soon as my processor could produce barely useful output, I re-wrote it in its own language, compiled, and threw away the non-templated version. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/129123",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/43808/"
]
} |
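To make the "an assembler is really just text translation" observation from the question above concrete, here is a toy two-instruction assembler in Python; the mnemonics and opcodes are invented and have nothing to do with the Hack platform:

```python
OPCODES = {"LOAD": 0x1, "ADD": 0x2}   # mnemonic -> 4-bit opcode (toy ISA)

def assemble(source):
    words = []
    for line in source.strip().splitlines():
        line = line.split(";")[0].strip()        # drop comments and blank lines
        if not line:
            continue
        mnemonic, operand = line.split()
        words.append((OPCODES[mnemonic] << 4) | (int(operand) & 0x0F))
    return words

program = """
    LOAD 3   ; load the constant 3
    ADD  4   ; add 4 to it
"""
print([f"{w:08b}" for w in assemble(program)])   # ['00010011', '00100100']
```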
129,296 | What are the top reasons to write obfuscated code, in terms of a real benefit to the people developing the code, and the business that runs that code (if the code in question is in fact commericial code)? Are there documented cases (available online in some location) which describe when obfuscation did more good than bad? Are there well-known examples where, for example, obfuscation was proven to meaningfully delay a malicious 3rd party from getting at the code? It seems that, just like rolling up your car windows won't stop people from breaking them and stealing your stereo, obfuscating your code just keeps honest people honest. ========= Background: This is an attempt to purposely challenge my assumptions on this topic. I'm big-time against using code obfuscation in general, but I'm curious if I'm missing something. I get why, in cases like JavaScript, minification helps things load faster and all (there's a real, functional benefit there), but I can't seem to come up with a single reason why code obfuscation, for the purpose of being an obstacle to discovering what an section of code/algorithm does , is actually effective for any purpose whatsoever. With open source being crazy popular, the question seems to be "share the code, or keep it proprietary?" When it comes to commercial code, I can understand why you can't share everything, and you've got the law in your side to fight theft. BTW, if the reason someone is writing obfuscated code is "job security" then I would fire any programmer found to be consistently, and purposely using obfuscation with the sole purpose of helping to keep their jobs, unless they could reasonably show that it had some business benefit. It's so completely anti-team that it's ridiculous, and points to someone that's more concerned with keeping their job through misguided practices, then keeping it because they write awesome software. I only mention this specific case because, while I realize people are usually joking, I'd like to deter any answers whose basic thrust is that obfuscation for job security alone is a good idea. | One very interesting use case for obfuscation is tracing the origin of illicit copies. Assuming that obfuscation is a relatively cheap operation the original author can supply each client with differently obfuscated versions of the application, if an illicit copy is found the author can compare with supplied versions and trace back the source of the piracy. That's a form of steganography , inspired and in variation of the "traitor tracing" cryptographic schemes . I have no idea if it's common 1 , or even if it's a good idea, but I've seen it applied in practice under the following parameters: Highly competitive nationwide market with just two vendors, About 50 deployments covered the market, Average development time for both applications was a couple of years (more or less), Average obfuscation time for our application was a couple of hours, Lifespan for both applications was expected to be about ten years. The rationale was of course security through obscurity initially, and it evolved at the aforementioned scheme at some point 2 . Both vendors had access to each other's binary code, legally, and I think it's obvious that decompilation attempts from both were expected. Obfuscation did nothing in terms of security, in the long run. 
Both vendors had highly motivated and talented teams, working in an extremely profitable and niche market, in the end our products were more similar than not, and any competitive advantage was gained through other, less obscure means. I can't really expand, because (a) it was very early in my career and I didn't get a clear overview of the design decisions or the results of the tracing scheme (if any) and (b) some of my involvement with the project was under a NDA. Another valid use case for obfuscation could be when you are somehow legally obliged to submit your code to a third party : If your firm does IP work for technology companies, or is involved in cases involving software source code, you may be obliged to submit your client’s source code to the USPTO, a court or third party. Since source code is considered a trade secret, most regulatory agencies use a "50%" rule. Source code submitted is obscured so that it cannot be used as-is. IANAL, and the link is more relevant to hard copies of code rather than actual working code, so this might be completely irrelevant. Now, as Javascript is the canonical example for obfuscation, there's one side-effect that's not commonly considered, and that's hiding malicious code in obfuscated Javascript. Although there are definite advantages in minifying 3 Javascript, I don't see any point in actual obfuscation and I'm happy Douglas Crockford agrees with me : Then finally, there is that question of code privacy. This is a lost cause. There is no transformation that will keep a determined hacker from understanding your program. This turns out to be true for all programs in all languages, it is just more obviously true with JavaScript because it is delivered in source form. The privacy benefit provided by obfuscation is an illusion. If you don’t want people to see your programs, unplug your server. As for obfuscation for "job security", that's a behaviour that should never pass code review, and if identified it shouldn't be tolerated. I wouldn't go as far as firing the culprit at first, but repeat offenders definitely deserve a good spanking, at least. In conclusion, obfuscation is a typical example of security through obscurity, it's only obvious merit is as a deterrent and nothing more. There might be creative use cases 4 I don't know of, but in general the benefits are minimal, at best. 1 After writing this I found out this answer which basically describes the same scheme, so it might be more common that I thought. 2 Although steganography is still security through obscurity. 3 Minification ~ removing whitespace and shortening tokens, not intentionally obscuring. 4 Does the International Obfuscated C Code Contest count? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/129296",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/26506/"
]
} |
129,305 | Recently I was faced with a question about whether a simple calculation should be put in the Entity layer, or whether the Entity should stay pure, just storing the raw data, leaving the calculation logic in the business layer. So my question is whether it is sensible to encapsulate simple calculations in the properties of an entity class? | It depends on the type of architecture you want. In Domain Driven Design, you would create a Domain Model that would have both data and functionality. This would mean that an Order has a property (or method) that would return the total price of the order based on the OrderLines . The Order would also have a method AddOrderItem(Product product, int amount) and the Order would check if there is already an OrderLine for that specific product. In such a model you would also have objects that are not real entities, like a Repository for accessing data or a Factory for creating entities. These are called Domain Services. An Application Layer is responsible for calling the Domain Services (for example to retrieve an entity from the database) and then it will execute functionality on the entity. The Application Layer should be as thin as possible. This is a nice article about DDD which explains these concepts in more detail. You can also use an Anemic Domain Model . That means that your entities consist of get/set properties and contain no behavior. In such a design, your Business Layer will contain the behavior, such as calculating the Order price and checking for duplicate OrderLines . There are different opinions on whether an Anemic Domain Model is a bad thing. Personally I prefer a real Domain Model. This article describes the differences between an Anemic and non-Anemic Domain Model. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/129305",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/44887/"
]
} |
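A minimal sketch of the rich (non-anemic) Order described in the answer above, in Python for brevity. Order, OrderLine and add_order_item mirror the names used in the answer; the field names, prices and the details of the merge rule are assumptions made only for illustration.

from dataclasses import dataclass, field

@dataclass
class OrderLine:
    product_id: int
    unit_price: float
    amount: int

    @property
    def line_total(self) -> float:
        return self.unit_price * self.amount

@dataclass
class Order:
    """Rich domain entity: data plus the small behaviour that belongs to it."""
    lines: list = field(default_factory=list)

    def add_order_item(self, product_id: int, unit_price: float, amount: int) -> None:
        # Mirror the answer's rule: merge into an existing line for the same product.
        for line in self.lines:
            if line.product_id == product_id:
                line.amount += amount
                return
        self.lines.append(OrderLine(product_id, unit_price, amount))

    @property
    def total_price(self) -> float:
        return sum(line.line_total for line in self.lines)

order = Order()
order.add_order_item(product_id=1, unit_price=9.5, amount=2)
order.add_order_item(product_id=1, unit_price=9.5, amount=1)   # merged, not duplicated
print(len(order.lines), order.total_price)                      # 1 28.5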
129,327 | I have been working as a software developer for many years now. It has been my experience that projects get more complex and unmaintainable as more developers get involved in the development of the product. It seems that software at a certain stage of development has the tendency to get "hackier" and "hackier" especially when none of the team members that defined the architecture work at the company any more. I find it frustrating that a developer who has to change something has a hard time getting the big picture of the architecture. Therefore, there is a tendency to fix problems or make changes in a way that works against the original architecture. The result is code that gets more and more complex and even harder to understand. Is there any helpful advice on how to keep source code really maintainable over the years? | The only real solution to avoid code rot is to code well! How to code well is another question. It's hard enough even if you're an excellent programmer working alone. In a heterogeneous team, it becomes much harder still. In outsourced (sub)projects... just pray. The usual good practices may help: Keep it simple. Keep it simple. This applies especially to the architecture, the "big picture". If developers are having hard time to get the big picture, they are going to code against it. So make the architecture simple so that all the developers get it. If the architecture has to be less than simple, then the developers must be trained to understand that architecture. If they don't internalize it, then they shouldn't code in it. Aim for low coupling and high cohesion . Make sure everyone in the team understands this idea. In a project consisting of loosely coupled, cohesive parts, if some of the parts becomes unmaintainable mess, you can simply unplug and rewrite that part. It's harder or near impossible if the coupling is tight. Be consistent. Which standards to follow matters little, but please do follow some standards. In a team, everyone should follow the same standards of course. On the other hand, it's easy to become too attached with standards and forget the rest: please do understand that while standards are useful, they are only a small part of making good code. Don't make a big number of it. Code reviews may be useful to get a team to work consistently. Make sure that all tools - IDEs, compilers, version control, build systems, documentation generators, libraries, computers , chairs , overall environment etc. etc. - are well maintained so that developers don't have to waste their time with secondary issues such as fighting project file version conflicts, Windows updates, noise and whatever banal but irritating stuff. Having to repeatedly waste considerable time with such uninteresting stuff lowers the morale, which at least won't improve code quality. In a large team, there could be one or more guys whose main job is to maintain the developer tools. When making technological decisions, think what it would take to switch the technology; which decisions are irreversible and which are not. Evaluate the irreversible decisions extra carefully. For example, if you decide to write the project in Java , that's a pretty much irreversible decision. If you decide to use some self-boiled binary format for data files, that's also a fairly irreversible decision (once the code is out in the wild and you have to keep supporting that format). But colors of the GUI can easily be adjusted, features initially left out can be added later on, so stress less about such issues. 
| {
"source": [
"https://softwareengineering.stackexchange.com/questions/129327",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/625/"
]
} |
129,407 | I'm a sole developer at my work, and while I understand the benefits of VCS, I find it hard to stick to good practices. At the moment I'm using git to develop mostly web apps (which will never be open sourced due to my work). My current workflow is to make lots of changes to the development site, test, revise, test, be happy and commit changes, and then push the commit to the live site (so if I'm working on a big new change, I may only commit once a week; but my IDE has a good undo history for uncommitted stuff). Basically, I'm only using git when switching between machines (e.g., work dev computer to home dev computer or to the live machine), but during the day I don't really see the benefit. This leads me to have long laundry lists of changes (and I have trouble finding a good message for each commit; whenever I'm in a rush, I tend to leave crappy messages like 'misc changes to admin and templates'). How often should I be committing? Should each one-line change get a commit? Should I commit before any test (e.g., at least for syntax/compiling errors) and then have to totally undo it, because the idea didn't work or the message is a lie? Should I make sure I commit each morning/afternoon before I stop working for dinner while it's still fresh? What am I missing out on by having bad VCS habits? | You are missing a lot. I'm solo too, in a way. I commit every time I make a significant change, or before I start a significant one so I can go back if I screw things up, and every now and then even if I'm not making anything big. Not every day really, but close. Sometimes a few times a day. What I get is that I can go back anytime I want. Which is a lot. Also, having the branches ordered helps. I guess it gives me a lot of order. I'm using svn, and I'm getting sick of it. But I cannot spend more time learning anything else. Good luck. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/129407",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/13719/"
]
} |
129,530 | Or in other words, what specific problems did automated garbage collection solve? I've never done low-level programming, so I don't know how complicated can freeing resources get. The kind of bugs that GC addresses seem (at least to an external observer) the kind of things that a programmer that knows well his language, libraries, concepts, idioms, etc, wouldn't do. But I could be wrong: is manual memory handling intrinsically complicated? | I've never done low-level programming, so I don't know how complicated
can freeing resources get. Funny how the definition of "low-level" changes over time. When I was first learning to program, any language that provided a standardized heap model that makes a simple allocate/free pattern possible was considered high-level indeed. In low-level programming , you'd have to keep track of the memory yourself, (not the allocations, but the memory locations themselves!), or write your own heap allocator if you were feeling really fancy. Having said that, there's really nothing scary or "complicated" about it, at all. Remember when you were a child and your mom told you to put away your toys when you're done playing with them, that she is not your maid and wasn't going to clean up your room for you? Memory management is simply this same principle applied to code. (GC is like having a maid who will clean up after you, but she's very lazy and slightly clueless.) The principle of it is simple: Each variable in your code has one and only one owner, and it’s the responsibility of that owner to free the variable’s memory when it is no longer needed. ( The Single Ownership Principle ) This requires one call per allocation, and several schemes exist that automate ownership and cleanup in one way or another so you don't even have to write that call into your own code. Garbage collection is supposed to solve two problems. It invariably does a very bad job at one of them, and depending on the implementation may or may not do well with the other one. The problems are memory leaks (holding on to memory after you're done with it) and dangling references (freeing memory before you're done with it.) Let's look at both issues: Dangling references: Discussing this one first because it's the really serious one. You've got two pointers to the same object. You free one of them and don't notice the other one. Then at some later point you attempt to read (or write to or free) the second one. Undefined behavior ensues. If you don't notice it, you can easily corrupt your memory. Garbage collection is supposed to make this problem impossible by ensuring that nothing is ever freed until all references to it are gone. In a fully-managed language, this almost works, until you have to deal with external, unmanaged memory resources. Then it's right back to square 1. And in a non-managed language, things are trickier still. (Poke around on Mozilla's bug-tracker for Firefox sometime and see if you can find how many different crash bugs were caused by their garbage collector screwing up and freeing things too early!) Fortunately, dealing with this issue is basically a solved problem. You don't need a garbage collector, you need a debugging memory manager. I use Delphi, for example, and with a single external library and a simple compiler directive I can set the allocator to "Full Debug Mode." This adds a negligible (less than 5%) performance overhead in return for enabling some features that keep track of used memory. If I free an object, it fills its memory with 0x80 bytes (easily recognizable in the debugger) and if I ever attempt to call a virtual method (including the destructor) on a freed object, it notices and interrupts the program with an error box with three stack traces--when the object was created, when it was freed, and where I am now--plus some other useful information, then raises an exception. This is obviously not suitable for release builds, but it makes tracking down and fixing dangling reference issues trivial. The second issue is memory leaks. 
This is what happens when you continue to hold on to allocated memory when you no longer need it. It can happen in any language, with or without garbage collection, and can only be fixed by writing your code right. Garbage collection helps to mitigate one specific form of memory leak, the kind that happens when you have no valid references to a piece of memory that has not yet been freed, which means the memory stays allocated until the program ends. Unfortunately, the only way to accomplish this in an automated manner is by turning every allocation into a memory leak! I'm probably going to get dinged by GC proponents if I try to say something like that, so allow me to explain. Remember that the definition of a memory leak is holding on to allocated memory when you no longer need it. In addition to having no references to something, you can also leak memory by having an unnecessary reference to it, such as holding it in a container object when you should have freed it. I've seen some memory leaks caused by doing this, and they are very difficult to track down whether you have a GC or not, since they involve a perfectly valid reference to the memory and there are no clear "bugs" for debugging tools to catch. As far as I know, there is no automated tool that allows you to catch this type of memory leak. So a garbage collector only concerns itself with the no-references variety of memory leaks, because that's the only type that can be dealt with in an automated fashion. If it could watch all your references to everything and free every object as soon as it has zero references pointing to it, it would be perfect, at least with regards to the no-references problem. Doing this in an automated manner is called reference counting, and it can be done in some limited situations, but it has its own issues to deal with. (For example, object A holding a reference to object B, which holds a reference to object A. In a reference-counting scheme, neither object can be freed automatically, even when there are no external references to either A or B.) So garbage collectors use tracing instead: Start with a set of known-good objects, find all objects that they reference, find all objects that they reference, and so on recursively until you've found everything. Whatever does not get found in the tracing process is garbage and can be thrown away. (Doing this successfully, of course, requires a managed language that puts certain restrictions on the type system to ensure that the tracing garbage collector can always tell the difference between a reference and some random piece of memory that happens to look like a pointer.) There are two problems with tracing. First, it's slow, and while it's happening the program has to be more or less paused to avoid race conditions. This can lead to noticeable execution hiccups when the program is supposed to be interacting with a user, or bogged-down performance in a server app. This can be mitigated by various techniques, such as breaking allocated memory up into "generations" on the principle that if an allocation doesn't get collected the first time you try, it's likely to stick around for a while. Both the .NET framework and the JVM use generational garbage collectors. Unfortunately, this feeds into the second problem: memory not getting freed when you're done with it. Unless the tracing runs immediately after you finish with an object, it will stick around until the next trace, or even longer if it makes it past the first generation. 
In fact, one of the best explanations of the .NET garbage collector I've seen explains that, in order to make the process as fast as possible, the GC has to defer collection for as long as it can! So the problem of memory leaks is "solved" rather bizarrely by leaking as much memory as possible for as long as possible! This is what I mean when I say that a GC turns every allocation into a memory leak. In fact, there is no guarantee that any given object will ever be collected. Why is this an issue, when the memory still gets reclaimed when needed? For a couple of reasons. First, imagine allocating a large object (a bitmap, for example,) that takes a significant amount of memory. And then soon after you're done with it, you need another large object that takes the same (or close to the same) amount of memory. Had the first object been freed, the second one can reuse its memory. But on a garbage-collected system, you may well be still waiting for the next trace to run, and so you end up unnecessarily wasting memory for a second large object. It's basically a race condition. Second, holding memory unnecessarily, especially in large amounts, can cause problems in a modern multitasking system. If you take up too much physical memory, it can cause your program or other programs to have to page (swap some of their memory out to disc) which really slows things down. For certain systems, such as servers, paging can not only slow the system down, it can crash the whole thing if it's under load. Like the dangling references problem, the no-references problem can be solved with a debugging memory manager. Again, I'll mention the Full Debug Mode from Delphi's FastMM memory manager, since it's the one I'm most familiar with. (I'm sure similar systems exist for other languages.) When a program running under FastMM terminates, you can optionally have it report the existence of all allocations that never got freed. Full Debug Mode takes it a step further: it can save a file to disc containing not only the type of allocation, but a stack trace from when it was allocated and other debug info, for each leaked allocation. This makes tracking down no-references memory leaks trivial. When you really look at it, garbage collection may or may not do well with preventing dangling references, and universally does a bad job at handling memory leaks. Its one virtue, in fact, is not the garbage collection itself, but a side-effect: it provides an automated way to perform heap compaction. This can prevent an arcane problem (memory exhaustion through heap fragmentation) that can kill programs that run continually for a long time and have a high degree of memory churn, and heap compaction is pretty much impossible without garbage collection. However, any good memory allocator these days uses buckets to minimize fragmentation, which means that fragmentation only truly becomes a problem in extreme circumstances. For a program in which heap fragmentation is likely to be a problem, it's advisable to use a compacting garbage collector. But IMO in any other case, the use of garbage collection is premature optimization, and better solutions exist to the problems that it "solves." | {
"source": [
"https://softwareengineering.stackexchange.com/questions/129530",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/24071/"
]
} |
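A small, concrete version of the A-to-B reference cycle the answer above uses to explain why plain reference counting is not enough, sketched in Python. CPython really does combine reference counting with a tracing cycle collector, which is what gc.collect() invokes here; the Node class is invented for the example.

import gc

class Node:
    def __init__(self, name):
        self.name = name
        self.other = None

# Object A holds a reference to B and B holds one back to A, so neither
# refcount can ever reach zero on its own.
a, b = Node("A"), Node("B")
a.other, b.other = b, a

del a, b                      # drop the only external references
found = gc.collect()          # tracing pass finds the unreachable cycle
print("unreachable objects collected:", found)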
129,537 | Lately I've begun to think that having lots of manager classes in your design is a bad thing. The idea hasn't matured enough for me to make a compelling argument, but here's a few general points: I found it's a lot harder for me to understand systems that rely heavily on "managers". This is because, in addition to the actual program components, you also have to understand how and why the manager is used. Managers, a lot of the time, seem to be used to alleviate a problem with the design, like when the programmer couldn't find a way to make the program Just Work TM and had to rely on manager classes to make everything operate correctly. Of course, mangers can be good. An obvious example is an EventManager , one of my all time favorite constructs. :P My point is that managers seem to be overused a lot of the time, and for no good reason other than mask a problem with the program architecture. Are manager classes really a sign of bad architecture? | Manager classes can be a sign of a bad architecture, for a few reasons: Meaningless Identifiers The name FooManager says nothing about what the class actually does , except that it somehow involves Foo instances. Giving the class a more meaningful name elucidates its true purpose, which will likely lead to refactoring. Fractional Responsibilities According to the single responsibility principle, each code unit should serve exactly one purpose. With a manager, you may be artificially dividing that responsibility. Consider a ResourceManager that coordinates lifetimes of, and access to, Resource instances. An application has a single ResourceManager through which it acquires Resource instances. In this case there is no real reason why the function of a ResourceManager instance cannot be served by static methods in the Resource class. Unstructured Abstraction Often a manager is introduced to abstract away underlying problems with the objects it manages. This is why managers lend themselves to abuse as band-aids for poorly designed systems. Abstraction is a good way to simplify a complex system, but the name “manager” offers no clue as to the structure of the abstraction it represents. Is it really a factory, or a proxy, or something else? Of course, managers can be used for more than just evil, for the same reasons. An EventManager —which is really a Dispatcher —queues events from sources and dispatches them to interested targets. In this case it makes sense to separate out the responsibility of receiving and sending events, because an individual Event is just a message with no notion of provenance or destination. We write a Dispatcher of Event instances for essentially the same reason we write a GarbageCollector or a Factory : A manager knows what its payload shouldn’t need to know. That, I think, is the best justification there is for creating a managerlike class. When you have some “payload” object that behaves like a value, it should be as stupid as possible so that the overall system remains flexible. To provide meaning to individual instances, you create a manager that coordinates those instances in a meaningful way. In any other situation, managers are unnecessary. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/129537",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/27083/"
]
} |
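A tiny sketch of the closing point in the answer above: the event payload stays dumb while a Dispatcher, a "manager" whose name says what it actually does, owns the routing knowledge. Written in Python; the class, method and event names are invented for illustration.

class Dispatcher:
    """Coordinates dumb event payloads; knows what the payloads shouldn't need to."""

    def __init__(self):
        self._handlers = {}

    def subscribe(self, event_type, handler):
        self._handlers.setdefault(event_type, []).append(handler)

    def dispatch(self, event_type, payload):
        for handler in self._handlers.get(event_type, []):
            handler(payload)

bus = Dispatcher()
bus.subscribe("user_created", lambda payload: print("send welcome mail to", payload))
bus.dispatch("user_created", "alice@example.com")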
129,543 | In a general sense, for long term projects that may have multiple releases during the products life cycle and require support of previous products, what is the best way to handle product versions and branching of the code base? In a more specific sense, assume that proper distributed version control is in place (i.e. git) and that the teams are small to large in size and that developer may be working on multiple projects at once. The major issue that is being faced is that there is a contractual obligation to support old versions as they existed at the time which means that new development can not patch old code (Microsoft Office products could be an example of this, you only get patches for the feature year you own). As a result the current product versioning is a touch convoluted as each main product has multiple dependencies, each with their own versions which may change between annual releases. Likewise, while each product has its own repository, most of the work is not done on the main source trunk but rather on a branch for that years product release with a new branch being made when the product is released so that it may be supported. This in turn means that getting a product's code base isn't a simple matter as one might think when using version control. | How much (and what kind of) structure you need depends a lot on what you want to be able to do. Figure out what you can't live without, what you want to have, and what you don't care about. A good example set of decisions might be: Things we can't live without: be able to reconstruct any past release at any time be able to maintain multiple supported major versions of the product at any time Things we would like to have: be able to perform ongoing major-feature development (for the next major release) without worrying about branch merges be able to perform maintenance updates to past releases Things we can live without: automated backporting of changes from current work to past releases never interrupt major feature development even for a few days or a week at at time If the above were your goals, you could then adopt a process like this: Do all development work on the trunk of your VCS ("master" in git) When you are close to a major release, halt major feature development, and focus on system stability for a week or so When the trunk seems stable, create a branch for this major release Major feature development can now proceed on the trunk, while only bug fixes and release preparation are allowed on the branch However, all bug fixes to be made to the branch must first be tested on the trunk; this ensures that they will also be present in all future releases Create a (VCS) tag on the branch when you are ready to release; this tag can be used to recreate the release at any time, even after further work on the same branch Further maintenance releases to this major release (minor releases) can now be prepared on the branch; each will be tagged before release In the mean time, major feature development geared toward the next major release can continue on the trunk When you get close to that release, repeat the above steps, creating a new releases branch for that release . This allows you to have multiple major releases, each on their own branch, in supported status at the same time, with the ability to release separate minor releases against each. 
This process won't answer all of your questions -- in particular, you will need a process in place to decide what fixes can be made to a release branch, and to ensure that bugs are not fixed on a release branch first (such fixes should always be tested on the trunk where possible). But it will give you a framework in which to make such decisions. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/129543",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2471/"
]
} |
129,570 | Today I held my first interview with potential interns.
While it consisted mostly of open questions, I also had some trivial programming tasks for them: Write a function that returns true if triangle sides (all integers) a, b and c can represent a right triangle. FizzBuzz. Calculate the Nth element of Fibonacci using recursion (if they didn't know what Fibonacci was, I would even write them the definition F(n) = F(n-1) + F(n-2); F(1) = 1; F(0) = 1). Implement a List structure for integers and write a function to reverse it. These are obviously very easy tasks and I was not prepared for someone not to solve them. How should I act when they struggle with these questions? Should I give away the answer? Give hints one by one (I did that and ended up solving the problem myself)? Or just move on with (or maybe just stop) the interview? PS: By having problems with the questions, I don't mean having a bug; I mean they can't even get started. This was the case with the Fibonacci and List questions. | My goal for any job interview, no matter which side I'm on, is to end up feeling like I'm talking to a colleague. Colleagues come into my office all the time when they're stuck on a problem. I ask my colleagues for help when I get stuck myself. So in an interview, I try to recreate that dynamic. In other words, what would you say if a colleague needed to implement a Fibonacci sequence and didn't know what it was? You would explain it to them until they grasp it enough to continue on their own. There's no shame in ignorance as long as it's not permanent. If you go through that exercise and still can't picture yourself working with that person, then they're not a good fit for the job. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/129570",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/38658/"
]
} |
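For reference, two of the interview tasks from the question above admit very short solutions, sketched here in Python, although any language the candidate knows would do. The right-triangle check assumes "can represent" means that some ordering of the three integer sides satisfies Pythagoras; the Fibonacci function transcribes the definition given in the question.

def is_right_triangle(a: int, b: int, c: int) -> bool:
    # Sort so the hypotenuse candidate is last, then apply Pythagoras.
    x, y, z = sorted((a, b, c))
    return x > 0 and x * x + y * y == z * z

def fib(n: int) -> int:
    # Direct transcription of F(n) = F(n-1) + F(n-2), with F(1) = F(0) = 1.
    return 1 if n < 2 else fib(n - 1) + fib(n - 2)

print(is_right_triangle(3, 4, 5), is_right_triangle(2, 3, 4))  # True False
print([fib(n) for n in range(7)])                              # [1, 1, 2, 3, 5, 8, 13]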
129,603 | I work on a new venture at a large enterprise software company (3000+ programmers). In my group, we have a bunch of projects and people usually work on several projects over the course of a year. I just started work on a project that has been previously maintained by a buddy of mine (consultant who's been with us for 3+ years) to add some features. I got into the code, and the quality was really quite poor. Whether it was on the UI frontend or the services backend, the code simply wasn't indented, there were hundreds of lines commented out for no apparent reason, documentation was basically non-existent, coding standards weren't applied consistently (e.g. mixing camelCase and under_scored_variables ), variable names were unintelligible, datatype choices were wrong, etc. etc. I'm very much a non-confrontational person, so I don't want to attack my coworker, but I also don't just wanto to go to my boss and complain about his performance. What are the kinds of things I could say to politely mention that the code is poorly structured? EDIT: I want to clarify that while I understand there is an element of "Everyone else's code sucks" to all programmers, when I see something like this (names are chosen on purpose and some details left out/changed in this example): public void doCalculate(Object argument) {
if (argument instanceof String) {
String argument2 = (String) argument;
if (argument2 == "DataBase") {
// do something
} else {
long argument3;
try {
argument3 = Long.parseLong(argument2);
} catch (Exception e) {
argument3 = -1;
}
// do something completely unrelated
}
}
} I think it's objectively fair to say that this is not a good idea. Furthermore, I'm not dealing with a newbie here (I'm only coding 3 years now). He's got maybe 20 years of experience on me. The advice you guys have given so far is great; just wanna make sure that we're not talking about a "fine line" here. | Ask him to explain his code to you Tell him you've never seen X programmed that way before, and ask him why he codes it that way. Show him the way you code it, and tell why you do it that way (best practices, better performance, less chance of errors, easier for other programmers to read/maintain, etc). Be sure to prepare all your arguments in advance, and focus on why your method is best instead of why his method is worst. Afterwards, see if he still supports his method over yours. If he is open to improvement, he will likely change his way of coding. If he still prefers to use his style of coding over yours, you are not likely to change his opinion. This is the exact same answer I gave for the question How to tell your boss that his programming style is really bad? . I originally voted to close this question as a duplicate, however enough people thought it was different enough to warrent re-opening it. My answer is still the same though, regardless of if you're talking to a boss or a co-worker. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/129603",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/25393/"
]
} |
129,604 | Reading this site and SO I've seen many stories of interview questions and answers saying a candidate had to implement a linked list from scratch. Usually this is a "gimme" exercise for programming role candidates like writing FizzBuzz. The idea is that if the candidate can't do this, they can't program and should be rejected almost immediately. However, I can't help but think this could be a poor practice for the following reasons: Modern higher level languages like C# and Python natively use lists extensively; writing your own linked list object would be only required under unusual circumstances and even then probably ill-advised. Lower level languages like C++ have standard libraries with iterators/list containers and objects. In light of the first two points, coders can go years without even thinking about implementing a list (linked, doubly-linked, etc) themselves. Some may not even really see such things since college days. Computing power also isn't the factor it was years ago, so efficiency via pointers isn't the issue it used to be (in general). A simple web search of something like "linked list example" would bring up plenty of code examples that could just be memorized and spat back out, not really indicating the true competence of the applicant. I should say that using a linked list to lead to open-ended questions/discussions of candidates' problem solving/critical thinking abilities is mostly likely a really good interview practice. Any way an interviewer can really see what an applicant is like and how they think is massively beneficial. I think this binary approach of "no linked list code, no job" for programmers working on a desktop or web application is a bit outdated. It could also be quite harmful; a candidate who can't remember how to properly work with the head of a list could be an otherwise excellent coder and co-worker and be lost in the mix. Thoughts? EDIT : There are many (good) comments suggesting that whether this is a good or bad question to ask depends on the context of the job. I strongly agree, so let me rephrase this question: Implementing a linked-list is a common interview question for a wide range of coding jobs, similar to questions like FizzBuzz or writing a recursive function for calculating factorials. Does this question have enough utility to be used commonly for evaluating programming candidates across the board? Or should considered a bad question to ask except for "Senior Developer, Embedded Linked Lists Team" positions? | If answering the question tells you what you want to know about a candidate, then it's a good interview question. If it doesn't tell you that, it's a bad question. Easy questions like FizzBuzz do serve a specific purpose. If a candidate can't code FizzBuzz, they simply can't code and you can end the interview early. I'd rate implementing a linked list only slightly harder, but it can start a conversation about data structures in general that will reveal a lot. Just remember that no single interview question will tell you everything you want to know. You really need to have a group of questions ready. You should ask questions in a sequence from easiest to hardest so you can find the limit of what the candidate knows. If you ask one question and they nail it, you still don't know what else they do or do not know. Regarding your edit: Does this question have enough utility to be used commonly for evaluating programming candidates across the board? 
Or should it be considered a bad question to ask except for "Senior Developer, Embedded Linked Lists Team" positions? I think it is a good general-purpose question that could be used for evaluating practically any programming candidate. It just needs to be part of a larger group of questions. It would be a good ice breaker for many types of positions (even if the candidate can't implement a linked list from scratch, maybe they can explain how they've used one before and what the key functions are), or the beginning of a long sequence of more advanced questions for the "Senior Developer, Embedded Linked Lists Team" position.
"source": [
"https://softwareengineering.stackexchange.com/questions/129604",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/36853/"
]
} |
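Since the discussion above is about implementing a linked list from scratch, this is roughly what the "gimme" exercise amounts to: a singly linked list with an iterative reversal, sketched in Python with invented class and method names.

class Node:
    def __init__(self, value, next_node=None):
        self.value = value
        self.next = next_node

class LinkedList:
    def __init__(self):
        self.head = None

    def push_front(self, value):
        self.head = Node(value, self.head)

    def reverse(self):
        prev, current = None, self.head
        while current:    # re-point each node at its predecessor
            current.next, prev, current = prev, current, current.next
        self.head = prev

    def to_list(self):
        values, node = [], self.head
        while node:
            values.append(node.value)
            node = node.next
        return values

lst = LinkedList()
for v in (1, 2, 3):
    lst.push_front(v)        # list is now 3 -> 2 -> 1
lst.reverse()
print(lst.to_list())         # [1, 2, 3]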
129,714 | I'm new to github, and am looking for advice on how to manage issues. I'm used to having priority and other ordering options but see that none exist. How do others manages issues during the lifecycle of a bug/feature? Thanks in advance. | You could define different groups of labels like issue types , issue priorities , issue statuses , version tags , and maybe more. In order to be able to see instantly to which group a label belongs to you could use a naming convention like <label-group>:<label-name> . Using such a naming convention should make managing Github issues much easier and helps others to "understand" issues much faster. Note that you can also assign colors to labels which can add even more to readability (I would use a specific color for each label group). But because you still have to assign/unassign those labels to/from issues manually you might want to keep the overall list of groups/labels small. According to the scheme suggested above you might define groups and corresponding labels as follows. 'issue type' group type:bug type:feature type:idea type:invalid type:support type:task 'issue priority' group prio:low prio:normal prio:high 'issue status' group (These labels describe an issue's state in a defined workflow.) status:confirmed status:deferred status:fix-committed status:in-progress status:incomplete status:rejected status:resolved 'issue information' group info:feedback-needed info:help-needed info:progress-25 info:progress-50 info:progress-75 'version tag' group ver:1.x ver:1.1 | {
"source": [
"https://softwareengineering.stackexchange.com/questions/129714",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/45054/"
]
} |
129,806 | I barely passed my Java programming exam today. I had to answer some general questions about threading which I did well and to write a little threaded program which was worse. I had to connect my laptop to the projector screen and write the program right away. My first attempt was to use anonymous classes but I forgot the exact syntax. Maybe because of some excitement or maybe because last two weeks I was coding mostly in php. Then I asked is it allowed to use the API documentation. The answer was "NO". So I decided to go another way around and I implemented Runnable. The program was doing what was requested at the end. Of course the examiners noticed my first fail and that affected my score greatly. I was amazed that it was not allowed to use the API documentation. So, my question is: Is it really important to be able to code flawlessly without API documentation? Should I develop this skill? Is it really important in the real world and in the work environment? While on programming courses I focused on learning patterns, developing skills to write good design applications, skills to use API and find the necessary information fast. I was not trying to learn how to program without API documentation. Is it must-have during job interviews (coding without API documentation)? | In Real Life™, I would rate this skill as a "nice to have", but not at all required. In a university setting it is different, however. An ability to code without documentation can be used as an indirect indication of student's familiarity with the subject. In a sense, seeing how you solve a problem without looking at the documentation tells the professor that you have practiced using the API before -- by doing your homework and other assignments, or perhaps even by programming for fun on your own. A smart person with a cursory understanding of the API in question should be able to figure out almost any Java API on her own by looking at the documentation. This is not a coincidence: programmers are expected to learn on the job, and the API documentation for popular programming systems, including Java, is structured to help them learn quickly. To helps programmers complete their tasks without requiring prior familiarity with the API, the documentation often supplies short, self-contained examples, which illustrate the concept in a brief and concise way. This covers a wide range of tasks, including most problems that you may see at an examination. Unfortunately, this works directly against your professor's goal of measuring your knowledge of the subject, as opposed to measuring how smart you are. Hence, in a university settings it is not unreasonable to ask you to code without looking at the documentation. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/129806",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/21258/"
]
} |
129,859 | I'm looking to get a job as a Python programmer. I know the basics of the language and have created a few games with it using pygame . I've also started to experiment with Django . However, looking at the job market, it doesn't seem very many Python jobs are web-related. On the desktop side of things, it doesn't seem like very many companies use the popular GUI libraries like pyQt or wxPython . How are companies actually using Python? What areas should one focus on to land a job as a Python programmer? | The thing about interpreted languages is companies that don't want to give their source code away don't use it in delivered software, so almost all the jobs you will see are web related. You might have better luck searching for specific frameworks like Django. If there's an open source project written in python you like, you might apply to a company that sponsors it. It usually won't make it into the job description, but it's almost an underground among programmers who use languages like C++ to use python when they have a choice, for one-off utilities, in-house applications, or things like automated test scripts that aren't shipped with their official product. Some high-end software like Maya uses python for scripting, so that might be another route to pursue. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/129859",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17779/"
]
} |
129,890 | From my understanding SVN is 'Easy to branch. Difficult to merge'. Why is that? Is there a difference how they merge? | Please see my Stack Overflow answer for a very concrete situation where Mercurial (and Git) merges without problems and where Subversion presents you with a bogus conflict. The situation is a simple refactoring done on a branch where you rename some files. With regard to tdammers answer, then there is a number of misunderstandings there: Subversion, Mercurial, and Git all track repository-wide snapshots of the project. Calling them versions , revisions , or changesets makes no difference. They are all logically atomic snapshots of a set of files. The size of your commits makes no difference when it comes to merging. All three systems merge with the standard three-way merge algorithm and the inputs to that algorithm are greatest common ancestor version version on one branch version on other branch It doesn't matter how the two branch versions were created. You can have used 1000 small commits since the ancestor version, or you can have used 1 commit. All that matters is the final version of the files. (Yes, this is surprising! Yes, lots of DVCS guides get this horribly wrong.) He also raises some good points about the differences: Subversion has some "voodoo" where you can merge from /trunk into, say, /branches/foo . Mercurial and Git does not use this model — branches are instead modeled directly in the history. The history therefore becomes a directed acyclic graph instead of being linear. This a much simpler model than the one used by Subversion and this cuts away a number of corner cases. You can easily delay a merge or even let someone else handle it. If hg merge gives you a ton of conflicts, then you can ask your coworker to hg pull from you and then he has the exact same state. So he can hg merge and maybe he's better at resolving conflicts than you are. This is very difficult with Subversion where you're required to update before you can commit. You cannot just ignore the changes on the server and keep committing on your own anonymous branch. In general, Subversion forces you to play around with a dirty working copy when you svn update . This is kind of risky since you haven't stored your changes anywhere safe. Git and Mercurial lets you commit first, and then update and merge as necessary. The real reason Git and Mercurial are better at merging than Subversion is a matter of implementation. There are rename conflicts that Subversion simply cannot handle even thought it's clear what the correct answer is. Mercurial and Git handles those easily. But there's no reason why Subversion couldn't handle those as well — being centralized is certainly not the reason. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/129890",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
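A deliberately naive sketch of the three-way merge rule described in the answer above, in Python. Real merge tools diff each side against the common ancestor so they also handle inserted and deleted lines; this toy only compares aligned lines. The point it illustrates is that the merge sees three snapshots, not the commits that produced them.

def merge3(base, ours, theirs):
    """Toy line-by-line three-way merge over equally long line lists."""
    merged = []
    for b, o, t in zip(base, ours, theirs):
        if o == t:                 # both sides agree (or both left it alone)
            merged.append(o)
        elif o == b:               # only their side changed this line
            merged.append(t)
        elif t == b:               # only our side changed this line
            merged.append(o)
        else:                      # both changed it differently: conflict
            merged.append(f"<<<<<<< {o} ======= {t} >>>>>>>")
    return merged

base   = ["def f():", "    return 1", "# end"]
ours   = ["def f():", "    return 2", "# end"]
theirs = ["def func():", "    return 1", "# end"]
print(merge3(base, ours, theirs))
# ['def func():', '    return 2', '# end']  -- both edits merge cleanly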
129,950 | I have a strange situation at work, where a colleague of mine often asks me and other co-workers for working code. I would like to help him, but this constant request of trivial snippets interrupts my thoughts and sometimes makes it hard to concentrate. Plus, I have the impression (...) that this requests are generated by lack of competence, more than by laziness. In fact, he often asks things pretending to know the answer, since when I solve the problem he usually says things like "Sure", "Yes, that's what I thought", giving me the impression that my answer isn't worth it. How can I solve this embarrassing situation? Should I show more explicitly in front of other colleagues his lack of knowledge (by saying things like: "do it yourself if you can, please") or continue giving him what he wants? I think that he should aggregate all his questions in one, so that I can give him a portion of my time and he can work all by himself on his things. There is no hierarchy in the team, I must say we both have a similar seniority of five years, more or less. For the same reason I believe I cannot report to management, since trivial questions are often ignored. I discussed with other two members and they agree with me: in fact he often ask things cycling through colleagues. | My response would be to say "I'm a little busy right now, can you email me and I'll deal with it later". Chances are some of his questions are legitimate, by forcing him to email you it doesn't interrupt your flow and he is unlikely to bother detailing the problem in an email if its trivial. You then also have a record to show to management if his questions still stay at an unreasonable level. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/129950",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1939/"
]
} |
129,961 | An awkward, open question, but it's a problem I'm always bumping against: Software that's easy to maintain and work with is software designed well. Trying to make a design intuitive means naming your components in such a way that the next developer should be able to infer the function of the component. This is why we don't name our classes "Type1", "Type2", etc. When you're modelling a real-world concept (e.g. customer) this is generally as simple as naming your type after the real-world concept being modelled. But when you're building abstract things, which are more system-oriented, it's very easy to run out of names which are both easy to read and simple to digest. It gets worse (for me) when trying to name families of types, with a base-type or interface which describes what the components have to do, but not how they work. This naturally leads to each derived type trying to describe the flavour of the implementation (e.g. IDataConnection and SqlConnection in the .NET Framework), but how do you express something complicated like "works by reflection and looking for a specific set of attributes"? Then, when you've finally picked a name for the type you think describes what it's trying to do, your colleague asks "WTF does this DomainSecurityMetadataProvider actually do? " Are there any good techniques for choosing a well-meaning name for a component, or how to build a family of components without getting muddled names? Are there any simple tests that I can apply to a name to get a better feel for whether the name is "good", and should be more intuitive to others? | For naming, there are six techniques that were proven to work for me: spend a lot of time on inventing names use code reviews don't hesitate to rename spend a lot of time on inventing names use code reviews don't hesitate to rename PS. In case if your API is going to be public, above applies before that - because, you know, "Public APIs, like diamonds, are forever. You have one chance to get it right so give it your best..." (Joshua Bloch, How to Design a Good API and Why it Matters ) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/129961",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8616/"
]
} |
130,250 | We're launching a system, and we sometimes get the famous exception NullReferenceException with the message Object reference not set to an instance of an object . However, in a method where we have almost 20 objects, having a log which says an object is null, is really of no use at all. It's like telling you, when you are the security agent of a seminar, that a man among 100 attendees is a terrorist. That's really of no use to you at all. You should get more information, if you want to detect which man is the threatening man. Likewise, if we want to remove the bug, we do need to know which object is null. Now, something has obsessed my mind for several months, and that is: Why doesn't .NET give us the name, or at least the type of the object reference, which is null? . Can't it understand the type from reflection or any other source? Also, what are the best practices to understand which object is null? Should we always test nullability of objects in these contexts manually and log the result? Is there a better way? Update: The exception The system cannot find the file specified has the same nature. You can't find which file, until you attach to the process and debug. I guess these types of exceptions can become more intelligent. Wouldn't it be better if .NET could tell us c:\temp.txt doesn't exist. instead of that general message? As a developer, I vote yes. | The NullReferenceException basically tells you: you are doing it wrong. Nothing more, nothing less. It's not a full-fledged debugging tool, on the contrary. In this case I'd say you're doing it wrong both because there is a NullReferenceException you didn't prevent it in a way you know why/where it happened and also maybe: a method requiring 20 objects seems a bit off I'm a big fan of checking everything before things start going wrong, and providing good information to the developer. In short: write checks using ArgumentNullException and the likes and write the name yourself. Here's a sample: void Method(string a, SomeObject b)
{
if (a == null) throw new ArgumentNullException("a");
if (b == null) throw new ArgumentNullException("b");
// See how nice this is, and what peace of mind this provides? As long as
// nothing modifies a or b you can use them here and be 100% sure they're not
// null. Should they be when entering the method, at least you know which one
// is null.
var c = FetchSomeObject();
if (c == null)
{
throw new InvalidOperationException("Fetching B failed!!");
}
// etc.
} You could also look into Code Contracts ; it has its quirks, but it works pretty well and saves you some typing. Update: in .NET 6 there's ThrowIfNull , which removes the need for stating the parameter name. And since C# version 6, there's the nameof operator, removing the rather error-prone need to repeat the argument name in a hardcoded string. void Method(string a, SomeObject b)
{
// .NET 6
ArgumentNullException.ThrowIfNull(a);
// C# 6
if (b == null) {
throw new ArgumentNullException(nameof(b));
}
} As to why this is not the default: we can only guess (or try to dig up notes somehow from people working on C# and/or .Net). My guess: sometimes it's not needed or wanted sometimes the message should have more than just the parameter name there are alternatives (stacktrace, adding it manually) it gives developers a choice adding this everywhere by default isn't cheap because it means nearly all argument and variable names must be added as strings in the executable | {
"source": [
"https://softwareengineering.stackexchange.com/questions/130250",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/31418/"
]
} |
130,415 | Dart has been out for ages (in internet years), but judging by Google Trends , it hasn't gotten much hype, and the fact that it only works in Chrome doesn't help either. Nonetheless, Chrome is gaining market share every day, which lends itself to a better view on Dart. What is the big picture now? What state is the language in? Do people regard highly of it? Market share? Web App showcase? Some feature implementations that make you go "I have to use it"? | The short answer to "What's the state of Dart?" is: it's in Technology Preview. That's a special way of saying, "we launched early so we can open source everything and work in the open." "Technology preview" also means "we're not even in Alpha yet, we have a lot of work to do, but there's enough there for you to play with and give feedback." Internet time may work for news stories or consumer product iterations, but probably not for something as ambitious and broad as the Dart effort. Remember, Dart is more than just a language. It's also a set of libraries, a better DOM interface, a virtual machine, an Editor, and integration with Chrome. The team is working very hard on a lot of parallel threads, but I personally expect it'll be six months before we have most of the pieces in place. It's not true that Dart only works in Chrome. Dart compiles to JavaScript and targets modern browsers. Sure, Chrome will be the first to launch with native Dart support, but ensuring Dart compiles to performant and effective JavaScript is a core constraint and feature of the project. The big picture is that Dart will become a "batteries includes" development environment for modern web apps. Dart's driving goal is to help ensure the web remains a productive and enjoyable platform for app development and deployment. This means a lot of pieces need to fall into place: language, libraries, editors, virtual machines, and browser integration. Put all together, we believe Dart will be a compelling option for modern web app developers. The big big BIG picture is that we want to bring app developers to the web, and we want web developers to write more complex web apps. If they use Dart, that's great. But at the end of the day, the language doesn't matter. The only thing that matters is that complex, client side, high fidelity, low latency, beautiful modern web apps are being built. The language is in a state of development. We see new releases to the spec approximately once per month. Major features are missing, such as reflection, but we keep iterating. We just added map() support to Collection, for example. Gilad Bracha, a guy who knows his languages (having created NewSpeak and worked on the Java Lang Spec) and Josh Bloch, a guy who knows his libraries (having written Effective Java and worked on the Java Collection libraries) are working on the language and libraries, along with the greater team. Do people regard highly of Dart is hard to generalize, and it probably doesn't matter too much to you. You should draw your own conclusions after having played with Dart. My experience is that app developers from other platforms such as Java, C#, or Flex find Dart attractive and familiar. My experience with JavaScript developers is split. If that JavaScript developer has also built apps on other platforms, they are cautiously optimistic about Dart (or, at least, the solution it's trying to provide). If that JavaScript developer grew up on JavaScript and has only programmed in JavaScript, there's more hesitation. 
This could be some fundamental concern about the language, or hesitation in leaving a comfort zone, or simply not running into edge cases with JavaScript. This is just a generalization, but I've seen plenty of people become productive in Dart extremely quickly. As for market share, it's extremely early in the game. It's probably not the right question to ask, as Dart isn't even shipping. A more interesting question would be, "What is the market share of apps on the web?" and then go figure out how we can address that. As for a Web App showcase, the Dart team built Swarm, a slick newsreader. Unfortunately, we only have it in source code right now: http://www.dartlang.org/samples/index.html As for some "killer" features, I would say there are a few pretty interesting ones: optional types are slick; they add annotations and documentation for humans and machines. Isolates are a great way to achieve concurrency in a safe manner. Libraries (modularity) are sorely needed for the web stack, and Dart has libraries and classes. Snapshots will allow for extremely fast start-up. Bundled libraries (like collections, Stopwatch, etc.) will unify code bases and shrink shippable code. The nice new DOM interface makes working with the DOM much more enjoyable; it feels like native Dart code. I hope I've answered your questions. I think the only question that matters is, "Does my language help me build complex, high fidelity, low latency, modular, modern web apps?" The end state of all of this is simply helping more app developers deliver more successful apps to the modern web. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/130415",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/41507/"
]
} |
130,416 | I develop personal projects on two machines without use of a shared server or a network connection between the two. Do any common version control systems reliably support use of portable storage (such as a USB flash device) as the shared repository? | Use a DVCS such as Git or Mercurial . Distributed version control systems do not have a shared central server. With a DVCS, every copy of a repository holds the complete history - everything. This means that when used on a USB key, any changes you make are made to the repository on the USB key, and when the key is moved between computers it carries that history with it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/130416",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3912/"
]
} |
130,556 | I know this may be a general question, but what exactly goes into scaling for all the users you will encounter, even if not in the next few months? I did some research and most of what is done is server side, with caches and other similar stuff. Any points in the right direction would be helpful. | "Even if not in the next few months" smells like premature optimization. Don't do that: It's not worth it. Many people, when realizing a project, believe that their project will be the second Facebook. Then they release it, and then they notice that ten visitors per month is the best they can do. Spending less money and time on optimizing for further scalability and more time thinking about the project itself would help. It's like with bottlenecks and profiling: you always have an impression that you know perfectly well where the bottleneck is, and in most cases, you discover that you were wrong when profiling. Do your application first. See how is it used. Profile it. Gather BI data. Gather performance metrics. Analyse it. Check if your analysis was right. Make the predictions about the future, based on your analysis, then optimize for scalability what you really need to optimize. For example, a scalable web application must be able to be hosted on several servers. The last project I've done for my customer was intended to be scalable. There was a choice: either we spend 1.5 months more in order to make the web app work on several servers, or the customer buys a high quality server (one machine only) and the web app is hosted on this machine only. It was much less expensive to buy an expensive server, counting both the direct cost (price of the server vs. price of 1.5 months of work) and the long term savings (power consumption of one high quality server vs. power consumption of several low-end servers). Now the app is running for a few months, and according to the metrics, if there would be a problem with scalability one day, it would concern in first place the database, and in second place the network infrastructure (switches, routers, etc.). Now, an application may be more or less scalable on several points: Database: according to my personal experience, most scalability-related problems come from the database. Hopefully, there are plenty of ways to make the database scalable for every industry-grade database engine, and even before that, there are plenty of ways to improve the database structure and to optimize the queries . Caching helps too. Network: bandwidth can be a real issue in some configurations, and sometimes you can't do anything about it without doing expenses you can't afford. To avoid being blocked on this level, you can optimize the visual design of the website in order to have less images or better compressed images , optimize the layout of the page, reduce the HTTP requests (through CSS sprites ), reduce the quantity of HTML code sent (through AJAX ), etc. HTTP compression is a must-have. Browser caching too, but your metrics may show that many clients have an empty cache. CPU and memory usage: porting an application to be hosted by several servers can be painful too, both on infrastructure (hardware) level and on application (software) level. To avoid this, use extensive caching and profile the application, removing progressively the bottlenecks . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/130556",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/43277/"
]
} |
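As a minimal illustration of the caching advice in the answer above (a generic hypothetical sketch; the time-to-live value, key type, and loader shape are arbitrary, and the lookup-then-put is not strictly race-free), even a small in-memory cache in front of an expensive lookup can remove a lot of repeated work before any server-level scaling is needed:
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;
class SimpleCache<K, V> {
    private final Map<K, Entry<V>> entries = new ConcurrentHashMap<>();
    private final long ttlMillis;
    SimpleCache(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }
    V get(K key, Function<K, V> loader) {
        Entry<V> e = entries.get(key);
        long now = System.currentTimeMillis();
        if (e == null || now - e.createdAt > ttlMillis) {
            // recompute on a miss or after expiry; concurrent callers may recompute twice,
            // which is acceptable for a sketch like this
            e = new Entry<>(loader.apply(key), now);
            entries.put(key, e);
        }
        return e.value;
    }
    private static final class Entry<V> {
        final V value;
        final long createdAt;
        Entry(V value, long createdAt) {
            this.value = value;
            this.createdAt = createdAt;
        }
    }
}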
130,571 | Why was the property string foo = string.Empty included in the BCL? It seems more verbose and no clearer than just using an empty string ( string foo = "" ) | I can only assume here: string.Empty has been defined for explicitness - when initializing a string, it may not be clear from context that "" was indeed explicitly meant as an initializer (instead of null or say " " or just as a place holder during testing). Using string.Empty is a definite answer to that sort of conundrum. It may also be a throwback to C - an empty string in C is not an empty string. It is a character array whose first character is null (hence, empty), which is not the same as C#. My point here being that in different languages you would represent an empty string in different ways (and they may have different meanings) - having a string.Empty precludes such ambiguity. As opposed to what others say about multiple objects - this is not a problem as any string literal will get interned on compilation. This includes the value of string.Empty - "" . Any time either of these are repeated in code, the object will be retrieved from the intern pool. This is true per app domain . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/130571",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/29270/"
]
} |
130,601 | I just noticed that every modern OO programming language that I am at least somewhat familiar with (which is basically just Java, C# and D) allows covariant arrays. That is, a string array is an object array: Object[] arr = new String[2]; // Java, C# and D allow this Covariant arrays are a hole in the static type system. They make type errors possible that cannot be detected at compile-time, so every write to an array must be checked at runtime: arr[0] = "hello"; // ok
arr[1] = new Object(); // ArrayStoreException This seems like a terrible performance hit if I do lots of array stores. C++ does not have covariant arrays, so there is no need to do such a runtime check, which means there is no performance penalty. Is there any analysis done to reduce the number of runtime checks necessary? For example, if I say: arr[1] = arr[0]; one could argue that the store cannot possibly fail. I'm sure there are lots of other possible optimizations I haven't thought of. Do modern compilers actually do these kinds of optimizations, or do I have to live with the fact that, for example, a Quicksort always does O(n log n) unnecessary runtime checks? Can modern OO languages avoid the overhead created by supporting co-variant arrays? | D doesn't have covariant arrays. It allowed them prior to the most recent release ( dmd 2.057 ), but that bug has been fixed. An array in D is effectively just a struct with a pointer and a length: struct A(T)
{
T* ptr;
size_t length;
} Bounds checking is done normally when indexing an array, but it's removed when you compile with -release . So, in release mode, there's no real performance difference between arrays in C/C++ and those in D. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/130601",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3684/"
]
} |
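A small Java illustration of the trade-off discussed above (a hypothetical sketch; the class name is arbitrary): the array store is only checked at runtime, while the equivalent mistake with a generic collection is rejected at compile time:
import java.util.ArrayList;
import java.util.List;
public class CovarianceDemo {
    public static void main(String[] args) {
        Object[] arr = new String[2];   // legal: arrays are covariant
        arr[0] = "hello";               // fine
        try {
            arr[1] = new Object();      // compiles, but fails at runtime
        } catch (ArrayStoreException e) {
            System.out.println("runtime check caught: " + e);
        }
        List<String> strings = new ArrayList<String>();
        // List<Object> objects = strings;   // does not compile: generics are invariant
        strings.add("hello");
        System.out.println(strings);
    }
}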
130,645 | I am leading a team of 3-4 junior developers. My job-- besides writing code-- is to provide supervision and guidance for the juniors. But, I fully understand how much developers cherish autonomy in their work, and I don't want to destroy their intrinsic motivation by spoon-feeding them with my thoughts and my algorithms; I want them to explore the problem in their own ways, and think about it themselves and only come to me when they are really facing insurmountable issues. When they do come to me, sometimes I would have to propose a completely different algorithm to solve the problem because their algorithm isn't robust enough ( remember, I am the senior and I have seen more than them). Of course I would explain this in a nice manner so as not to hurt their feelings, and I would gently outline how my solution is vastly superior than theirs, no condescending tone or condemning words. But still, they are sometimes reluctant to accept my suggestion, partly because they have invested so much in their own algorithm, or partly because of the fear that using a new method would entail more learning time and make them appear to the management as if they are going nowhere. But deep in my heart I know very well that my algorithm is much better than theirs and they should just adopt it. What should I do if they didn't adopt my suggestion? Should I just ask them to follow my way, or should I just let them have their heads banged on the wall many many more times and wait for them to come back to me? Doing the former makes me into a dictator, but doing the later would cost us precious development time and incur bug fixing cost. I am really in a dilemma here. | Help them to understand why they should make your suggested change. And listen to them if they have a good reason not to make the change. Have a discussion, and come to an agreement on the basis of what is the best thing to do. This approach is important for the following reasons: You want them to be making the change because of solid business/technical reasons. It's important to be clear on what these are (any remember that you could also be wrong, so be humble....). You really want to convey the reasoning behind your suggestion - that way the recipient will learn to solve similar problems themselves in the future. You'll also have a better relationship if your juniors feel that they are learning some good insights from you. You won't be respected if you use your seniority and can't demonstrate that you actually have good reasons. Your boss would presumably like to be confident that you are using your juniors' time effectively on things that create real value, not just "doing it my way" for the sake of it. If you are smart, you can also get them to come to the answer just by asking questions . Done right, your junior will come to the correct conclusion themselves (and therefore be much more willing to implement it). Example questions: Your code assumes access to the production database. How could we change that so that it will still work and can be correctly tested by JUnit in a disconnected development environment? (potential answer: ah! we should use dependency injection....) What will happen if an attacker deliberately sent some cleverly constructed SQL in your online data entry form? (potential answer: ah! 
perhaps we shouldn't construct SQL statements by concatenating unverified text from the internets) EDIT : If you succeed in persuading your junior that the right thing to do is to follow your suggestion, but they are still reluctant than here is some additional advice: Explore why they are reluctant. It is possible that they need to come to a personal realisation that it doesn't matter if you throw code out, provided you get the result. Or it could be that they feel under time pressure because of some deadline. You need to know, otherwise you can't help them..... You can make the point that they can treat the change as a way to improve their refactoring skills. Once refactoring skills are good enough, you should be able to re-purpose even a fairly large and complex code base relatively quickly. You should emphasise that everything will be in source control, so they can always revert back to an old version if needed. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/130645",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/468/"
]
} |
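As a concrete sketch of the second coaching question in the answer above (hypothetical code, not taken from the answer; the table and column names are invented, and an open JDBC Connection is assumed), the point the junior is meant to discover is the difference between concatenating user input into SQL and using a parameterized query:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
public class CustomerLookup {
    // vulnerable: the user-supplied name becomes part of the SQL text itself
    static ResultSet findByNameUnsafe(Connection conn, String name) throws SQLException {
        Statement st = conn.createStatement();
        return st.executeQuery("SELECT * FROM customers WHERE name = '" + name + "'");
    }
    // safer: the driver sends the value separately from the SQL text
    static ResultSet findByNameSafe(Connection conn, String name) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
                "SELECT * FROM customers WHERE name = ?");
        ps.setString(1, name);
        return ps.executeQuery();
    }
}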
130,679 | We all have definitely used typedef s and #define s at one time or another. Today while working with them, I started pondering something. Consider the two situations below for giving the int data type another name: typedef int MYINTEGER and #define MYINTEGER int As in the situation above, we can, in many situations, accomplish a thing using #define and also do the same using typedef, although the ways in which we do it may be quite different. #define can also perform MACRO actions which a typedef cannot. Although the basic reason for using them is different, how different is their behaviour? When should one be preferred over the other when both can be used? Also, is one guaranteed to be faster than the other, and in which situations? (e.g. #define is a preprocessor directive, so everything is done way earlier than at compile time or runtime). | A typedef is generally preferred unless there's some odd reason that you specifically need a macro. Macros do textual substitution, which can do considerable violence to the semantics of the code. For example, given: #define MYINTEGER int you could legally write: short MYINTEGER x = 42; because short MYINTEGER expands to short int . On the other hand, with a typedef: typedef int MYINTEGER; the name MYINTEGER is another name for the type int , not a textual substitution for the keyword "int". Things get even worse with more complicated types. For example, given this: typedef char *char_ptr;
char_ptr a, b;
#define CHAR_PTR char*
CHAR_PTR c, d; a , b , and c are all pointers, but d is a char , because the last line expands to: char* c, d; which is equivalent to char *c;
char d; (Typedefs for pointer types are usually not a good idea, but this illustrates the point.) Another odd case: #define DWORD long
DWORD double x; /* Huh? */ | {
"source": [
"https://softwareengineering.stackexchange.com/questions/130679",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/36966/"
]
} |
130,722 | I'm a little bit confused about 'function' and 'lambda'. I've seen some examples showing that the Scheme keyword lambda works very similarly to the JavaScript keyword function , but I really don't know how they are related. I'm told that 'function' and 'method' can be used interchangeably when speaking about objects in .net. I'm wondering if 'lambda' and 'function' similarly mean the same thing. I wonder if 'lambda' has some esoteric meaning, seeing that the Greek letter lambda (λ) appears in so many avatars on this site. To make things even more confusing, in .net, the functional parts of C# refer to function expressions passed to another function as 'lambda expressions', so the word really seems to be all over the place. I'm also vaguely familiar with the term 'lambda calculus'. What is the difference between a function and a lambda? | The word "lambda" or "lambda expression" most often refers to anonymous functions. So in that sense a lambda is a kind of function, but not every function is a lambda (i.e. named functions aren't usually referred to as lambdas). Depending on the language, anonymous functions are often implemented differently from named functions (particularly in languages where anonymous functions are closures and named functions are not), so referring to them with different terms can make sense. The difference between Scheme's lambda keyword and JavaScript's function keyword is that the latter can be used to create both anonymous functions and named functions while the former only creates anonymous functions (and you'd use define to create named functions). The lambda calculus is a minimal programming language/mathematical model of computation, which uses functions as its only "data structure". In the lambda calculus the lambda symbol is used to create (anonymous) functions. This is where the usage of the term "lambda" in other languages comes from. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/130722",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/25262/"
]
} |
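To make the distinction above concrete in Java terms (an illustrative sketch; it assumes Java 8 or later, which postdates this Q&A), a lambda is simply a function without a name, while a method is a named function attached to a class:
import java.util.function.IntUnaryOperator;
public class LambdaVsMethod {
    // a named function (a method)
    static int doubleIt(int n) {
        return 2 * n;
    }
    public static void main(String[] args) {
        // an anonymous function (a "lambda") bound to a variable
        IntUnaryOperator doubler = n -> 2 * n;
        // a method reference turns the named method into the same kind of function value
        IntUnaryOperator viaMethod = LambdaVsMethod::doubleIt;
        System.out.println(doubler.applyAsInt(21));   // 42
        System.out.println(viaMethod.applyAsInt(21)); // 42
    }
}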
130,858 | Software developers don't typically use dates as version numbers, though the YYYYMMDD format (or one of its variants) looks solid enough to use. Is there anything wrong with that scheme? Or does it apply to limited 'types' of software only (like in-house productions)? | The problem with using a date is that specifications are written against sequential release numbers rather than the date a release is due. "This piece of functionality is due to be in release 1. The other piece of functionality is due to be in release 2." You can't refer to a date in specs, since the release date could get missed. If you don't have such a formal process that needs different releases identified in advance, using dates is fine; you don't need to add another number into the mix. Version numbers are unlikely to contain dates, since their context is linked to specs. Build numbers are likely to contain dates, since their context is linked to when the build took place. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/130858",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5176/"
]
} |
130,925 | So today I had a talk with my teammate about unit testing. The whole thing started when he asked me "hey, where are the tests for that class, I see only one?". The whole class was a manager (or a service if you prefer to call it like that) and almost all the methods were simply delegating stuff to a DAO so it was similar to: SomeClass getSomething(parameters) {
return myDao.findSomethingBySomething(parameters);
} A kind of boilerplate with no logic (or at least I do not consider such simple delegation as logic) but a useful boilerplate in most cases (layer separation etc.). And we had a rather lengthy discussion whether or not I should unit test it (I think that it is worth mentioning that I did fully unit test the DAO). His main arguments being that it was not TDD (obviously) and that someone might want to see the test to check what this method does (I do not know how it could be more obvious) or that in the future someone might want to change the implementation and add new (or more like "any") logic to it (in which case I guess someone should simply test that logic ). This made me think, though. Should we strive for the highest test coverage %? Or is it simply an art for art's sake then? I simply do not see any reason behind testing things like: getters and setters (unless they actually have some logic in them) "boilerplate" code Obviously a test for such a method (with mocks) would take me less than a minute but I guess that is still time wasted and a millisecond longer for every CI. Are there any rational/not "flammable" reasons to why one should test every single (or as many as he can) line of code? | I go by Kent Beck's rule of thumb: Test everything that could possibly break. Of course, that is subjective to some extent. To me, trivial getters/setters and one-liners like yours above usually aren't worth it. But then again, I spend most of my time writing unit tests for legacy code, only dreaming about a nice greenfield TDD project... On such projects, the rules are different. With legacy code, the main aim is to cover as much ground with as little effort as possible, so unit tests tend to be higher level and more complex, more like integration tests if one is pedantic about terminology. And when you are struggling to get overall code coverage up from 0%, or just managed to bump it over 25%, unit testing getters and setters is the least of your worries. OTOH in a greenfield TDD project, it may be more matter-of-fact to write tests even for such methods. Especially as you have already written the test before you get the chance of starting to wonder "is this one line worth a dedicated test?". And at least these tests are trivial to write and fast to run, so it's not a big deal either way. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/130925",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1447/"
]
} |
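For reference, a test for the delegating method from the question above is close to trivial; the sketch below assumes JUnit 4 and Mockito on the classpath, and the manager, DAO, and parameter types are hypothetical names mirroring the question's example:
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;
import static org.junit.Assert.assertSame;
import org.junit.Test;
public class SomeManagerTest {
    @Test
    public void getSomethingDelegatesToDao() {
        MyDao dao = mock(MyDao.class);               // mocked dependency
        SomeClass expected = new SomeClass();
        Parameters params = new Parameters();
        when(dao.findSomethingBySomething(params)).thenReturn(expected);
        SomeManager manager = new SomeManager(dao);
        // the whole behaviour under test is the pass-through
        assertSame(expected, manager.getSomething(params));
        verify(dao).findSomethingBySomething(params);
    }
}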
131,006 | It seems reasonable to me that if a serious bug is found in production by end-users, a failing unit test should be added to cover that bug, thus intentionally breaking the build until the bug is fixed. My rationale for this is that the build should have been failing all along , but wasn't due to inadequate automated test coverage. Several of my colleagues have disagreed saying that a failing unit test shouldn't be checked in. I agree with this viewpoint in terms of normal TDD practices, but I think that production bugs should be handled differently - after all why would you want to allow a build to succeed with known defects? Does anyone else have proven strategies for handling this scenario? I understand intentionally breaking the build could be disruptive to other team members, but that entirely depends on how you're using branches. | Our strategy is: Check in a failing test, but annotate it with @Ignore("fails because of Bug #1234") . That way, the test is there, but the build does not break. Of course you note the ignored test in the bug db, so the @Ignore is removed once the test is fixed. This also serves as an easy check for the bug fix. The point of breaking the build on failing tests is not to somehow put the team under pressure - it's to alert them to a problem. Once the problem is identified and filed in the bug DB, there's no point in having the test run for every build - you know that it will fail. Of course, the bug should still be fixed. But scheduling the fix is a business decision, and thus not really the dev's concern... To me, once a bug is filed in the bug DB, it's no longer my problem, until the customer/product owner tells me they want it fixed. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/131006",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/34614/"
]
} |
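A minimal JUnit 4 sketch of the strategy described in the answer above (the test class, production class, and bug number are invented for illustration):
import static org.junit.Assert.assertEquals;
import org.junit.Ignore;
import org.junit.Test;
public class InvoiceTotalTest {
    @Test
    @Ignore("fails because of Bug #1234 - rounding error on multi-currency invoices")
    public void totalIsRoundedToTwoDecimals() {
        // reproduces the production bug; re-enable by removing @Ignore once #1234 is fixed
        assertEquals(10.05, new Invoice("10.049 EUR").total(), 0.0001);
    }
}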
131,036 | What were the design decisions that argued in favour of void not being constructable and not being allowed as a generic type? After all it is just a special empty struct and would have avoided the total PITA of having distinct Func and Action delegates. (C++ allows explicit void returns and allows void as a template parameter) | The fundamental problem with "void" is that it does not mean the same thing as any other return type. "void" means "if this method returns then it returns no value at all." Not null; null is a value. It returns no value whatsoever. This really messes up the type system. A type system is essentially a system for making logical deductions about what operations are valid on particular values; a void returning method doesn't return a value, so the question "what operations are valid on this thing?" don't make any sense at all. There's no "thing" for there to be an operation on, valid or invalid. Moreover, this messes up the runtime something fierce. The .NET runtime is an implementation of the Virtual Execution System, which is specified as a stack machine. That is, a virtual machine where the operations are all characterized in terms of their effect on an evaluation stack. (Of course in practice the machine will be implemented on a machine with both stack and registers, but the virtual execution system assumes just a stack.) The effect of a call to a void method is fundamentally different than the effect of a call to a non-void method; a non-void method always puts something on the stack, which might need to be popped off. A void method never puts something on the stack. And therefore the compiler cannot treat void and non-void methods the same in the case where the method's returned value is ignored; if the method is void then there is no return value so there must be no pop. For all these reasons, "void" is not a type that can be instantiated; it has no values , that's its whole point. It's not convertible to object, and a void returning method can never, ever be treated polymorphically with a non-void-returning method because doing so corrupts the stack! Thus, void cannot be used as a type argument, which is a shame, as you note. It would be very convenient. With the benefit of hindsight, it would have been better for all concerned if instead of nothing whatsoever, a void-returning method automatically returned "Unit", a magical singleton reference type. You would then know that every method call puts something on the stack , you would know that every method call returns something that could be assigned to a variable of object type , and of course Unit could be used as a type argument , so there would be no need to have separate Action and Func delegate types. Sadly, that's not the world we're in. For some more thoughts in this vein see: https://docs.microsoft.com/en-us/archive/blogs/ericlippert/the-void-is-invariant https://docs.microsoft.com/en-us/archive/blogs/ericlippert/never-say-never-part-one https://docs.microsoft.com/en-us/archive/blogs/ericlippert/why-have-a-stack | {
"source": [
"https://softwareengineering.stackexchange.com/questions/131036",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
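For comparison, an illustrative aside in Java (not from the original answer): Java hits the same limitation and papers over it with the non-instantiable placeholder class java.lang.Void, whose only possible value is null, which is roughly the "Unit" idea described above, minus the elegance:
import java.util.concurrent.Callable;
public class VoidPlaceholder {
    public static void main(String[] args) throws Exception {
        // void itself cannot be a type argument, but Void can
        Callable<Void> task = new Callable<Void>() {
            @Override
            public Void call() {
                System.out.println("side effect only");
                return null;   // the only value a Void reference can hold
            }
        };
        task.call();
    }
}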
131,264 | I am considering basing some new software on a LGPL web application. I want to utilize this new software for creating one website for my employer, and we do not intend to sell or distribute the software itself to anybody. Does publishing web pages from LGPL software constitute "distributing" in the license, so I would have to publish our changes to the LGPL code as well? I understand that none of you are lawyers so IANAL is implied. I also understand that I could contact the developers of the LGPL software and ask for a different license. | There's a variant of the GPLv3 called the "Affero GPL v3". To quote gnu.org, The GNU Affero General Public License is a modified version of the
ordinary GNU GPL version 3. It has one added requirement: if you run
the program on a server and let other users communicate with it there,
your server must also allow them to download the source code
corresponding to the program that it's running. If what's running
there is your modified version of the program, the server's users must
get the source code as you modified it. It follows that "running a program on the server" is not distribution; the base GPLv3 already covered that. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/131264",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5094/"
]
} |
131,377 | Over the course of my career, I've noticed that some developers don't use debugging tools, but do spot checking on erroneous code to figure out what the problem is. While being able to quickly find errors in code without a debugger is often a good skill to have, it seems less productive to spend a lot of time looking for issues when a debugger would easily find little mistakes like typos. Is it possible to manage a complex project without a debugger? Is it advisable? What benefits are there to be had by using " psychic debugging ?" | What looks like guessing from the outside often turns out to be what I call "debugging in your mind". In a way, this is similar to grandmasters' ability to play chess without looking at a chess board. It is by far the most efficient debugging technique I know, because it does not require a debugger at all. Your brain explores multiple code paths at the same time, yielding better turnaround than you could possibly get with a debugger. I was not conscious of this technique before briefly entering the world of competitive programming , where using a debugger meant losing precious seconds. After about a year of competing, I started using this technique almost exclusively as my initial line of defense, followed by debug logging, with an actual debugger sitting in distant third place. One useful side effect of this practice was that I started adding new bugs at a slower pace, because "debugging in my mind" did not stop as I wrote new code. Of course this method has its limitations, due mostly to the limitations of one's mind in visualizing multiple paths through the code. I learned to respect these limitations of my mind, turning to a debugger for fixing bugs in more advanced algorithms. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/131377",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/45857/"
]
} |
131,397 | The classical way to program is with try ... catch . When is it appropriate to use try without catch ? In Python the following appears legal and can make sense: try:
#do work
finally:
#do something unconditional However, the code didn't catch anything. Similarly one could think in Java it would be as follows: try {
//for example try to get a database connection
}
finally {
//closeConnection(connection)
} It looks good and suddenly I don't have to worry about exception types, etc. If this is good practice, when is it good practice? Alternatively, what are the reasons why this is not good practice or not legal? (I didn't compile the source. I'm asking about it as it could be a syntax error for Java. I checked that the Python surely compiles.) A related problem I've run into is this: I continue writing the function/method, at the end of which it must return something. However, it may be in a place which should not be reached and must be a return point. So, even if I handle the exceptions above, I'm still returning NULL or an empty string at some point in the code which should not be reached, often the end of the method/function. I've always managed to restructure the code so that it doesn't have to return NULL , since that absolutely appears to look like less than good practice. | It depends on whether you can deal with the exceptions that can be raised at this point or not. If you can handle the exceptions locally you should, and it is better to handle the error as close to where it is raised as possible. If you can't handle them locally then just having a try / finally block is perfectly reasonable - assuming there's some code you need to execute regardless of whether the method succeeded or not. For example (from Neil's comment ), opening a stream and then passing that stream to an inner method to be loaded is an excellent example of when you'd need try { } finally { } , using the finally clause to ensure that the stream is closed regardless of the success or failure of the read. However, you will still need an exception handler somewhere in your code - unless you want your application to crash completely of course. It depends on the architecture of your application exactly where that handler is. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/131397",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12893/"
]
} |
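A minimal Java sketch of the stream scenario mentioned in the answer above (the file name is arbitrary): the finally block guarantees the stream is closed whether or not the read throws, while the exception itself is left to a handler further up the call stack:
import java.io.ByteArrayOutputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
public class ReadWithFinally {
    static byte[] readAll(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[4096];
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
        }
        return out.toByteArray();
    }
    public static void main(String[] args) throws IOException {
        InputStream in = new FileInputStream("data.bin");
        try {
            System.out.println(readAll(in).length + " bytes read");
        } finally {
            in.close();   // runs on success and on failure alike
        }
    }
}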
131,403 | When programming Python I sometimes do a ** to make a conversion. I understand what it does but what data structures am I manipulating? A dict and what is the other? An array ? Is there a name for the ** operator? | It's not an operator as such, so it doesn't really have a name, but it is defined as a "syntactic rule" . So it should be called: "the keyword argument unpacking syntax" If you have a list of arguments, *args , it's called "argument unpacking" , in the same manner **kwargs is called "keyword argument unpacking" . If you use it on the left hand side of an = , as in a, *middle, end = my_tuple , you'd say "tuple unpacking" . In total, there are three types of (single parameter) arguments: def f(x) # x: positional argument
def f(x, y=0) # y: keyword argument
def f(x, *xs, y=0) # y: keyword-only argument The *args argument is called the "variable positional parameter" and **kwargs is the "variable keyword parameter".
Keyword-only arguments can't be given positionally, because a variable positional parameter will take all of the arguments you pass. Most of this can be found in PEPs 0362 and 3102 , as well as in the Control Flow section of the docs. It should be noted though that the function signature object PEP is only a draft, and the terminology might just be one person's idea. But they are good terms anyway. :) So the * and ** arguments just unpack their respective data structures: args = (1, 2, 3) # usually a tuple, always an iterable[1]
f(*args) → f(1, 2, 3)
# and
kwargs = {"a": 1, "b": 2, "c": 3} # usually a dict, always a mapping[1]
f(**kwargs) -> f(a=1, b=2, c=3) [1]: Iterables are objects that implement the __iter__() method and mappings are objects that implement keys() and __getitem__() . Any object that supports this protocol will be understood by the constructors tuple() and dict() , so they can be used for unpacking arguments. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/131403",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12893/"
]
} |
131,446 | I am designing a new system and I want to know what inversion of control (IOC) is, and more importantly, when to use it. Does it have to be implemented with interfaces or can it be done with classes? | IoC (see Inversion of Control on Wikipedia) is applicable in cases where a component cannot perform a task entirely because it doesn't have some necessary information or functionality. The simplest example of an IoC pattern would be callback functions in C. For example, you can declare the function: void Iterator(void *list, Func* f) which iterates over list , applying the f function to each of its items. The Iterator function doesn't know how each item will be processed; you just provide a function as an argument, and it processes them. As the previous example shows, IoC allows you to decouple your program into separate components that don't know about each other. One of the most common versions of IoC is Dependency Injection . In Dependency Injection each component must declare a list of dependencies required to perform its task. At runtime a special component (generally) called an IoC Container performs binding between these components. It tries to provide values for published component dependencies. Here is an example in pseudo-code: class Foo
{
<Require Boo>Constructor(Boo boo){ boo.DoSomething }
} In this example, class Foo has a constructor that requires an argument of type Boo to perform some action. You could create an instance of class Foo using code similar to this: MyContainer.Create(typeof Foo) MyContainer is an IoC Container which takes care of getting an instance of Boo and passing it to the Foo constructor. In summary, IoC allows you to decouple your program into separate parts. This is good because: Components can be easily tested independently. Program complexity can be reduced. You can switch components to another implementation. However, in some cases IoC can make code harder to understand. If you want to see a good example of real-world usage of IoC, have a look at the Microsoft Composite UI Application Block and CompositeWPF. I hope my explanation helps you. Regards, aku | {
"source": [
"https://softwareengineering.stackexchange.com/questions/131446",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1292/"
]
} |
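A concrete Java version of the Foo/Boo pseudo-code above (a hypothetical sketch; the console implementation and the hand-written wiring merely stand in for whatever an IoC container would do):
interface Boo {
    void doSomething();
}
class ConsoleBoo implements Boo {
    @Override
    public void doSomething() {
        System.out.println("Boo doing something");
    }
}
class Foo {
    private final Boo boo;
    Foo(Boo boo) {           // the dependency is declared, not constructed here
        this.boo = boo;
    }
    void run() {
        boo.doSomething();
    }
}
public class Wiring {
    public static void main(String[] args) {
        // a container would normally resolve Boo; here the binding is done by hand
        Foo foo = new Foo(new ConsoleBoo());
        foo.run();
    }
}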
131,451 | I've been seeing a lot of references of Dependency Injection (DI) & Inversion Of Control (IOC), but I don't really know if there is a difference between them or not. I would like to start using one or both of them, but I'm a little confused as to how they are different. | Definitions Inversion of control is a design paradigm with the goal of reducing awareness of concrete implementations from application framework code and giving more control to the domain specific components of your application. In a traditional top down designed system, the logical flow of the application and dependency awareness flows from the top components, the ones designed first, to the ones designed last. As such, inversion of control is an almost literal reversal of the control and dependency awareness in an application. Dependency injection is a pattern used to create instances of classes that other classes rely on without knowing at compile time which implementation will be used to provide that functionality. Working Together Inversion of control can utilize dependency injection because a mechanism is needed in order to create the components providing the specific functionality. Other options exist and are used, e.g. activators, factory methods, etc., but frameworks don't need to reference those utility classes when framework classes can accept the dependency(ies) they need instead. Examples One example of these concepts at work is the plug-in framework in Reflector . The plug-ins have a great deal of control of the system even though the application didn't know anything about the plug-ins at compile time. A single method is called on each of those plug-ins, Initialize if memory serves, which passes control over to the plug-in. The framework doesn't know what they will do, it just lets them do it. Control has been taken from the main application and given to the component doing the specific work; inversion of control. The application framework allows access to its functionality through a variety of service providers. A plug-in is given references to the service providers when it is created. These dependencies allow the plug-in to add its own menu items, change how files are displayed, display its own information in the appropriate panels, etc. Since the dependencies are passed by interface, the implementations can change and the changes will not break the code as long as the contract remains intact. At the time, a factory method was used to create the plug-ins using configuration information, reflection and the Activator object (in .NET at least). Today, there are tools, MEF for one, that allow for a wider range of options when injecting dependencies including the ability for an application framework to accept a list of plugins as a dependency. Summary While these concepts can be used and provide benefits independently, together they allow for much more flexible, reusable, and testable code to be written. As such, they are important concepts in designing object oriented solutions. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/131451",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
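A hypothetical Java sketch (assuming Java 8+, and not drawn from Reflector or MEF) of the plug-in shape described in the answer above: the framework defines the contract and drives execution, while each plug-in receives the services it depends on and decides what to do with them:
import java.util.Arrays;
import java.util.List;
interface Services {
    void addMenuItem(String label, Runnable action);
}
interface Plugin {
    void initialize(Services services);   // the framework calls this; control is inverted
}
class HelloPlugin implements Plugin {
    @Override
    public void initialize(Services services) {
        services.addMenuItem("Say hello", () -> System.out.println("hello from a plug-in"));
    }
}
public class Framework {
    public static void main(String[] args) {
        Services services = (label, action) -> {
            System.out.println("menu item registered: " + label);
            action.run();
        };
        // in a real system the plug-in list would come from configuration or discovery
        List<Plugin> plugins = Arrays.asList(new HelloPlugin());
        for (Plugin p : plugins) {
            p.initialize(services);
        }
    }
}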
131,466 | Let's take the Facebook application as an example. Why did they develop an application when the users could just access to their page and do the same? For me that represents more maintenance and more cost because for each feature added to the web application that feature will have to be added to the smartphone application as well. So why would I want to develop more than once (for each patform iOS, Android, etc) when I could just have one web application? What benefits do I get? The only one that comes to my mind is GPS feature. EDIT : My question is more oriented towards business applications that are going to be used only by some members of the company, it's not about selling the application (private use). So contrary to what some answers say about that by developing as a smartphone application it will benefit from more sells because of the "smartphone stores" for me this point is not important because the application is for private use. By developing the application as a web application it means that it can be accessed through smartphone browser and also in a PC (any capable browser), but developing as a native application would limit this to only some kind of smartphone so we would be limiting the use. On the other hand developing it as a web application means that in order to access the application an Internet connection must be available. So keeping this in mind how would you convince your boss to write the application for a given smartphone platform (iOS/Android) vs developing it as a web application? | There are several advantages of creating a native app: Better control over the UI experience - the mobile web developer would either need to recreate or use frameworks that emulate native UI artifacts Access to platform APIs that might not be available to web apps - this is currently the biggest advantage for native apps Potentially lower network usage at runtime - the native app would only need to access the network for data, while the web app might need to completely load at run-time. As you've noted, developers native apps do have the disadvantage of building and maintaining apps for multiple platforms. This factor might not be a significant disadvantage if the developer is focused on only one platform. Some discussions in blogs that you might be interested in reading: Apps vs the Web by Matt Gemmell Native vs Web Apps from MobiThinking | {
"source": [
"https://softwareengineering.stackexchange.com/questions/131466",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/34624/"
]
} |
131,636 | First of all I would like to make it clear that this is not a language-X-versus-language-Y question to determine which is better. I have been using Java for a long time and I intend to keep using it. Parallel to this, I am currently learning Scala with great interest: apart from minor things that take some getting used to my impression is that I can really work very well in this language. My question is: how does software written in Scala compare to software written in Java in terms of execution speed and memory consumption?
Of course, this is a difficult question to answer in general, but I would expect that higher level constructs such as pattern matching, higher-order functions, etc, introduce some overhead. However, my current experience in Scala is limited to small examples under 50 lines of code and I haven't run any benchmarks up to now. So, I have no real data. If it turned out that Scala does have some overhead wrt Java, does it make sense to have mixed Scala / Java projects, where one codes the more complex parts in Scala and the performance-critical parts in Java? Is this a common practice? EDIT 1 I have run a small benchmark: build a list of integers, multiply each integer by two and put it in a new list, print the resulting list. I wrote a Java implementation (Java 6) and a Scala implementation (Scala 2.9). I have run both on Eclipse Indigo under Ubuntu 10.04. The results are comparable: 480 ms for Java and 493 ms for Scala (averaged over 100 iterations). Here are the snippets I have used. // Java
public static void main(String[] args)
{
long total = 0;
final int maxCount = 100;
for (int count = 0; count < maxCount; count++)
{
final long t1 = System.currentTimeMillis();
final int max = 20000;
final List<Integer> list = new ArrayList<Integer>();
for (int index = 1; index <= max; index++)
{
list.add(index);
}
final List<Integer> doub = new ArrayList<Integer>();
for (Integer value : list)
{
doub.add(value * 2);
}
for (Integer value : doub)
{
System.out.println(value);
}
final long t2 = System.currentTimeMillis();
System.out.println("Elapsed milliseconds: " + (t2 - t1));
total += t2 - t1;
}
System.out.println("Average milliseconds: " + (total / maxCount));
}
// Scala
def main(args: Array[String])
{
var total: Long = 0
val maxCount = 100
for (i <- 1 to maxCount)
{
val t1 = System.currentTimeMillis()
val list = (1 to 20000) toList
val doub = list map { n: Int => 2 * n }
doub foreach ( println )
val t2 = System.currentTimeMillis()
println("Elapsed milliseconds: " + (t2 - t1))
total = total + (t2 - t1)
}
println("Average milliseconds: " + (total / maxCount))
} So, in this case it seems that the Scala overhead (using range, map, lambda) is really minimal, which is not far from the information provided by
World Engineer. Maybe there are other Scala constructs that should be used with care because they are particularly heavy to execute? EDIT 2 Some of you pointed out that the println's in the inner loops take up most of
the execution time. I have removed them and set the size of the lists to 100000 instead of 20000. The resulting average was 88 ms for Java and 49 ms for Scala. | There's one thing that you can do concisely and efficiently in Java that you can't in Scala: enumerations. For everything else, even for constructs that are slow in Scala's library, you can get efficient versions working in Scala. So, for the most part, you don't need to add Java to your code. Even for code that uses enumeration in Java, there's often a solution in Scala that is adequate or good -- I place the exception on enumerations that have extra methods and whose int constant values are used. As for what to watch out for, here are some things. If you use the enrich my library pattern, always convert to a class. For example: // WRONG -- the implementation uses reflection when calling "isWord"
implicit def toIsWord(s: String) = new { def isWord = s matches "[A-Za-z]+" }
// RIGHT
class IsWord(s: String) { def isWord = s matches "[A-Za-z]+" }
implicit def toIsWord(s: String): IsWord = new IsWord(s) Be wary of collection methods -- because they are polymorphic for the most part, JVM does not optimize them. You need not avoid them, but pay attention to it on critical sections. Be aware that for in Scala is implemented through method calls and anonymous classes. If using a Java class, such as String , Array or AnyVal classes that correspond to Java primitives, prefer the methods provided by Java when alternatives exist. For example, use length on String and Array instead of size . Avoid careless use of implicit conversions, as you can find yourself using conversions by mistake instead of by design. Extend classes instead of traits. For example, if you are extending Function1 , extend AbstractFunction1 instead. Use -optimise and specialization to get most of Scala. Understand what is happening: javap is your friend, and so are a bunch of Scala flags that show what's going on. Scala idioms are designed to improve correctness and make the code more concise and maintainable. They are not designed for speed, so if you need to use null instead of Option in a critical path, do so! There's a reason why Scala is multi-paradigm. Remember that the true measure of performance is running code. See this question for an example of what may happen if you ignore that rule. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/131636",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/29020/"
]
} |
131,746 | I am the lead of a small team where everyone has less than a year of software development experience. I wouldn't by any means call myself a software guru, but I have learned a few things in the few years that I've been writing software. When we do code reviews I do a fair bit of teaching and correcting mistakes. I will say things like "This is overly complex and convoluted, and here's why," or "What do you think about moving this method into a separate class?" I am extra careful to communicate that if they have questions or dissenting opinions, that's ok and we need to discuss. Every time I correct someone, I ask "What do you think?" or something similar. However they rarely if ever disagree or ask why. And lately I've been noticing more blatant signs that they are blindly agreeing with my statements and not forming opinions of their own. I need a team who can learn to do things right autonomously, not just follow instructions. How does one correct a junior developer, but still encourage him to think for himself? Edit:
Here's an example of one of these obvious signs that they're not forming their own opinions: Me: I like your idea of creating an extension method, but I don't like how you passed a large complex lambda as a parameter. The lambda forces others to know too much about the method's implementation. Junior (after misunderstanding me): Yes, I totally agree. We should not use extension methods here because they force other developers to know too much about the implementation. There was a misunderstanding, and that has been dealt with. But there was not even an OUNCE of logic in his statement! He thought he was regurgitating my logic back to me, thinking it would make sense when really he had no clue why he was saying it. | Short Answer: Engage them (put the puzzle in their mind), empower them (trust their answers). It is the question that drives us! - Matrix. Generally, in my observations, that juniors have their own world - their own limited view of how they think and in some part their own enthusiasm/favorites/opinions about things. There is nothing wrong about telling them head-on that you are wrong - but best is that you make them think. Why? Are there any other ways? Are there better ways to do the same thing? One of the anecdotes I always use is - "Give me three solutions (to this problem)!" By the time they think about these solutions, they begin to realize many issues. It takes them some investment of time - but over time they tend to visualize the limitations and shortcomings of their thinking. They begin to see that more as "I didn't think of it!" which is much better than going home with the feeling that "I was wrong!" or even worse "I was told/proven wrong even when I had valid viewpoints" . In general, very young kids will tend to be more adept related to technical issues (such as which design pattern works better!) over process issues, but over time when you coach them, it works. However they rarely if ever disagree or ask why. And lately I've been
noticing more blatant signs that they are blindly agreeing with my
statements and not forming opinions of their own. This generally is an outcome that you do take their suggestions but later overrule them and they are equally unconvinced about your views; just because you are senior they are avoiding a fight! The best thing I learnt from one of my past bosses: He will ask team members to debate first (they feel fairly equal here), and hopefully after all the arguments being exhausted, he would enter the room with only one question - "What were the points of disagreement?" - The point is, people always like to participate in debates and discussion, but if their (valid) points are not taken up to action next time they feel it's not worth it to participate in discussion. Not only in software, but everywhere ultimately only the most empowered teammates will dare to reply let alone question the system. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/131746",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7935/"
]
} |
131,814 | I'm looking at diving into Haskell for my next (relatively trivial) personal project. The reasons that I'm tackling Haskell are: Get my head into a purely functional language Speed. While I'm sure this can be argued, profiling that I've seen nails Haskell close to C++ (and seems to be quite a bit faster than Erlang). Speed. The Warp web server seems to be crazy fast in comparison to virtually everything else . So, given this, what I'm looking for are the downsides or problems that come along with Haskell. The web has a tremendous amount of information about why Haskell is a Good Thing, but I haven't found many topics about its ugly side (apart from gripes about its syntax which I don't care about at all). An example of what I'm looking for could be like Python's GIL. Something that didn't rear its head until I really started looking at using concurrency in a CPython environment. | A few downsides I can think of: Due to the language's nature and its firm roots in the academic world, the community is very math-minded; if you're a pragmatic person, this can be overwhelming at times, and if you don't speak the jargon, you'll have a harder time than with many other languages. While there is an incredible wealth of libraries, documentation is often terse. Gentle entry-level tutorials are few and hard to find, so the initial learning curve is pretty steep. A few language features are unnecessarily clumsy; a prominent example is how record syntax does not introduce a naming scope, so there is no way to have the same record field name in two different types within the same module namespace. Haskell defaults to lazy evaluation, and while this is often a great thing, it can bite you in nasty ways sometimes. Using lazy evaluation naively in non-trivial situations can lead to unnecessary performance bottlenecks, and understanding what's going on under the hood isn't exactly straightforward. Lazy evaluation (especially combined with purity and an aggressively optimizing compiler) also means you can't easily reason about execution order; in fact, you don't even know whether a certain piece of code actually gets evaluated in a given situation. Consequently, debugging Haskell code requires a different mindset, if only because stepping through your code is less useful and less meaningful. Because of Haskell's purity, you can't use side effects to do things like I/O; you have to use a monad and 'abuse' lazy evaluation to achieve interactivity, and you have to drag the monadic context around anywhere you might want to do I/O. (This is actually a good feature in many ways, but it makes pragmatic coding impossible at times.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/131814",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/23901/"
]
} |
131,841 | My customer, a translations business owner, just told me that he has been reading about Ruby on Rails and told me that " there are more PHP guys around there " and " it seems the community prefers it ".
What would you, as software engineer and freelancer, say to the customer to achieve these goals: Sell Make him see that the technology is my expert decision and Rails
is as good or better than PHP (+ whatever framework) for this
particular project. UPDATE:
Thank you all for the suggestions! Tomorrow I've got another meeting with him, let's see how it goes, I will update again :) UPDATE 2:
Finally I told him to read this thread and the result has been fantastic: He gave me the project and we are going to start right now. Thank you all for the help, you have free beer in my charge if we see someday :) BTW: I learned the lesson: be as transparent as possible, because if you believe in yourself and your work, there is no question compromising enough to beat you. regards | I think you make a mistake in assuming that the choice of technology is a purely technical decision. The customer seems to be concerned about the business implications of picking a particular technology. Given that, you need to present a case that addresses his business concerns at least as heavily as your technology opinions. Employers have to recruit from a particular geographic area and certain areas have particularly active communities around particular technology stacks. If you're starting a business in the Pacific Northwest of the US, for example, there would be a strong bias towards a Microsoft stack simply because Microsoft is very influential in the area so most of the developers you'd be looking to hire would have experience with that stack. Other geographical regions have very different profiles. Talk with your customer and understand why and how he formed his opinion. Perhaps he read that the local PHP community is particularly active or that the local college teaches a lot of PHP and no Ruby. Perhaps he's got a trusted developer that he can call in for the occasional emergency that is a PHP pro and a Ruby neophyte. Of course, it's also possible that he's using poor metrics like the number of job ads or resumes that mention various keywords. Employers have to be concerned with the long-term sustainability of technology stacks. Years ago, for example, lots of companies invested a great deal of time and effort building PowerBuilder apps (and other languages of that genre). PowerBuilder often made it very easy to build line of business apps and developers at the time were often quite enamored with it. Unfortunately, the PowerBuilder community more or less collapsed leaving companies in a situation where they had a lot of existing code in a language no one really wanted to use where they had difficulty getting competent developers to maintain the existing code and expensive, time-consuming projects to migrate those apps to other technology stacks. The relative technical merits of PowerBuilder were vs. Java or C++ or C# or whatever they migrated to at that point; it was a death spiral since developers didn't want to get stuck working in a language that companies wanted to migrate away from and companies saw the lack of developers as a sign they should redouble their migration efforts to ensure they had the capacity to do the development the business needed. Relatively niche languages like Ruby absolutely have the potential to create these sorts of legacy problems for companies who can't predict whether the language is going to fizzle out in a few years when people move on to the next fad or if it has real staying power. You can certainly mitigate this by pointing out that Ruby isn't dependent on one company or organization so no one can decide it is no longer a strategic product for the company. If your customer has been burned in the past by having applications developed in languages that became business headaches, you'll need to make a case that Ruby is more like Linux and other open source technologies that flourished without a company backing them than languages that have died out over the years. 
Employers want consistency in the environment so choosing a language for one project forces a choice for many others. Even if Ruby is technically ideal for the project you're pitching, you have to explain why it's appropriate for every other application this customer is going to need developed or explain what mix of technologies you believe are appropriate (i.e. Ruby for X, something else for Y). Dealing with heterogeneous technologies, however, inevitably translates into extra cost for the business. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/131841",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/46082/"
]
} |
131,852 | I stumbled upon a blog entry discouraging the use of Strings in Java for making your code lack semantics, suggesting that you should use thin wrapper classes instead. This is the before and after examples the said entry provides to illustrate the matter: public void bookTicket(
String name,
String firstName,
String film,
int count,
String cinema);
public void bookTicket(
Name name,
FirstName firstName,
Film film,
Count count,
Cinema cinema); In my experience reading programming blogs I've come to the conclusion that 90% is nonsense, but I'm left wondering whether this is a valid point. Somehow it doesn't feel right to me, but I couldn't exactly pinpoint what is amiss with the style of programming. | Encapsulation is there to protect your program against change . Is the representation of a Name going to change? If not, then you're wasting your time and YAGNI applies. Edit: I've read the blog post and he has a fundamentally good idea. The problem is that he's scaled it way too far. Something like String orderId is really bad, because presumably "!"£$%^&*())ADAFVF is not a valid orderId . This means that String represents a lot more possible values than there are valid orderId s. However, for something like a name , then you can't possibly predict what might or might not be a valid name, and any String is a valid name . In the first instance, you are (correctly) reducing the possible inputs to only valid ones. In the second instance, you have achieved no narrowing of possible valid inputs. Edit again: Consider the case of invalid input. If you write "Gareth Gobulcoque" as your name, it's going to look silly, it's not going to be the end of the world. If you put in an invalid OrderID, chances are that it's simply not going to function. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/131852",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/41085/"
]
} |
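To make the answer above concrete, here is a minimal Java sketch of the kind of wrapper it argues for: a type that pays its way by narrowing the set of valid values. The OrderId name and its format check are invented for illustration; a Name wrapper, by contrast, could not reject anything, which is the answer's point about where wrapping stops adding value.
public final class OrderId {
    private final String value;

    public OrderId(String value) {
        // The format is a made-up example; the point is that the constructor
        // rejects strings that could never be a real order id.
        if (value == null || !value.matches("[A-Z]{2}-\\d{6}")) {
            throw new IllegalArgumentException("Invalid order id: " + value);
        }
        this.value = value;
    }

    public String value() {
        return value;
    }
}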
131,933 | I've used JavaScript and some frameworks (jQuery, Prototype, some node.js) for client-side web programming, but never on the desktop, where I do most of my scripting work in either Python or Bash. But IMHO, JavaScript would make a great scripting language if used outside of the browser. Has anyone tried this? Can JavaScript be an adequate replacement for Python/Perl/Bash for quick and dirty scripting tasks? | Yes! You definitely can do that with Node.js or Rhino. For example the coffeescript compiler is nothing but a node.js script. I will admit that it is not generally my first choice for desktop scripting but I see no reason why it would not work quite well for a number of tasks. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/131933",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/38762/"
]
} |
131,938 | Problem : It seems with almost every development effort I'm involved in, no matter how much time is spent planning prior to starting development, there is always a large number of changes required either midway or towards the end of the project. These are sometimes big changes which require a lot of re-development. I don't work for clients who pay money, this is an in-house development team on in-house development websites. So, it's not like I can charge for it or anything. And at the end of the day, we have to try to hit deadlines. Questions : What are some of the best ways you guys have found to minimize and prevent spec changes from cropping up midway or after development? | There's a famous military saying, attributed to Helmut von Moltke: "No battle plan survives contact with the enemy". In the same vein, I do not think it's possible to make a spec that will not have to be changed - not unless you can predict the future and read the minds of the stakeholders (and even then they may not have made up their minds yet, even if they claim they have).
I would suggest instead approaching it in a number of ways: Make a clear distinction between what can be changed and what cannot. Communicate it clearly to the stakeholders, and make them explicitly sign off on unchangeable things as soon as possible. Prepare for the change in advance. Use coding approaches that allow you to change the changeable parts more easily; invest in configurability, encapsulation and clear protocols that would allow parts to be changed and replaced independently. Talk to the stakeholders frequently, solicit feedback and approval. This would both keep you in sync and avoid them claiming "oh, that's not what we wanted" when it's too late. As noted in other answers, agile methodologies and frequent mini-releases would help you with that. Put into the schedule the time to accommodate the inevitable changes. Don't be afraid to say "we will need more time" early if you think you will - if the schedule you're given is unrealistic it's better to know it (and have you on the record saying that) at the start than at the end. If the changes are too extensive and threaten the deadline - push back and say something like "this change is possible, but will push the deadline by X time, make your choice". Make a formal process of requesting changes, prioritizing changes and assigning changes to versions or releases. If you can tell people "I can not do it in this release, but will be happy to put it on the schedule for the next one", it's much better than telling them "you're too late, your change can't go in, goodbye" and would make them your friend - they'd be happy for you to release in time so you could be free sooner to get to the next release which will have their change - and not your enemy.
"source": [
"https://softwareengineering.stackexchange.com/questions/131938",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/46101/"
]
} |
131,983 | I've been asked to evaluate what appears to be a substantial legacy codebase, as a precursor to taking a contract maintaining that codebase. This isn't the first time I've been in this situation. In the present instance, the code is for a reasonably high-profile and fairly high-load multiplayer gaming site, supporting at least several thousand players online at once. As many such sites are, this one is a mix of front- and back-end technologies. The site structure as seen from the inside out, is a mess. There are folders suffixed "_OLD" and "_DELETE" lying all over the place. Many of the folders appear to serve no purpose, or have very cryptic names. There could be any number of old, unused scripts lying around even in legitimate-looking folders. Not only that, but there are undoubtedly many defunct code sections even in otherwise-operational scripts (a far less pressing concern). This is a handover from the incumbent maintainers, back to the original developers/maintainers of the site. As is understandably typical in these sorts of scenarios, the incumbent wants nothing to do with the handover other than what is contractually and legally required of them to push it off to the newly-elected maintainer. So extracting information on the existing site structure out of the incumbent is simply out of the question. The only approach that comes to mind to get into the codebase is to start at the site root and slowly but surely navigate through linked scripts... and there are likely hundreds in use, and hundreds more that are not. Given that a substantial portion of the site is in Flash, this is even less straightforward since, particularly in older Flash applications, links to other scripts may be embedded in binaries (.FLAs) rather than in text files (.AS/ActionScript). So I am wondering if anyone has better suggestions as to how to approach evaluating the codebase as a whole for maintainability. It would be wonderful if there were some way to look at a graph of access frequency to files on the webserver's OS (to which I have access), as this might offer some insight into which files are most critical, even though it wouldn't be able to eliminate those files that are never used (since some files could be used just once a year). | Since what you're being asked to do is provide input for your client to write an appropriate proposal to the other client (owner-of-the-nightmare-code) for any work on that code, I'm going to go out on a limb and say that you're not going to be doing any thorough testing or refactoring or anything along those lines at this point. You probably have a very short time to get a rough estimate. My answer is based on my experience in the same situation, and so if my interpretation is incorrect, just disregard everything that follows. Use a spidering tool to get a sense of what pages are there, and
what is inbound. Even a basic linkchecker tool -- not a specific "spider for auditing purposes" tool -- will be useful in this regard.
Make a basic audit/inventory spreadsheet. This could be as simple as a list of files and their last-modified time, organized by directory. This will help you get a sense of scope, and when you get to directories like _OLD and _DELETE you can make a big note that a) your evaluation is based on stuff not in those directories b) the presence of those directories and the potential for cruft/hidden nightmares attests to deeper issues that should be accounted for in your client's bid, in some way. You don't have to spend a gazillion years enumerating the possible issues in _OLD or _DELETE; the info will feed into the eventual bid.
Given you are reviewing what sounds like an entirely web-based app, even standard log analyzer tools are going to be your friend. You will be able to add to the spreadsheet some sense of "this is in the top 10 of accessed scripts" or some such. Even if the scripts are embedded in Flash files and therefore not spiderable, there's a high probability they are accessed via POST or GET, and will show up in the server logs. If you know you have 10 highly accessed scripts, not 100 (or vice versa), this will give you a good idea of how maintenance work will likely go.
Even in a complicated site, what I outlined above is something you could do in a day or day and a half. Since the answer you're going to give to your client is something like "this is going to be a tremendous pain in the butt, and here are some reasons why you'll just be putting lipstick on a pig, so you should bid accordingly" or "any reasonable person would bid not to maintain but to start over, so you should bid accordingly" or even "this isn't that bad, but it will be a consistent stream of work over any given timeframe, so bid accordingly", the point is that they're going to be making the bid and thus you do not need to be as precise as you would be if you were being hired directly to do a full content and architecture audit. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/131983",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/24478/"
]
} |
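The "basic audit/inventory spreadsheet" step in the answer above can be automated in a few lines. This is only a sketch (written in Java to keep the examples in this collection in a single language); the CSV layout and the choice of root directory are arbitrary.
import java.io.IOException;
import java.nio.file.*;
import java.nio.file.attribute.BasicFileAttributes;

public class SiteInventory {
    public static void main(String[] args) throws IOException {
        // Walk the site root and print one CSV row per file: path, mtime, size.
        Path root = Paths.get(args.length > 0 ? args[0] : ".");
        System.out.println("path,lastModified,sizeBytes");
        Files.walkFileTree(root, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
                System.out.println(root.relativize(file) + ","
                        + attrs.lastModifiedTime() + "," + attrs.size());
                return FileVisitResult.CONTINUE;
            }
        });
    }
}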
131,995 | What are the pros/cons (if any) to using string output;
int i = 10;
output = string.Format("the int is {0}", i); versus string output;
int i = 10;
output = "the int is " + i; I have always used the latter example, but it seems as though a good majority of online tutorials use the string.format example. I don't think that there are any real differences in terms of efficiency, my initial thought is so a coder doesn't have to keep breaking the string to insert variables. | If you consider translation to be important in your project, the first syntax will really help with it. For instance you may have: static final string output_en = "{0} is {1} years old.";
static final string output_fr = "{0} a {1} ans.";
int age = 10;
string name = "Henri";
System.out.println(string.Format(output_en, name, age));
System.out.println(string.Format(output_fr, name, age)); Also note that your variables may not always be in the same place in the sentence with that syntax: static final string output_yoda = "{1} years {0} has."; | {
"source": [
"https://softwareengineering.stackexchange.com/questions/131995",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6854/"
]
} |
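The code in the answer above mixes C#-style string.Format calls with Java-style declarations. For readers who want something that compiles as-is, here is one consistent Java rendering of the same idea; the format strings and variable names are illustrative, and Java's positional %1$s / %2$d specifiers play the role of {0} / {1}.
public class FormatDemo {
    static final String OUTPUT_EN = "%1$s is %2$d years old.";
    static final String OUTPUT_FR = "%1$s a %2$d ans.";
    // The arguments can appear in a different order in another language:
    static final String OUTPUT_YODA = "%2$d years old %1$s is.";

    public static void main(String[] args) {
        int age = 10;
        String name = "Henri";
        System.out.println(String.format(OUTPUT_EN, name, age));   // Henri is 10 years old.
        System.out.println(String.format(OUTPUT_FR, name, age));   // Henri a 10 ans.
        System.out.println(String.format(OUTPUT_YODA, name, age)); // 10 years old Henri is.
    }
}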
132,014 | Background I have recently been tasked with designing a rebuild of an existing .NET web application that currently uses a third-party company to handle large file transfers (as big as 50Gb). Currently, the .NET app depends on a .JAR (Java Applet) provided by this third-party which is called up inside of an iFrame and exposes the appropriate file-system interaction for selecting entire directories for upload and so forth. I realize that so far all of this is possible using some combination of .NET networking classes (ftp) and Flash or Silverlight for client access. I have been told that the reason that the third-party plugin is so special is that it uses UDP protocol so that if an upload or download is interrupted, it can be resumed later right where it left off. I have also been told that the third-party tool suite allows the IT folks to throttle bandwith (I don't even know what that means) and do a couple of other cool things. Question Assuming that we will use the latest version of C# and and the .NET framework (4.0), is it reasonably possible to replicate this UDP-based behavior? By reasonable, I mean could it be accomplished in less than, say, 240 dev hours. Please note that the rebuilt app will ideally use all Microsoft technologies (including Silverlight for client access) and will run on Azure. | If you consider translation to be important in your project, the first syntax will really help with it. For instance you may have: static final string output_en = "{0} is {1} years old.";
static final string output_fr = "{0} a {1} ans.";
int age = 10;
string name = "Henri";
System.out.println(string.Format(output_en, name, age));
System.out.println(string.Format(output_fr, name, age)); Also note that your variables may not always be in the same place in the sentence with that syntax: static final string output_yoda = "{1} years {0} has."; | {
"source": [
"https://softwareengineering.stackexchange.com/questions/132014",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/44497/"
]
} |
132,019 | Background I am not a big fan of abstraction. I will admit that one can benefit from adaptability, portability and re-usability of interfaces etc. There is real benefit there, and I don't wish to question that, so let's ignore it. There is the other major "benefit" of abstraction, which is to hide implementation logic and details from users of this abstraction. The argument is that you don't need to know the details, and that one should concentrate on their own logic at this point. Makes sense in theory. However, whenever I've been maintaining large enterprise applications, I always need to know more details. It becomes a huge hassle digging deeper and deeper into the abstraction at every turn just to find out exactly what something does; i.e. having to do "open declaration" about 12 times before finding the stored procedure used. This 'hide the details' mentality seems to just get in the way. I'm always wishing for more transparent interfaces and less abstraction. I can read high level source code and know what it does, but I'll never know how it does it, when how it does it, is what I really need to know. What's going on here? Has every system I've ever worked on just been badly designed (from this perspective at least)? My philosophy When I develop software, I feel like I try to follow a philosophy I feel is closely related to the ArchLinux philosophy : Arch Linux retains the inherent complexities of a GNU/Linux system, while keeping them well organized and transparent. Arch Linux developers and users believe that trying to hide the complexities of a system actually results in an even more complex system, and is therefore to be avoided. And therefore, I never try to hide complexity of my software behind abstraction layers. I try to abuse abstraction, not become a slave to it. Question at heart Is there real value in hiding the details? Aren't we sacrificing transparency? Isn't this transparency valuable? | The reason for hiding the details isn't to keep the details hidden; it's to make it possible to modify the implementation without breaking dependent code. Imagine that you've got a list of objects, and each object has a Name property and other data. And a lot of times, you need to find an item in the list whose Name matches a certain string. The obvious way is to loop over each item one by one, and check to see if Name matches the string. But if you find that that's taking far too much time, (as it would if you have several thousand items in the list,) you might want to replace it with a string-object dictionary lookup. Now if all of your lookups were done by retrieving the list and looping over it, you've got a huge amount of work to do to fix this. It's even tougher if you're in a library and third-party users are using it; you can't go out and fix their code! But if you had a FindByName method to encapsulate the process of the name lookup, you can simply change the way it's implemented and all the code that calls it will continue working, and get a lot faster for free. That's the real value of abstraction and encapsulation. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/132019",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/32448/"
]
} |
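A rough Java sketch of the FindByName scenario described in the answer above. The ItemCatalog and Item names are invented; the point is that callers only depend on the findByName contract, so the linear scan can be swapped for a map lookup without touching them.
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ItemCatalog {
    public static final class Item {
        final String name;
        Item(String name) { this.name = name; }
    }

    // Kept only to show what the first implementation scanned.
    private final List<Item> items;
    // Later implementation: an index by name, built once, giving O(1) lookups.
    private final Map<String, Item> byName = new HashMap<>();

    public ItemCatalog(List<Item> items) {
        this.items = items;
        for (Item item : items) {
            byName.put(item.name, item);
        }
    }

    // The public contract never changes, only its internals do.
    public Item findByName(String name) {
        // Old version: for (Item item : items) if (item.name.equals(name)) return item;
        return byName.get(name);
    }
}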
132,067 | I would like to know what differentiates a Service class from a utility class or a helper class?
Is a class that merely wraps calls to the DAOs a service? Doesn't the use of helper classes violate SRP? | The lines can be a little blurry, but I see it this way: A Service class/interface provides a way for a client to interact with some functionality in the application. This is typically public, with some business meaning. For example, a TicketingService interface might allow you to buyTicket , sellTicket and so on. A helper class tends to be hidden from the client and is used internally to provide some boilerplate work that has no business domain meaning. For example, let's say you wanted to convert a date into a timestamp in order to save it to your particular datastore. You might have a utility class called DateConvertor with a convertDateToTimestamp method that performs this processing. Services are not simply tightly coupled to DAOs; it's a broader term/usage pattern than persistence. Helper classes do not violate SRP if coded in accordance with that principle. That is, each method should do one thing and one thing well, and the class should perform one type of utility help (e.g. date conversion) and do that well. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/132067",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/36181/"
]
} |
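To illustrate the distinction drawn in the answer above, here is a small Java sketch using the answer's own examples (TicketingService, DateConvertor). The parameter types are guesses made for the example, not part of the original answer.
import java.sql.Timestamp;
import java.util.Date;

// Service: public, business-facing contract with domain meaning.
public interface TicketingService {
    void buyTicket(String customerId, long eventId);
    void sellTicket(String customerId, long eventId);
}

// Helper: internal plumbing with no business meaning, hidden from clients.
final class DateConvertor {
    private DateConvertor() {}

    static Timestamp convertDateToTimestamp(Date date) {
        return new Timestamp(date.getTime());
    }
}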
132,121 | I have been working for a big company (8000+ employees) for almost 2 years now, and was hired just after I finished my study course. Everyone here has to deal daily with legacy code which is often very badly designed and full of hacks. At first, I kept a low profile, trying not to criticize things too much. But the situation, as it stands, has become very difficult to live with and it seems no one is willing to improve/replace the tools we use. To be more explicit we have: An obsolete source control tool (Visual SourceSafe) Plain old makefiles which only support full rebuild .def files which must be maintained manually and separately for all existing architectures monolithic headers files and projects with very few different files (but each has around 3000 lines of code, which sometimes takes care of very different tasks) no use of the "new" languages facilities (well std::string is not that new but nobody except me uses it) I decided, a few months ago to do something about it, by designing a new compilation environment. I could get incremental builds to work reliably, faster compilation times, better structured projects, automatic .def files generation. I even created a bridge from/to Git to/from Visual SourceSafe. I showed my achievements to several collegues and our boss but it was like nobody cared. They were all like "Well... people are used to do it that way now. Why would we change things ?" The changes I suggested were designed so that we could have a soft transition from the old system to the new one. Each improvement could be applied separately and safely. I even tried to get some of my coworkers involved in the changes. But so far, no success. Have you already faced a similar situation ? What can one do when "lead by example" doesn't work ? | Aim for the head : "Lead by example" should have improvement in mind, but it should be targeted on people not on technology. Maybe you have invested too much time in improving technology, but not enough time in what is going on in their heads. Think about the driving factors why there is an opposition for new things. In many cases they just fear some risk. Identify those risks and find counterarguments for them. Grab the fresh meat : It is easier to win over employees who want to change things. You notice them immediately when you see them. Avoid the rotten meat : Some will never sympathize with your ideas. Leave them aside. Grow to a critical mass : Find people who sympathize with your ideas. Win the over one by one. At some point if a critical mass is reached, more and more people will follow your example voluntarily. Management vocabulary : Managers are not interested in better designs. Their language is money and time. Make clear how much man hours are wasted for bugs. Make clear that unsatisfied customers who encounter bugs are not profitable. Demonstrate how much faster you can implement a new feature. You need to choose another vocabulary for managers. It is all about processes : Better technologies do not make better programmers and programs. If you have good running processes, even outdated technologies lead to good results. Think about were effort and time is wasted. Maybe it is not the technology, but something in the processes is going awfully wrong. In most cases it is a lack of communication. Find a new company : You already have done a lot. You can still try to improve things, but it is also up to you to decide how long you want to try it and how much energy you want to invest. 
Keep in mind: Even if you cannot achieve a lot of improvement, you will learn a lot out of your efforts. At some point you need to move on. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/132121",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/13384/"
]
} |
132,275 | I've been thrown head-first into a new job developing web applications in PHP. I'm by no means new to PHP, but I haven't developed large-scale applications before. I'm wondering how to structure my development to avoid setting myself up for problems in the future. How do I design and architect my applications in a sound way that allows them to scale over time in terms of functionality and performance. I'm thinking of things like: Separating back end from front end Directory structures I would appreciate pointers to architectural and application design patterns, frameworks and methods that allow me to approach large-scale PHP web application development in a sustainable way. | A rough diagram of the architecture of the latest large scale project I was involved in. It's only a basic outline, adapted from the actual architecture documents and presented in a way that resembles a typical n-tier approach combined with a typical MVC approach . As you can see the logic and data tiers are connected via a service layer, and more specifically a REST API , that was inspired by Recess , a lesser known PHP framework. Don't reinvent the wheel I work with three frameworks: Zend Framework The behemoth of PHP frameworks, with an impressively well written codebase and extensive list of features. On large scale applications you'll find yourself tweaking the framework more often than not, and I find ZF's codebase the most pleasant to work with. But beware, it's not an entry level framework . Kohana Kohana started out as a fork of CodeIgniter, and that was reason enough for me not to use it, initially. Nowadays it has grown into a solid and elegant framework that differentiates itself from every other by following an Hierarchical MVC approach . HMVC allows for a greater extend of modularization than MVC . For the project in the diagram I adapted Kohana's HMVC to ZF, but I've started using Kohana for smaller projects and considering it for larger as well. CodeIgniter I only use it because of a legacy project I inherited, avoid if possible. As the other answers pointed out, an ORM always comes handy. I use Doctrine extensively, and you should take a look at its brand new mappers for CouchDB and MongoDB . Scalability is a must on large scale applications and you should evaluate NoSQL solutions . All that said, the important thing to remember is that larger applications usually have unique challenges. You should evaluate every popular third party solution there is, and you will probably gain a lot from a couple of obscure ones. When I first evaluated Recess it was far from production ready but its approach essentially made it into the project. Performance On typical websites you may get away with simple output caching and opcode caching but on large scale applications you should really consider memory caching, that most commonly is build around memcached . xdebug is mostly known as a debugger, but can serve as a profiler as well. I've recently started using Zend Server and I absolutely adore its code tracing features . Unfortunately those are not available in the Community Edition , but xdebug is a
pretty decent alternative. If you are using Apache, make sure to optimize the hell out of it . nginx and lighttpd are apparently better choices , performance wise, but I haven't used them a lot and I can't really say. As for the database, Doctrine's query & result caching works wonders, especially combined with memcached . And of course, we can't forget about the front end. Yahoo's Exceptional Performance team has assembled an extensive list of best practices . I'm not really a front end developer, but I've seen amazing results on solo projects. Lastly PHP has a brand new garbage collection mechanism , worth looking into. Security The world of PHP security is chaotic, to say the least. I'm no expert, so treat the following as generic tips: Open Web Application Security Project Lot's of good stuff in there, but for a quick overview you should start with the top ten list . And research PHP solutions for those common vulnerabilities. Stack vulnerabilities A good habit is to periodically monitor PHP's open bugs . Even if you are no expert yourself, there are almost always workaround tips on security threats. And of course, you should extend the habit to every other part of the stack, especially the most vulnerable ones, like the web server and the database. The crowd over at IT Security Stack Exchange can help you with more educated answers. Further reading What should every programmer know about web development? Is it possible to effectively develop PHP applications on Windows that will be deployed on servers running Linux? Towards RESTful PHP - 5 Basic Tips Is ORM an Anti-Pattern? Writing Secure PHP | {
"source": [
"https://softwareengineering.stackexchange.com/questions/132275",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/46257/"
]
} |
132,281 | I am proficient in C, and I am learning C++ right now. I always played with websites (HTML/CSS), and I was wondering if it would be viable/practical to create some simple web apps using C and/or C++. For C it would be via CGI scripts, as explained in this tutorial for example -> http://www.cs.tut.fi/~jkorpela/forms/cgic.html For C++ it could be via a web toolkit like Wt -> http://www.webtoolkit.eu/wt Notice that I know it's possible (as explained in this previous question -> Can C++ be used as a server-side web development language? ). What I am asking is if it's viable/practical (i.e., not a nightmare). I am not keen on picking up another language yet as I just started with C++, but if it would make web dev incredibly easier I will consider it (or I'll postpone my web dev projects until I am ready to pick up a new language). Also, in case you think it's viable/practical, recommendations on the route to follow would be highly appreciated (i.e., forget C and go with C++, what framework to use and so on). | C++ as a web server language is a good idea under some constraints only . From the CPPCMS website : When CppCMS Should Be Used. C++ language is far from being popular for Web development for many
reasons: lack of appropriate tools, skills of developers and many more. However, there are areas where C++ web programming with CppCMS becomes very useful and efficient, and some where it is just a waste of time.
When CppCMS should or can be used?
High load web sites and application with hundreds and thousands hits per second, where high performance, efficiency and scalability is required.
Application that require scalable Comet/Server Push1 technologies --- CppCMS can efficiently handle hundreds and thousands simultaneous HTTP connection with minimal resources usage.
Embedding web interface2 into existing C++ applications/services with a small cost of additional library.
Embedded underpowered devices -- CppCMS allows creation of rich applications with relatively low cost of hardware that would perform reasonably fast.
When Not To Use?
If you create small web applications that do not require high loads and require very short time-to-market period -- probably tools like Django or RoR would be more appropriate for such tasks.
Also, take a look at the rational for the need of CPPCMS .
If you build your personal blog, create small or even medium community or building a web site for a small company --- CppCMS is not for you. Take any of existing and good CMS like Drupal or develop with great Django framework, you'll be fine. However, when the loads become more then average, the process of scale-up using existing web frameworks may be painful:
Low performance of dynamic or JIT languages enforces you to add more servers even on quite small loads.
The caching becomes more complicated and less efficient because the system becomes distributed and does not scale-up linearly. Creation of such system requires skilled stuff and costs even more.
CppCMS allows you to increase the performance of typical system by an order of magnitude and thus:
Remove requirement of maintaining a big server farm where few servers or even single one would solve the load problems.
Reduce maintenance costs and power consumption.
Now, if you think you'll make this kind of website, it might be of interest. Note that most devs will not need such power. If you want to do something that might really be power consuming, and in a way that is obvious, not because you pessimize, then why not use C++.
"source": [
"https://softwareengineering.stackexchange.com/questions/132281",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/34340/"
]
} |
132,309 | I have noticed that most functional languages employ a singly-linked list (a "cons" list) as their most fundamental list types. Examples include Common Lisp, Haskell and F#. This is different to mainstream languages, where the native list types are arrays. Why is that? For Common Lisp (being dynamically typed) I get the idea that the cons is general enough to also be the base of lists, trees, etc. This might be a tiny reason. For statically typed languages, though, I can't find a good reasoning, I can even find counter-arguments: Functional style encourages immutability, so the linked list's ease of insertion is less of an advantage, Functional style encourages immutability, so also data sharing; an array is easier to share "partially" than a linked list, You could do pattern matching on a regular array just as well, and even better (you could easily fold from right to left for example), On top of that you get random access for free, And (a practical advantage) if the language is statically typed, you can employ a regular memory layout and get a speed boost from the cache. So why prefer linked lists? | The most important factor is that you can prepend to an immutable singly linked list in O(1) time, which allows you to recursively build up n-element lists in O(n) time like this: // Build a list containing the numbers 1 to n:
foo(0) = []
foo(n) = cons(n, foo(n-1)) If you did this using immutable arrays, the runtime would be quadratic because each cons operation would need to copy the whole array, leading to a quadratic running time. Functional style encourages immutability, so also data sharing; an array is easier to share "partially" than a linked list I assume by "partially" sharing you mean that you can take a subarray from an array in O(1) time, whereas with linked lists you can only take the tail in O(1) time and everything else needs O(n). That is true. However taking the tail is enough in many cases. And you have to take into account that being able to cheaply create subarrays doesn't help you if you have no way of cheaply creating arrays. And (without clever compiler optimizations) there is no way to cheaply build-up an array step-by-step. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/132309",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12693/"
]
} |
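A minimal Java version of the answer's foo function, shown here to make the O(1) prepend and the structural sharing explicit. Using null for the empty list is a simplification for the sketch.
public final class ConsList<T> {
    final T head;
    final ConsList<T> tail;   // null represents the empty list in this sketch

    private ConsList(T head, ConsList<T> tail) {
        this.head = head;
        this.tail = tail;
    }

    static <T> ConsList<T> cons(T head, ConsList<T> tail) {
        return new ConsList<>(head, tail);   // O(1): the tail is shared, not copied
    }

    // foo(n) from the answer: build the list of the numbers 1 to n by prepending.
    static ConsList<Integer> foo(int n) {
        return n == 0 ? null : cons(n, foo(n - 1));
    }
}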
132,314 | I'm developing a web application with a strong focus on security. What measures can be taken to prevent those who work on the application (programmers, DBAs, quality assurance staff) from capturing user entered values that should be well-protected, such as passwords, social security numbers, and so forth? | This is quite simple. Banks do it all the time. You have three groups of people involved. These are security groups. With distinct authorizations. Developers cannot assign security authorizations and cannot see production data. Operators cannot assign security authorizations and cannot create software. Security folks who set the authorizations and can neither create software nor operate the software. The developers create software. The operators install it and operate it. The security folks assure that the two groups are kept separated. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/132314",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/14294/"
]
} |
132,331 | Help! I have a question where I need to analyze the Big-O of an algorithm or some code. I am unsure exactly what Big-O is or how it relates to Big-Theta or other means of analyzing an algorithm's complexity. I am unsure whether Big-O refers to the time to run the code, or the amount of memory it takes (space/time tradeoffs). I have Computer Science homework where I need to take some loops, perhaps a recursive algorithm, and come up with the Big-O for it. I am working on a program where I have a choice between two data structures or algorithms with a known Big-O, and am unsure which one to choose. How do I understand how to calculate and apply Big-O to my program, homework, or general knowledge of Computer Science? Note: this question is a canonical dupe target for other Big-O
questions as determined by the community. It is intentionally broad to be able to contain a large amount of useful information for many Big-O questions. Please do not use the fact that it is this broad as an indication that similar questions are acceptable. | The O(...) refers to Big-O notation, which is a simple way of describing how many operations an algorithm takes to do something. This is known as time complexity. In Big-O notation, the cost of an algorithm is represented by its most costly operation at large numbers. If an algorithm took n³ + n² + n steps, it would be represented O(n³). An algorithm that counted each item in a list would operate in O(n) time, called linear time. For a list of the names and classic examples on Wikipedia: Orders of common functions Related material: Plain English explanation of Big O (SO) Understanding of big-O massively improved when I began thinking of orders as sets. How to apply the same approach to big-Theta? Is there a system behind the magic of algorithm analysis? (CS.SE) A beginner's guide to Big O notation Algorithms: Design and Analysis
"source": [
"https://softwareengineering.stackexchange.com/questions/132331",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/35459/"
]
} |
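Two small Java methods that make the notation in the answer above concrete: counting items touches each element once and is O(n), while the nested loop does n units of work for each of n elements and is O(n²); for large n the n² term dominates whatever smaller terms are added to it.
public class BigODemo {
    static int countItems(int[] items) {          // O(n): one pass over the data
        int count = 0;
        for (int i = 0; i < items.length; i++) {
            count++;
        }
        return count;
    }

    static int countPairs(int[] items) {          // O(n^2): a pass per element
        int pairs = 0;
        for (int i = 0; i < items.length; i++) {
            for (int j = 0; j < items.length; j++) {
                pairs++;
            }
        }
        return pairs;
    }
}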
132,369 | Assume an interface containing these methods : Car find(long id);
List<Car> find(String model); Is it better to rename them like this? Car findById(long id);
List<Car> findByModel(String model); Indeed, any developer who uses this API won't need to look at the interface to know the possible arguments of the initial find() methods. So my question is more general:
What is the benefit of using overloaded methods in code, since it reduces readability? | This is a relatively minor issue compared to many other bad readability practices you could be susceptible to, so I'd say it's mostly a matter of taste how you name your methods. With that said, if you are going to do something about it, I would follow this practice: Overload if... The methods obey nearly the same contract but simply operate on different input (imagine a phone operator who can look up your account by your personal tax ID number, your account number, or your name and birthday). This includes returning the same type of output . Use a different name if... The methods do substantially different things or return different output (like your case). You might consider using a different name if one accesses the database and one does not. In addition, if the type returned is different, I would also change the verb to indicate that: Car findById(long id);
List<Car> findAllByModel(String model); | {
"source": [
"https://softwareengineering.stackexchange.com/questions/132369",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/43961/"
]
} |
132,373 | I recently joined a company and it is my first job. When reading the code base, I felt that the code was not well written. It seemed to me that the code had most of the problems mentioned here and also seemed to have an Anemic Domain Model . There are no unit tests and they don't employ any code quality checking tools like findbugs or pmd. The problem I have is that the code is very difficult to understand. Maybe my conclusions are wrong because I am not that experienced. I need advice on whether to communicate the above facts to a superior or not. If I am to communicate, to whom (Tech Lead, Architect, Product Manager) and how? And if I do communicate, will they take it badly since I'm a junior and have no experience? | Ask any senior developer on your team. Don't tell them you think it's wrong. Instead, tell them you don't understand the code and are curious why it is developed the way that it is. Ask them to explain the code to you.
"source": [
"https://softwareengineering.stackexchange.com/questions/132373",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/46318/"
]
} |
132,385 | What is the minimal set of language features/structures that make it Turing-complete? | A Turing tarpit is a kind of esoteric programming language which strives to be Turing-complete while using as few elements as possible. Brainfuck is perhaps the best-known tarpit, but there are many. Iota and Jot are functional languages with two and three symbols, respectively, based on the SK(I) combinator calculus . OISC ( One Instruction Set Computer ) denotes a type of imperative computation that requires only one instruction of one or more arguments, usually “subtract and branch if less than or equal to zero”, or “reverse subtract and skip if borrow”. The x86 MMU implements the former instruction and is thus Turing-complete. In general, for an imperative language to be Turing-complete, it needs: A form of conditional repetition or conditional jump (e.g., while , if + goto ) A way to read and write some form of storage (e.g., variables, tape) For a lambda-calculus –based functional language to be TC, it needs: The ability to abstract functions over arguments (e.g., lambda abstraction, quotation) The ability to apply functions to arguments (e.g., reduction) There are of course other ways of looking at computation, but these are common models for Turing tarpits. Note that real computers are not universal Turing machines because they do not have unbounded storage. Strictly speaking, they are “bounded storage machines”. If you were to keep adding memory to them, they would asymptotically approach Turing machines in power. However, even bounded storage machines and finite state machines are useful for computation; they are simply not universal . Strictly speaking, I/O is not required for Turing-completeness; TC only asserts that a language can compute the function you want, not that it can show you the result. In practice, every useful language has a way of interacting with the world somehow. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/132385",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/46325/"
]
} |
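The imperative checklist in the answer above (a conditional jump plus readable/writable storage) can be made concrete with a tiny interpreter for Brainfuck, one of the tarpits it mentions: the byte tape is the storage and the bracket commands are the conditional jumps. This is an illustrative sketch only; it omits the ',' input command and does no validation of unmatched brackets.
public class TinyBrainfuck {
    public static String run(String program) {
        byte[] tape = new byte[30000];          // the writable storage
        int ptr = 0;
        StringBuilder out = new StringBuilder();
        for (int pc = 0; pc < program.length(); pc++) {
            switch (program.charAt(pc)) {
                case '>': ptr++; break;
                case '<': ptr--; break;
                case '+': tape[ptr]++; break;
                case '-': tape[ptr]--; break;
                case '.': out.append((char) (tape[ptr] & 0xFF)); break;
                case '[':                        // conditional jump forward
                    if (tape[ptr] == 0) {
                        for (int depth = 1; depth > 0; ) {
                            pc++;
                            if (program.charAt(pc) == '[') depth++;
                            if (program.charAt(pc) == ']') depth--;
                        }
                    }
                    break;
                case ']':                        // conditional jump backward
                    if (tape[ptr] != 0) {
                        for (int depth = 1; depth > 0; ) {
                            pc--;
                            if (program.charAt(pc) == ']') depth++;
                            if (program.charAt(pc) == '[') depth--;
                        }
                    }
                    break;
                default: break;                  // anything else is a comment
            }
        }
        return out.toString();
    }
}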
132,491 | I'm working as an independent software developer for mobile applications. A customer asks me to develop a mobile app. So at the moment I'm calculating the time and effort to write an offer for this project. The app itself will only be used for a certain time as it is related to a certain event; after that it will be useless. But the base functionality of the app will be reusable for other customers who want to have a similar app for their event. At the moment I'm wondering if it is OK to develop the app for the customer, let him pay for the development, and reuse part of the source code for another customer's app. So what would be the best way for me to deal with this scenario? To whom does the source code of the app belong? Do I have to give the source code to the customer as they paid for the development? If I have to, can I still keep a copy of it and reuse it later? Do I have to ask the customer for permission to reuse the code? Do I have to work with some kind of licensing model here, and let the first customer only pay a certain part of the development so I can reuse the code without any concerns? I hope I made my situation clear. I'm looking forward to your answers. | You should decide before you start the project who will maintain ownership of the code. If they happily allow you to keep ownership then you're fine to use it in other projects. If they wish to take ownership afterwards, then it's a negotiating point.
"source": [
"https://softwareengineering.stackexchange.com/questions/132491",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3809/"
]
} |
132,505 | At what point should I begin upgrading our developer's machines to a later Windows release? From experience, Microsoft typically has an epic fail about every other O.S. (I am really not trying to start a debate about this, it is my perception let it be) i.e. 2000 was a stable and useful O.S, yet m.e. didn't do well, xp was a great O.S. that many businesses still use, Vista didn't do well (I know that the reason this one didn't do well wasn't really Microsoft's fault and that Vista and 7 are the same major revision), while 7 seems to be an excellent O.S. that will be around for a while--sort of like xp is now. Anyways, this makes me reluctant to upgrade our development machines to Windows 8. a. I don't want to cost our business a ton of money for an O.S. that
will only be used for a year or two. b. They also have to have linux dual boots, and I have read that Windows 8 and grub will not play well together. I do want our developers to develop in the latest environment and to have a leading edge in any technology they are developing with. I do want to stay ahead of--or at least with--the technology curve, yet I want it to make business sense. So in particular, should I upgrade their machines at this point?
In general, what calculus should I use for deciding this sort of thing? edit They do write desktop UI applications as well as ASP.NET applications. Also, I do make sure that they always have the latest release of Visual Studio. | Computers are not physical monolithic entities anymore; use virtual machines! Your developers should be able to access different work environments as they need them, and virtual machines are the perfect way to do so. You can: keep a legacy environment easily accessible. have multiple, independent environments (ex: 1 environment per client) have test environments (ex: Windows 8 dev preview). Any decent laptop nowadays can run a Windows 7 VM on top of a Windows 7 host environment. It's really nice to be able to switch environments as a developer. The backup/versioning possibilities are also a nice plus. If you have MSDN subscriptions, you should be able to keep the price of this kind of setup not too high, considering they are used for development.
"source": [
"https://softwareengineering.stackexchange.com/questions/132505",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/30026/"
]
} |
132,563 | I'm reading "Dependency Injection in .NET" by Mark Seemann (it's fantastic, and must have) and the author often uses the word "seam". But I can't understand what it means.
Here is an example of using this word: Chapter 7 explains how to compose objects in various concrete frameworks
such as ASP.NET MVC, WPF, WCF, and so on. Not all frameworks support DI equally well, and even among those that do, the ways they do it differ a lot. For each framework, it can be difficult to identify the SEAM that enables DI in that framework. However, once that SEAM is found, you have a solution for all applications that use this particular framework. In chapter 7, I have done this work for the most common .NET application frameworks. Think of it as a catalog of framework SEAMS. I would be grateful if you could help me understand this word. | I think the term originates from Michael Feathers' Working Effectively with Legacy Code , in which he explains a seam in software as a place where two parts of the software meet and where something else can be injected. The analogy is a seam in clothing: the place where two parts are stitched together. The piece on each side only touches the other right at the seam. Back to software: if you identify the seam, you have identified the place where there is a well defined interface. That is what you can leverage in DI, since such an interface allows you to replace the implementation without the rest of the software being able to tell (without cheating, anyway).
"source": [
"https://softwareengineering.stackexchange.com/questions/132563",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/46401/"
]
} |
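A rough Java illustration of the answer above: the PaymentGateway interface is the seam, the well-defined place where a different implementation can be injected without OrderProcessor being able to tell. All names here are invented for the example.
public class OrderProcessor {
    public interface PaymentGateway {        // the seam
        boolean charge(String account, long amountCents);
    }

    private final PaymentGateway gateway;

    public OrderProcessor(PaymentGateway gateway) {   // constructor injection
        this.gateway = gateway;
    }

    public boolean checkout(String account, long amountCents) {
        return gateway.charge(account, amountCents);
    }
}
// In production: new OrderProcessor(new StripeGateway());   (StripeGateway is hypothetical)
// In a test:      new OrderProcessor((account, amount) -> true);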
132,622 | My team and I are rebuilding a site we developed around ten years ago, and we want to do it in Agile. So after I spent a lot of time reading (probably not enough) I am having trouble with the question how to divide work between developers. I'll be more specific and say that the site is divided to separate modules that doesn't have much integration between one another. What is the best / Most accepted way to divide the work between the developers? Giving each person a different module to work on. Assign all developers to the same module, and split the work by different parts of the module (UnitTesting, DAL and Mapping, Logics, UI) Assign all developers to the same module, and split the work by different logics (For example each developer is in-charge of a specific logic(probably a method in the BL) and It's UnitTesting, DAL and Mapping and UI... Or maybe something entirely different? | My team has been trying to go "agile" for few releases now, but being part of a large corporation hasn't exactly made it easy. I won't pretend like I have the answer, but I can share some of my observations. Dividing developers per module: You need to be careful because if people work too much in isolation, your team doesn't benefit from cross-sharing of skills and knowledge Planning meetings and daily stand ups can become incredibly dull to everyone if people focus too much on their own modules and don't see bigger picture. Once people get bored, they start checking out and you lose much of the benefit agile brings to the table You might end up with some components written really well, and other components, well... not so much. If people work in isolation, your senior guys won't be able to train the junior ones. Everyone works on the same module at the same time We tried that for one release, when management decided they will impose agile on the whole team and it will be completely their way. It as an absolute train wreck. We had a team of 9 developers deliver in a year what typically would've been done by 1 developer. (I might be exaggerating here but not by much). No one felt like there was any breathing room. Those that didn't care about software, felt right at home because being part of a larger pack, they just get diluted in the group. Those of us who had passion for software, felt absolutely stifled as there was no freedom to move or go outside the bounds what 9 people have agreed on. All meetings went forever to a point of me wanting to shoot myself. Too many people with an opinion in the same room forced to work on the same freakin' DLL. The horror. In the last release, we've decided to try something different First and foremost, break development group into smaller teams of 3-4 developers. Each team worked in relative isolation from each other but within the team people worked much more cohesively With this approach, stand ups are fast and planning meetings take 1-2 hours compared to solid 4 hours. Everyone feels engaged because each team only discusses what the developers on that team care about. Tech lead from each team talks to other tech leads periodically to make sure overall project is on track. Instead of making people "owner" of a specific module, we assigned areas of expertise to people, so when we first started the project it felt like people have their own module, but after several months, developers would start looking at each others code as areas started overlapping. Code reviews are essential. 
This was the second release where we had strict code review policy and everyone on the team loves them. Expert of a specific area is always on a code review when someone else modifies that code. With code reviews we have a ton of knowledge sharing and you can visible see the overall improvement of our teams' code quality. Also because code gets reviewed so often, when people go into someone else's area of expertise, chances are they've already seen the code at least few times already. Larger portion of each team is sucked into design review meetings, so even if they've never seen the code, everyone is familiar with general flow of all modules that their team is responsible for. We've done this for about 10 months and it kind of feels like we started with isolated module approach and morphed into everyone works on everything. But at the same time, no one feels like they are cramped or limited. And to make sure the guys still have sense of some authority, we left them as area experts, even though that's mostly a formality now. We've been doing that last thing, and although there's a ton of room for improvements, overall our entire team has been very happy and that says a lot, when we are part of giant corporation. One important thing that we got wrong the first 3 times we "went agile", is each one of those times people were told how to work and they were told what to work on. That's number one way to have your team completely lose interest in the project and then you are in real trouble. Instead, try the opposite. Tell the team, they can do whatever they want and as a manager/leader (if you are one, if not make your manager repeat these words), your job is to make sure they are as productive and happy as possible. Process is not a bad thing, but process should be there to help your team when it realizes it needs one, not the other way around. If some of your team members prefer to work in isolation, let them (to a degree). If they prefer to work in pairs, let them do that. Make sure to let your people pick their own work as much as you can. Lastly, and this is very important and is always overlooked. YOU WILL NOT GET THIS RIGHT (unless you are superman, or at least batman). Having regular retrospective meetings is extremely important. When we rolled out retrospectives, they were done by the book and it felt like yet another process you had to sit through. That's not what retrospective is for. It is for listening to your team, identifying areas that cause the most pain and fixing them so that everyone can move on with their work. Apparently software engineers in general like delivering products and features and the most important message retrospective meeting needs to communicate, is that it is solely for their benefit. You want to identify and tackle obstacles, starting with the biggest ones (or easiest ones, there's some kind of 2D map involved) and get them out of the way so your people get their work done. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/132622",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/46446/"
]
} |
132,691 | Is there an alternative to bits as the smallest unit of data? Something that won't be only 0 or 1, but actually hold many possible states in between? Wouldn't it be more natural to store floats like that? | Of course it is possible, both theoretically and practically. Theoretically, there are two classes of alternatives: digital number systems with a base other than 2 (in fact, the decimal system as we know it is one such system); and non-digital number systems. Mathematically speaking, we're talking about discrete vs. continuous domains. In practice, both options have been explored. Some of the early digital computers (e.g. ENIAC) employed decimal encodings rather than the now ubiquitous binary encoding; other bases, e.g. ternary, should be just as feasible (or infeasible). The esoteric programming language Malbolge is based on a theoretical ternary computer; while mostly satirical, there is no technical reason why it shouldn't work. Continuous-domain storage and processing was historically done on analog computers, where you could encode quantities as frequencies and / or amplitudes of oscillating signals, and you would then perform computations by applying all sorts of modulations to these signals. Today, quantum computing makes the theory behind continuous storage cells interesting again. Either way, the bit as a theoretical smallest unit of information still stands, as any alternative can encode more information than a single yes/no, and nobody has yet come up with a smaller theoretical unit (and I don't expect it to happen anytime soon). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/132691",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/45639/"
]
} |
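A trivial Java illustration of the "other bases" part of the answer above: the same value written with two, three, and ten states per digit. The value 42 is arbitrary.
public class Radix {
    public static void main(String[] args) {
        int n = 42;
        System.out.println(Integer.toString(n, 2));   // 101010 - binary
        System.out.println(Integer.toString(n, 3));   // 1120   - ternary
        System.out.println(Integer.toString(n, 10));  // 42     - decimal
    }
}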
132,735 | So I have a relatively simple system. A mobile client creates records in a sqlite database that I would like to have synced to a remote SQL server (that is shared with other mobile clients) . So when I create a new record in the phone's sqlite table, I then push that change to my remote service through a RESTful API. The problem I'm having, is how do I order the primary keys so that there isn't collisions in the data (i.e. a record in the phone has the same primary key as a completely different record on the server). What is the usual "best practice for referencing the record on the client, and for referencing the same record on the server? | The normal practice is to structure your database with uniqueidentifier keys (sometimes called UUIDs or GUIDs). You can create them in two places without realistic fear of collision. Next, your Mobile app needs to sync "fact" tables from the server before you can create new rows. When you create new rows, you do it locally, and when you sync again, new rows are added to the server. You can do the same thing with updates and deletes too. To track inserts you need a Created timestamp on your rows. To track updates you need to track a LastUpdate timestamp on your rows. To track deletes you need a tombstone table. Note that when you do a sync, you need to check the time offset between the server and the mobile device, and you need to have a method for resolving conflicts. Inserts are no big deal (they shouldn't conflict), but updates could conflict, and a delete could conflict with an update. There are frameworks to handle this kind of thing, such as Microsoft Sync Framework . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/132735",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/23136/"
]
} |
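To make the answer to 132,735 above concrete, here is a minimal C# sketch of the client-side shape it describes: client-generated GUID keys plus Created/LastUpdate timestamps and a tombstone record. All class and member names (Note, Tombstone, SyncPolicy) are invented for illustration, and the actual REST transport is left out.
using System;

// A row that can be created on either the phone or the server without key collisions.
public class Note
{
    public Guid Id { get; set; } = Guid.NewGuid();                 // uniqueidentifier / GUID primary key
    public string Text { get; set; }
    public DateTime CreatedUtc { get; set; } = DateTime.UtcNow;    // tracks inserts
    public DateTime LastUpdateUtc { get; set; } = DateTime.UtcNow; // tracks updates
}

// Written instead of physically deleting a row, so deletes can be synced as well.
public class Tombstone
{
    public Guid Id { get; set; }
    public DateTime DeletedUtc { get; set; } = DateTime.UtcNow;
}

public static class SyncPolicy
{
    // One possible conflict policy during a sync: last writer wins.
    // Real code must also account for the clock offset between device and server.
    public static Note Resolve(Note local, Note server)
    {
        return local.LastUpdateUtc >= server.LastUpdateUtc ? local : server;
    }
}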
132,747 | Is this: public class MyClass
{
public const string SomeString = "SomeValue";
} worse than this: public class MyClass
{
public static string SomeString { get{ return "SomeValue";}}
} Both can be referenced the same way: if (someString == MyClass.SomeString)
... The second, however, has the protection of being a property. But really, how much better is this than a const? I have learned over and over the perils of having public fields. So when I saw some code using these constants on public fields, I immediately set about refactoring them to properties. But halfway through, I got to wondering what benefit there was to having static properties over constants. Any ideas? | In C# it's very bad for none of the reasons mentioned in this thread. Public constants in C# get baked into referencing assemblies. Meaning, if you have a SomeOtherClass in a separate assembly referencing SomeString in MyClass, the CIL generated for SomeOtherClass will contain a hardcoded "SomeValue" string. If you go to redeploy the dll that contains MyClass but not SomeOtherClass and change the const, SomeOtherClass won't contain what you think it will--it will contain the original value. If you're 100% positive it's a universal constant like Pi, go crazy; otherwise tread carefully (a sketch of the static readonly alternative follows this entry). Here's a better explanation: https://stackoverflow.com/questions/55984/what-is-the-difference-between-const-and-readonly | {
"source": [
"https://softwareengineering.stackexchange.com/questions/132747",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/71/"
]
} |
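A sketch of the usual workaround implied by the answer to 132,747 above and its linked question (this code is not from the original): a public static readonly field is resolved at run time against the defining assembly, so redeploying that assembly updates every caller, whereas a const is compiled into each caller.
public class MyClass
{
    // Baked into the CIL of every referencing assembly at compile time.
    public const string BakedIn = "SomeValue";

    // Looked up at run time in this assembly, so redeploying this dll
    // is enough for all callers to see the new value.
    public static readonly string LookedUp = "SomeValue";
}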
132,798 | I am new to working with Windows Services. Although I have learnt to create Windows Services in VS2010, I would like to know some practical ways in which Windows Services can be used. I tried Googling with the current context in mind, only to find more tutorials on how to create Windows Services. EDIT on Bounty Offer: All the answers are great, but I was looking for more practical examples of Windows Services and their implications. A case study would help developers know when it is appropriate to use them. | A service runs in the background, even if no-one is signed on to the machine. Anything you can imagine wanting to do without relying on a person to start an app and click a button is a good candidate for a service. For example, monitoring a folder and processing each file that is written to it in some way (a skeleton of such a service follows this entry). Any "server" you can think of - web server, FTP server, mail server - is a service, and so are many background processes you may not often think of. Some things that were once written as services (back up files at 2am, send reminder emails at 3am, etc.) are probably better done today as scheduled tasks, which have tremendous flexibility on Windows 7 and up, but if the developer never learned them, or the system must support XP, you will also find services doing those sorts of tasks. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/132798",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/40327/"
]
} |
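As a rough illustration of the folder-monitoring example in the answer to 132,798 above, here is a skeleton of a .NET service that watches a folder. The class name and folder path are made up, and a real service would also need an installer, logging and error handling.
using System.IO;
using System.ServiceProcess;

public class FolderWatcherService : ServiceBase
{
    private FileSystemWatcher watcher;

    protected override void OnStart(string[] args)
    {
        // Watch a drop folder and process each file as it appears,
        // with no user signed in and no button to click.
        watcher = new FileSystemWatcher(@"C:\DropFolder");
        watcher.Created += (sender, e) => ProcessFile(e.FullPath);
        watcher.EnableRaisingEvents = true;
    }

    protected override void OnStop()
    {
        watcher.Dispose();
    }

    private void ProcessFile(string path)
    {
        // Whatever "processing" means for the application goes here.
    }

    public static void Main()
    {
        Run(new FolderWatcherService());
    }
}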
132,810 | Following the formal user story style: As <user>, I want <goal> so that <benefit>. Our team has found difficulties in expressing things where there is a desire by the system's owners to do something which negatively affects the user. As an arbitrary example, let's say the owner wants to have the system charge customers every time they check their email. Following the formal style of user stories, you might write this as follows: As a customer, I want to be charged every time I check my email so that the system owner can increase their revenue. Obviously the customer has no desire to be charged; the story becomes jarring to read and the language is getting in the way of the facts. How could the requirement be written differently? | If paying money affected customers negatively, they wouldn't be using that service. Don't worry about this. Also, users don't (usually) pay money because they want to help out system owners, but because they want some service in exchange, so your example should really be like this: As a customer, I want to be charged every time I check my email so that I can get service X in exchange. Also, user stories are written from the perspective of all user roles, not just end customers. Consider writing this one from the perspective of the system owner as another user role: As a system owner, I want customers to be charged every time they check their email so that I increase my revenue. Some general advice: focus on the positive part of the user story and don't overthink it. Stories should be simple. If the user story is very negative, with no way to avoid it, then the problem is with the conception of the system, and in that case it doesn't really matter what you write on your cards. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/132810",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8616/"
]
} |
132,852 | I can understand schedule pressure. You want to please your users, as they are the lifeblood of the company. However, it is also true that certain changes will make everything easier down the road. Unfortunately, management in my organization has an instinctive resistance to such changes and this resistance is so strong that it gets in the way of long-term improvements. For example, Apple recently introduced Automatic Reference Counting for iOS programs. This is a major improvement over the manual retain/release calls one previously had to use. The code is easier to write and easier to maintain. The changeover itself is likely to produce some crashes. But once those are worked out, the number of random weird crashes is likely to go down. I recently mentioned to my boss that I wanted to switch to automatic reference counting. His response was that he wanted to concentrate on visible improvements. It is likely that this response was in turn driven by pressure he is getting from above him - and probably right from the CEO. There are a lot of similar examples. The common thread is that something needs to be fixed but the short-term costs of the fix outweigh the short-term benefits, where "short term" is defined as "within the next few weeks." How should I handle the situation? EDIT: Thanks for the responses. Keep 'em coming. Because it is relevant to my situation, I should make it clear that my manager and the CEO are both programmers -- though the CEO may by now have forgotten what this is like. Apparently their programmer sides have been overwhelmed by other pressures. | You are really talking about technical debt . Maybe a metaphor would help your managers. I often compare the effect of technical debt in software to cooking in a dirty kitchen. If the sink and counters and stove are piled with dirty dishes and there is trash on the floor, it takes longer to make a meal. However, the fastest way to prepare the very next meal is to work around the mess. Cleaning the kitchen, and keeping it clean, will delay the next meal, but will improve the delivery of all subsequent meals. And just as the hungry person in the dining room can't see the messy kitchen, and won't understand why you want to clean up before starting to cook, your management can't see the mess in the code. You need to either show them the mess, or show the quality problems and delays that are caused by the mess. Perhaps you could also talk about urgent tasks and important tasks. When important tasks aren't done, then urgent tasks take longer and cost more. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/132852",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/46547/"
]
} |
132,945 | I am looking at buying RubyMine as I am doing a small amount of Ruby, but a large amount of HTML5/JavaScript. I was going to get WebStorm as I have a lot of pure HTML5/JS based frameworks/apps that I am working on; however, I then read that WebStorm/PhpStorm/RubyMine etc. are all based on the IDEA framework, and the implication was that each product contained the functionality of WebStorm anyway, other than a few features which were not there out of the box but could be added through plugins. The main features that interest me about WebStorm are:
JS unit testing from the IDE
JS Lint/Hint coverage within the UI
DOM/JS refactoring/IntelliSense
CoffeeScript support
SVN/Git integration
FTP and remote sync (although not as important as the rest)
So given the above, would RubyMine provide that functionality too? I would rather have one IDE in which I can do both than two IDEs with a lot of overlapping functionality. Is there any specific functionality which is ONLY within WebStorm but not in any of the other IDEs? | RubyMine has all the features of WebStorm. Note that because of the different release cycles some features may appear first in one IDE, but they will also be available in all the other IDEs with the next update. There is no functionality specific to WebStorm that is not available in the other IDEs. See http://devnet.jetbrains.com/message/5466924?tstart=0 | {
"source": [
"https://softwareengineering.stackexchange.com/questions/132945",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/41512/"
]
} |
132,952 | Does anyone have any links to studies that show how noise affects the productivity of programmers? Specifically I would like to see how/if productivity rises when noise levels decrease. As pointed out in the comments, the nature of the programming workflow is such that you go in and out of focus all the time -- so it's likely to be affected by noise differently than other lines of work. The reason I think this is programmer-specific is that I am also interested in mathematics. In a noisy place, if I start thinking about maths, the noise goes away and I find myself lost in a world of pictures. In fact my favourite place to do maths was always The Copper Kettle cafe, a busy tourist place. For programming it's completely different. While programming I'm usually thinking verbally, and any talking whatsoever destroys my train of thought. I'm literally incapable of programming anywhere there is audible conversation. I've talked to other programmers who don't even notice noise that disables me, and they say that they think mainly in pictures. Which is why I'm wondering whether there are any actual academic studies into whether programming is particularly noise-affected compared to, say, maths or lawyering. | The book Peopleware has several chapters that cover the subject. You can read a decent summary here. Studies led by Tom DeMarco & Timothy Lister showed statistically significant results about the correlation between noise and defects. Here is an interesting part of the summary: Workplace Quality and Product Quality - Companies that provide small and noisy workplaces explain away complaints as workers campaigning for the added status of bigger, more private space. To determine whether noise level had any correlation to work, we divided our sample into those who found the workplace acceptably quiet and those who didn't. Then, looking at workers within each group who completed the entire exercise without a single defect: workers who reported that their workplace was acceptably quiet before the exercise were 1/3 more likely to deliver zero-defect work. As the noise level gets worse, this trend gets stronger:
Zero-defect workers: 66% reported the noise level was OK
1-or-more-defect workers: 8% reported the noise level was OK
A Discovery of Nobel Prize Significance - On February 3, 1984, in a study of 32,346 companies worldwide, the authors confirmed a virtually perfect inverse relationship between people density and dedicated floor space per person. If you're having trouble seeing why this matters, you're not thinking about noise. Noise is directly proportional to density, so halving the allotment of space per person can be expected to double the noise. Even if you managed to prove conclusively that a programmer could work in 30 sq. ft. without being hopelessly space-bound, you still wouldn't be able to conclude that 30 sq. ft. is adequate space. The noise in a 30 sq. ft. matrix is more than triple the noise in a 100 sq. ft. matrix, which could make the difference between a plague of product defects and none at all. Check the summary; noise really is one of the recurring subjects in Peopleware. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/132952",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/20220/"
]
} |
132,977 | It has been less than a year since I joined my current company. The majority of their sales has come from a single product that has been alive for the last 10 years. However, there is minimal documentation, if any. Not only do the developers in the company struggle with the lack of documentation, but there is also a high amount of turnover, causing everyone to waste time. This is because experienced developers have left the company and there are fewer and fewer people to communicate/brainstorm with. Without getting into too much detail, I have suggested to the previous manager that there needs to be some sort of documentation (at least an architecture document) that outlines the product. I also suggested using JavaDoc and other automatic documentation tools. These suggestions were responded to by slight smiles and statements of the sort "We do not have enough time", "We need short-term improvements right now" and even "The code itself should be the documentation" from the programmers themselves. I have already wasted enough time trying to find out whether what I needed per requirement/bug already existed in this really big code base. I am looking for any suggestions that you might give regarding the need for documentation. Or, rather, whether this is a lost cause for this legacy system or organization. | I feel your pain. I've been in this exact position before. My suggestion is to lead by example. Start a wiki or document and start making notes. Make repeated references to this document you're putting together. If someone asks you a question, make a show of looking at the document for the answer first. If the question isn't there, add it to the section of unanswered questions. As it grows, share it with everyone. Add documentation time to your maintenance tasks. You have to grow this organically. It needs a champion. I'm not aware of any developer-driven argument that will make a company that doesn't care about documentation suddenly flip that switch. Once it's in place, everyone will wonder how they got by without it. Sadly, once you move on, it will probably become obsolete and die. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/132977",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/46560/"
]
} |
132,993 | Can you have Java compiled straight into machine code? I want to do this so I have control over what platforms it's used on, and I don't know C, C++, etc. | It appears that the GNU Compiler for Java can convert Java source code into either Java bytecode or machine code. It can also convert existing Java bytecode into machine code. However, the last news is from 2009, so I'm not sure how current it is or whether it can handle the latest features of the Java language. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/132993",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/44608/"
]
} |
133,015 | When setting a value to a variable inside of a class most of the time we are presented with two options: private string myValue;
public string MyValue
{
get { return myValue; }
set { myValue = value; }
} Is there a convention that determines how we should assign values to variables inside our classes? For example, if I have a method inside the same class, should I assign the value using the property or using the private field? I've seen it done both ways, so I was wondering whether this is purely a stylistic choice or whether performance is a factor (minor, probably). | I would take it a step further and break it into 3 cases. Although there are variations on each, these are the rules I use the majority of the time when programming in C#. In cases 2 and 3, always go through the property accessor (not the backing field). And in case 1, you are saved from even having to make this choice. 1.) Immutable property (passed in to the constructor, or created at construction time). In this case, I use a backing field with a read-only property. I choose this over a private setter, since a private setter does not guarantee immutability. public class Abc
{
private readonly int foo;
public Abc(int fooToUse){
foo = fooToUse;
}
public int Foo { get{ return foo; } }
} 2.) POCO variable. A simple variable that can get/set at any public/private scope. In this case I would just use an automatic property. public class Abc
{
public int Foo {get; set;}
} 3.) ViewModel binding properties. For classes that support INotifyPropertyChanged, I think you need a private, backing field variable. public class Abc : INotifyPropertyChanged
{
private int foo;
public int Foo
{
get { return foo; }
set { foo = value; OnPropertyChanged("Foo"); } // notify with the property name, not the field name
}
} | {
"source": [
"https://softwareengineering.stackexchange.com/questions/133015",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/22218/"
]
} |
133,051 | I have several classes that all inherit from a generic base class. The base class contains a collection of several objects of type T . Each child class needs to be able to calculate interpolated values from the collection of objects, but since the child classes use different types, the calculation varies a tiny bit from class to class. So far I have copy/pasted my code from class to class and made minor modifications to each. But now I am trying to remove the duplicated code and replace it with one generic interpolation method in my base class. However that is proving to be very difficult, and all the solutions I have thought of seem way too complex. I am starting to think the DRY principle does not apply as much in this kind of situation, but that sounds like blasphemy. How much complexity is too much when trying to remove code duplication? EDIT: The best solution I can come up with goes something like this: Base Class: protected T GetInterpolated(int frame)
{
var index = SortedFrames.BinarySearch(frame);
if (index >= 0)
return Data[index];
index = ~index;
if (index == 0)
return Data[index];
if (index >= Data.Count)
return Data[Data.Count - 1];
return GetInterpolatedItem(frame, Data[index - 1], Data[index]);
}
protected abstract T GetInterpolatedItem(int frame, T lower, T upper); Child class A: public IGpsCoordinate GetInterpolatedCoord(int frame)
{
ReadData();
return GetInterpolated(frame);
}
protected override IGpsCoordinate GetInterpolatedItem(int frame, IGpsCoordinate lower, IGpsCoordinate upper)
{
double ratio = GetInterpolationRatio(frame, lower.Frame, upper.Frame);
var x = GetInterpolatedValue(lower.X, upper.X, ratio);
var y = GetInterpolatedValue(lower.Y, upper.Y, ratio);
var z = GetInterpolatedValue(lower.Z, upper.Z, ratio);
return new GpsCoordinate(frame, x, y, z);
} Child class B: public double GetMph(int frame)
{
ReadData();
return GetInterpolated(frame).MilesPerHour;
}
protected override ISpeed GetInterpolatedItem(int frame, ISpeed lower, ISpeed upper)
{
var ratio = GetInterpolationRatio(frame, lower.Frame, upper.Frame);
var mph = GetInterpolatedValue(lower.MilesPerHour, upper.MilesPerHour, ratio);
return new Speed(frame, mph);
} | In a way, you answered your own question with that remark in the last paragraph: "I am starting to think the DRY principle does not apply as much in this kind of situation, but that sounds like blasphemy." Whenever you find that some practice is not really practical for solving your problem, don't try to use that practice religiously (the word blasphemy is something of a warning sign here). Most practices have their whens and whys, and even if they cover 99% of all possible cases, there's still that 1% where you may need a different approach. Specifically, with regard to DRY, I have also found that it is sometimes better to have several pieces of duplicated but simple code than one giant monstrosity that makes you feel sick when you look at it. That being said, the existence of these edge cases should not be used as an excuse for sloppy copy&paste coding or a complete lack of reusable modules. Simply put, if you have no idea how to write code that is both generic and readable for some problem in some language, then it's probably less bad to have some redundancy. Think of whoever has to maintain the code. Would they more easily live with redundancy or with obfuscation? Now some more specific advice about your particular example. You said that these calculations were similar yet slightly different. You might want to try breaking your calculation formula up into smaller subformulas and then have all your slightly different calculations call these helper functions to do the sub-calculations. You'd avoid the situation where every calculation depends on some over-generalized code, and you'd still have some level of reuse. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/133051",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7935/"
]
} |
133,156 | First, some background: we are in the process of moving all of our project teams over to git, and we are laying down guidelines for how the repositories should be organized so that certain branches can also be monitored for continuous integration and automatic deployment to the testing servers. Currently two models are developing:
One heavily influenced by the nvie.com article on successful branching, with the master branch representing the most stable code, a development branch for the bleeding-edge code, and an integration branch for code that is ready for QA testing.
An alternate model in which the master branch represents the bleeding-edge development code, an integration branch holds code that is ready for QA testing, and a production branch holds the stable code that is ready for deployment.
At this point, it is partly a matter of semantics with regard to what the master branch represents, but is doing active development on the master branch actually a good practice, or is it not really that relevant? | The only real defining feature of the master branch is that it's the default for some operations. Also, branch names only have meaning within a specific repository. My master might point to your development, for example. Also, a master branch is not even required, so if there's any confusion about which branch it should be, my advice is usually to leave it out altogether. However, in my opinion, the best way to think of it is as the default for pushing to. Almost any online tutorial your developers read is going to assume that. So, it makes a lot of sense to have master be whatever branch is most often pushed to. Some people think of it as the pristine copy that is untouchable to developers except after the strictest of scrutiny, but using it that way removes a lot of the helpful defaults git provides. If you want that kind of pristine branch, I would put it in a completely separate repository that only some people can write to. Edit: This question is still getting attention after several years. In that time, the "master should be the pristine tested copy" theory has come to dominate, especially when using GitHub. So while git is still a very flexible version control system, and my original answer still has some merit if your needs are somewhat atypical, in general you should today be going with the model people expect, which is to develop in feature branches and open pull requests into master, merging only when the work has been tested and reviewed. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/133156",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2471/"
]
} |
133,285 | I wonder if there is any reason - or if it is just an accident of history - that there are no !> and !< operators in most programming languages. a >= b (a greater than OR equal to b) could be written as !(a < b) (a NOT less than b), which equals a !< b. This question struck me when I was in the middle of coding my own expression tree builder. Most programming languages have a != b operator for !(a=b), so why no !> and !< ? UPDATE:
!< (not less than) is easier to pronounce than >= (greater than or equal to)
!< (not less than) is shorter to type than >= (greater than or equal to)
!< (not less than) is easier to understand* than >= (greater than or equal to)
*because OR is a binary operator, so your brain needs to handle two operands (greater, equal), while NOT is a unary operator, so your brain needs to handle only one operand (less than). | The D programming language and DMC's extension to C and C++ did support these operators (all 14 combinations of them), but interestingly, D is going to deprecate them, mainly because of the question: what exactly is a !< b ? It is a>=b || isNaN(a) || isNaN(b). !< is not the same as >=, because NaN !< NaN is true while NaN >= NaN is false. IEEE 754 is hard to master, so using a !< b will just cause confusion over NaN handling; you can search for such operators in Phobos (D's standard library), and quite a number of the uses have comments beside them to remind readers that NaN is involved. Therefore, few people will use these operators even when they exist, as in D, and one has to define 8 more tokens for these seldom-used operators, which complicates the compiler for little benefit. Without those operators, one can still use the equivalent !(a < b), or, if one likes to be explicit, a >= b || isNaN(a) || isNaN(b), and these are easier to read. Besides, the relations (≮, ≯, ≰, ≱) are seldom seen in basic math, unlike != (≠) or >= (≥), so they are hard for many people to understand. These are probably also the reasons why most languages do not support them. (A short C# illustration of the NaN point follows this entry.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/133285",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/27566/"
]
} |
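The NaN point in the answer to 133,285 above is easy to reproduce in C#, which follows the same IEEE 754 comparison rules; this tiny demo is illustrative only.
using System;

class NotLessThanDemo
{
    static void Main()
    {
        double a = double.NaN, b = 1.0;
        Console.WriteLine(a >= b);    // False: every ordered comparison involving NaN is false
        Console.WriteLine(!(a < b));  // True: a < b is false, so its negation is true
    }
}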
133,302 | I was just watching the "Going Native 2012" streams and I noticed the discussion about std::shared_ptr. I was a bit surprised to hear Bjarne's somewhat negative view of std::shared_ptr and his comment that it should be used as a "last resort" when an object's lifetime is uncertain (which, according to him, should infrequently be the case). Would anyone care to explain this in a bit more depth? How can we program without std::shared_ptr and still manage object lifetimes in a safe way? | If you can avoid shared ownership, then your application will be simpler and easier to understand, and hence less susceptible to bugs introduced during maintenance. Complex or unclear ownership models tend to lead to hard-to-follow couplings between different parts of the application through shared state that is not easy to track. Given this, it is preferable to use objects with automatic storage duration and to have "value" sub-objects. Failing this, unique_ptr may be a good alternative, with shared_ptr being - if not a last resort - some way down the list of desirable tools. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/133302",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6919/"
]
} |
133,381 | Most programming languages appear to be designed not to allow one to declare an identifier that starts with a number. I was just curious to know the reason. I have already searched the web, but couldn't find a satisfactory explanation. | In C/C++, a number followed by a letter is considered to be a numeric constant, and the string that follows qualifies the type of the constant. So for example (these are VC++, not sure how standard they are):
0 - signed integer
0l - signed long integer
0u - unsigned integer
0i64 - 64-bit signed integer
So a) it is easier for the lexer, as Daniel said, but also b) it makes an explicit distinction, since 0y might be a variable but 0u would never be. Plus, other qualifiers like "i64" were added way later than "l" or "u", and they want to keep the option open of adding more if needed. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/133381",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6873/"
]
} |
133,404 | In object-oriented programming, there is of course no exact rule on the maximum length of a method , but I still found these two quotes somewhat contradicting each other, so I would like to hear what you think. In Clean Code: A Handbook of Agile Software Craftsmanship , Robert Martin says: The first rule of functions is that they should be small. The second
rule of functions is that they should be smaller than that. Functions
should not be 100 lines long. Functions should hardly ever be 20 lines
long. And he gives an example of Java code he saw from Kent Beck: Every function in his program was just two, or three, or four lines
long. Each was transparently obvious. Each told a story. And each led
you to the next in a compelling order. That’s how short your functions
should be! This sounds great, but on the other hand, in Code Complete , Steve McConnell says something very different: The routine should be allowed to grow organically up to 100-200 lines,
decades of evidence say that routines of such length are no more error-prone than shorter routines. And he gives a reference to a study that says routines 65 lines or longer are cheaper to develop. So while there are diverging opinions about the matter, is there a best practice for function length that you follow? | Functions should normally be short; between 5 and 15 lines is my personal "rule of thumb" when coding in Java or C#. This is a good size for several reasons:
It fits easily on your screen without scrolling.
It's about the conceptual size that you can hold in your head.
It's meaningful enough to require a function in its own right (as a standalone, meaningful chunk of logic).
A function smaller than 5 lines is a hint that you are perhaps breaking the code up too much (which makes it harder to read / understand if you need to navigate between functions). Either that, or you're forgetting your special cases / error handling! But I don't think it is helpful to set an absolute rule, as there will always be valid exceptions / reasons to diverge from the rule:
A one-line accessor function that performs a type cast is clearly acceptable in some situations.
There are some very short but useful functions (e.g. swap as mentioned by user unknown) that clearly need less than 5 lines. Not a big deal, a few 3-line functions don't do any harm to your code base.
A 100-line function that is a single large switch statement might be acceptable if it is extremely clear what is being done. This code can be conceptually very simple even if it requires a lot of lines to describe the different cases. Sometimes it is suggested that this should be refactored into separate classes and implemented using inheritance / polymorphism, but IMHO this is taking OOP too far - I'd rather have only one big 40-way switch statement than 40 new classes to deal with, in addition to a 40-way switch statement to create them.
A complex function might have a lot of state variables that would get very messy if passed between different functions as parameters. In this case you could reasonably make an argument that the code is simpler and easier to follow if you keep everything in a single large function (although as Mark rightly points out this could also be a candidate for turning into a class to encapsulate both the logic and state).
Sometimes smaller or larger functions have performance advantages (perhaps because of inlining or JIT reasons, as Frank mentions). This is highly implementation dependent, but it can make a difference - make sure you benchmark!
So basically, use common sense: stick to small function sizes in most instances, but don't be dogmatic about it if you have a genuinely good reason to make an unusually big function. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/133404",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/46851/"
]
} |
133,506 | I'm self-learning iOS development through the iTunes U CS193p course, and I often find myself stuck. I've been trying to get unstuck myself, but it might take me hours and hours to figure out what I'm doing wrong, be it missing a method or not really getting a whole concept like delegation. I'm worried that I might be wasting too much time, and I'd be better off going to Stack Overflow shortly after I get stuck so I can move on. In your experience, does quickly asking on Stack Overflow hamper the learning process or improve it? | When I am working with new developers, I encourage them to come ask questions after five or ten minutes where they are not making progress. That has two benefits: the first is that they can get help without too much time spent staring at a problem, but they only ask when they are not getting somewhere. If they are learning - even on something that isn't ultimately the answer - they are much more likely to usefully retain that information. The second is that after about that much time they have to explain the problem to someone else. That solves a huge proportion of problems, because going through it end-to-end in order means you can spot the thing that you missed in your earlier work. Since it sounds like you are doing this alone, try turning to a stuffed toy, or the clock, or the wall, and asking that about the problem. Explain it as you would to a person, and see if that fixes things. If it doesn't, and you are not making progress, ask someone. Spending more than five or ten minutes stuck is a waste of your time - unless you go on to do something else, then come back to the problem with a fresh mind. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/133506",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/46295/"
]
} |
133,560 | Reading Eric Lippert's article on exceptions was definitely an eye opener on how I should approach exceptions, both as the producer and as the consumer. However, I'm still struggling to define a guideline regarding how to avoid throwing vexing exceptions. Specifically: Suppose you have a Save method that can fail because a) Somebody else modified the record before you , or b) The value you're trying to create already exists . These conditions are to be expected and not exceptional, so instead of throwing an exception you decide to create a Try version of your method, TrySave, which returns a boolean indicating if the save succeeded. But if it fails, how will the consumer know what was the problem? Or would it be best to return an enum indicating the result, kind of Ok/RecordAlreadyModified/ValueAlreadyExists? With integer.TryParse this problem doesn't exist, since there's only one reason the method can fail. Is the previous example really a vexing situation? Or would throwing an exception in this case be the preferred way? I know that's how it's done in most libraries and frameworks, including the Entity framework. How do you decide when to create a Try version of your method vs. providing some way to test beforehand if the method will work or not? I'm currently following these guidelines: If there is the chance of a race condition, then create a Try version. This prevents the need for the consumer to catch an exogenous exception. For example, in the Save method described before. If the method to test the condition pretty much would do all that the original method does, then create a Try version. For example, integer.TryParse(). In any other case, create a method to test the condition. | Suppose you have a Save method that can fail because a) Somebody else modified the record before you, or b) The value you're trying to create already exists. These conditions are to be expected and not exceptional, so instead of throwing an exception you decide to create a Try version of your method, TrySave, which returns a boolean indicating if the save succeeded. But if it fails, how will the consumer know what was the problem? Good question. The first question that comes to my mind is: if the data is already there then in what sense did the save fail ? It sure sounds like it succeeded to me. But let's assume for the sake of argument that you really do have many different reasons why an operation can fail. The second question that comes to my mind is: is the information you wish to return to the user actionable ? That is, are they going to make some decision based on that information? When the "check engine" light comes on, I open up the hood, verify that there is an engine in my car that is not on fire, and take it to the garage. Of course at the garage they have all kinds of special purpose diagnostic equipment that tells them why the check engine light is on, but from my perspective, the warning system is well designed. I do not care whether the problem is because the oxygen sensor is recording an abnormal level of oxygen in the combustion chamber, or because the idle speed detector is unplugged, or whatever. I'm going to take the same action, namely, let someone else figure this out . Does the caller care why the save failed? Are they going to do anything about it, other than either give up or try again? Let's assume for the sake of argument that the caller really is going to take different actions depending on the reason why the operation failed. 
The third question that comes to mind is: is the failure mode exceptional ? I think you might be confusing possible with unexceptional . I would think of two users attempting to modify the same record at the same time as an exceptional-but-possible situation, not a common situation. Let's assume for the sake of argument that it is unexceptional. The fourth question that comes to mind is: is there a way to reliably detect the bad situation ahead of time? If the bad situation is in my "exogenous" bucket, then, no. There's no way to reliably say "did another user modify this record?" because they might modify it after you ask the question . The answer is stale as soon as it is produced. The fifth question that comes to mind is: is there a way to design the API so that the bad situation can be prevented? For example, you could make the "save" operation require two steps. Step one: acquire a lock on the record being modified. That operation either succeeds or fails and so can return a Boolean. The caller can then have a policy about how to deal with failure: wait a while and try again, give up, whatever. Step two: once the lock is acquired, do the save and release the lock. Now the save always succeeds and so there is no need to worry about any kind of error handling. If the save fails, that is truly exceptional. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/133560",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/46928/"
]
} |
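A minimal C# sketch of the two-step shape described at the end of the answer to 133,560 above. The names (IRecordStore, TryAcquireLock, SaveAndReleaseLock) are invented for illustration; the point is only that the failure-prone step returns a Boolean the caller can act on, while the save itself is expected to succeed and throws only for genuinely exceptional failures.
using System;

public interface IRecordStore
{
    // Step one: may fail for expected reasons, so it reports success as a Boolean.
    bool TryAcquireLock(Guid recordId);

    // Step two: called only while the lock is held; failure here is exceptional.
    void SaveAndReleaseLock(Guid recordId, string payload);
}

public class RecordEditor
{
    private readonly IRecordStore store;

    public RecordEditor(IRecordStore store)
    {
        this.store = store;
    }

    public bool TrySave(Guid recordId, string payload)
    {
        // The caller's policy lives here: give up, retry later, tell the user, etc.
        if (!store.TryAcquireLock(recordId))
            return false;

        store.SaveAndReleaseLock(recordId, payload);
        return true;
    }
}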